AN ALGEBRAIC APPROACH TO TRANSLATING JAPANESE

Valentin Boboc (valentinboboc@icloud.com)
School of Mathematics, The University of Manchester, Alan Turing Building, Oxford Road, M13 9PL, Manchester, United Kingdom

10 Mar 2023

Abstract. We use Lambek's pregroups and the framework of compositional distributional models of language ("DisCoCat") to study translations from Japanese to English as pairs of functors. Adding decorations to pregroups, we show how to handle word order changes between languages.
1. Introduction
Language is traditionally viewed as possessing both an empirical aspect (one learns language by practising language) and a compositional aspect (the view that the meaning of a complex phrase is fully determined by its structure and the meanings of its constituent parts).
In order to efficiently exploit the compositional nature of languages, a popular way of modelling natural languages is a categorical compositional distributional model, abbreviated "DisCoCat" [CSC10]. Languages are modelled as functors from a category that interprets grammar ("compositional") to a category that interprets semantics ("distributional").

The compositional part is responsible for evaluating whether phrases or sentences are well formed by calculating the overall grammatical type of a phrase from the grammatical types of its individual parts. There are several algebraic methods for modelling the grammar of a natural language. In the present article we choose the well-established model of pregroup grammars. Pregroups were introduced in [Lam97] to replace the algebra of residuated monoids in order to model grammatical types, their juxtapositions, and reductions. Pregroup calculus has been applied to formally represent the syntax of several natural languages, such as French [BL01a], German [LP04], Persian [Sad07], Arabic [BL01b], Japanese [Car02], and Latin [CL05].
The distributional part assigns meanings to individual words by associating to them, for example, statistical co-occurrence vectors [ML08]. The "DisCoCat" model is thus a way of interpreting compositions of meanings via grammatical structure.
In this article we study the notion of translating between compositional distributional models of language by analysing translation from Japanese into English. On the compositional side, a translation is a strong monoidal functor. It is easy to demonstrate that such a functor is too rigid to handle the translation of even simple phrases between languages which have different word order. We show that one can keep using the gadget of monoidal functors as long as the underlying pregroup grammars are decorated with additional structure.
We begin by introducing basic notions about the compact closed categories we work with, namely pregroups and finitely generated vector spaces and define our notion of translation functor. Next, we give an introduction to basic Japanese grammar and the pregroup structure we use to model it. Finally, we introduce notions of pregroup decorations and use them to give a structured framework for translating Japanese sentences.
2. Theoretical background
2.1. Compact closed structures. The key to the "DisCoCat" model is that both the category of pregroups and the category of finitely generated vector spaces are compact closed categories. This allows for compositional characteristics of grammar to be incorporated into the distributional spaces of meaning.
For completeness, we provide here a definition of compact closure. The reader is encouraged to consult [KL80] for a more complete and technical reference.
Definition 1. A compact closed category is a category C together with a bifunctor − ⊗ − : C × C → C, called the tensor product, which is associative up to natural isomorphism and possesses a two-sided identity element I, such that each object A ∈ C has a right dual A^r and a left dual A^ℓ with the following morphisms:

ε^r_A : A ⊗ A^r → I,  η^r_A : I → A^r ⊗ A,
ε^ℓ_A : A^ℓ ⊗ A → I,  η^ℓ_A : I → A ⊗ A^ℓ.
Moreover, the ε and η maps satisfy the "yanking" conditions:

(1_A ⊗ ε^ℓ_A) ∘ (η^ℓ_A ⊗ 1_A) = 1_A,
(ε^r_A ⊗ 1_A) ∘ (1_A ⊗ η^r_A) = 1_A,
(ε^ℓ_A ⊗ 1_{A^ℓ}) ∘ (1_{A^ℓ} ⊗ η^ℓ_A) = 1_{A^ℓ},
(1_{A^r} ⊗ ε^r_A) ∘ (η^r_A ⊗ 1_{A^r}) = 1_{A^r}.
The upshot of compact closure is that we want to have elements which "cancel each other out" and we can decompose the identity into a product.
2.2. Recalling pregroups.

Definition 2. A pregroup is a tuple (P, ·, 1, (−)^ℓ, (−)^r, ≤) where (P, ·, 1, ≤) is a partially ordered monoid and the unary operations (−)^ℓ, (−)^r (the left and the right dual) satisfy, for all x ∈ P, the following relations:

x · x^r ≤ 1 ≤ x^r · x,  x^ℓ · x ≤ 1 ≤ x · x^ℓ.
The operation sign · is omitted unless it is relevant. It is immediate to check that the following relations hold in every pregroup:

1^ℓ = 1 = 1^r,  (x^ℓ)^r = x = (x^r)^ℓ,
(xy)^ℓ = y^ℓ x^ℓ and (xy)^r = y^r x^r,
if x ≤ y then y^ℓ ≤ x^ℓ and y^r ≤ x^r.
We model the grammar of a natural language by freely generating a pregroup from a set of grammatical types. Each word in the dictionary is assigned an element of the pregroup which corresponds to its linguistic function, e.g. noun, verb, adjective, etc. A string of words is interpreted by multiplying the elements assigned to the constituent parts in syntactic order. If a string of words satisfies the relation w_1 w_2 ⋯ w_n ≤ s, we say that the string reduces to the type s.
Example 1. Suppose there are two grammatical types: noun n and sentence s. Grammar is modelled as the free pregroup PGrp({n, s}). Consider the sentence "Pigeons eat bread." We assign the type n to "pigeons" and "bread" and the type n^r s n^ℓ to the transitive verb "eat." The sentence overall has type n(n^r s n^ℓ)n, and the following reductions hold:

n(n^r s n^ℓ)n = (n n^r) s (n^ℓ n) ≤ s (n^ℓ n) ≤ s.

In this case we say that "Pigeons eat bread" is a well-formed sentence, since in the pregroup PGrp({n, s}) the phrase reduces to the correct type.
The two individual reductions could have been performed in a different order.
Lambek's Switching Lemma [Lam97, Proposition 2] tells us that in any computation performed in a freely generated pregroup, we may assume without loss of generality that all contractions precede all expansions.
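To make the contraction calculus concrete, the following is a minimal sketch (ours, not from the original paper) of a contraction-only reducer for a free pregroup. Types are encoded as (base, adjoint order) pairs: order 0 is the bare type, -1 a left adjoint, +1 a right adjoint, so both contractions x·x^r ≤ 1 and x^ℓ·x ≤ 1 become the single rule "an adjacent pair with equal bases and orders (k, k+1) cancels." By the Switching Lemma, contractions alone suffice to test reducibility; the greedy strategy below is enough for the simple examples in this article, though it is not a complete parser.

```python
def contract_once(types):
    """Apply one contraction: an adjacent pair with equal bases and
    adjoint orders (k, k+1) cancels (covers both x x^r and x^l x)."""
    for i in range(len(types) - 1):
        (b1, a1), (b2, a2) = types[i], types[i + 1]
        if b1 == b2 and a2 == a1 + 1:
            return types[:i] + types[i + 2:]
    return None

def reduces_to(types, target):
    """Greedily contract until stuck; check the resulting type string."""
    while True:
        nxt = contract_once(types)
        if nxt is None:
            return types == target
        types = nxt

# "Pigeons eat bread": n . (n^r s n^l) . n
sentence = [("n", 0), ("n", 1), ("s", 0), ("n", -1), ("n", 0)]
print(reduces_to(sentence, [("s", 0)]))  # True: the string reduces to s
```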
A pregroup can be viewed as a compact closed category. The objects of the category are the elements of the pregroup. There is an arrow x → y if and only if x ≤ y, and the tensor product is given by the pregroup operation: x ⊗ y = xy. The morphisms ε^r, ε^ℓ, η^r, η^ℓ are defined in the obvious way. In terms of the ε and η maps, the reductions in this example can be represented as:

(ε^r_n ⊗ 1_s ⊗ ε^ℓ_n)(n ⊗ (n^r ⊗ s ⊗ n^ℓ) ⊗ n) → s.
2.3. Meaning space. We encode the semantic structure of a natural language into the category of finitely generated vector spaces, which we denote by FVect. The arrows are linear transformations, and there is a natural monoidal structure given by the linear algebraic tensor product with unit ℝ, which also happens to be symmetric: V ⊗ W ≅ W ⊗ V. This implies that V^ℓ ≅ V^r ≅ V*, where the latter denotes the dual vector space.

Fixing a basis {v_i} for the vector space V, we moreover get V ≅ V*, and the structure morphisms of compact closure are given by

ε_V = ε^r_V = ε^ℓ_V : V* ⊗ V → ℝ,  Σ_{i,j} a_{ij} v_i ⊗ v_j ↦ Σ_{i,j} a_{ij} ⟨v_i | v_j⟩,
η_V = η^r_V = η^ℓ_V : ℝ → V ⊗ V*,  1 ↦ Σ_i v_i ⊗ v_i,

extended linearly.
If we denote by P both the pregroup and the corresponding category, the bridge between grammar and semantics is given by a strong monoidal functor F : P → FVect, which we call a functorial language model. The functor assigns vector spaces to atomic types: F(1) = I, F(n) = N (the vector space of nouns), F(s) = S (the vector space of sentences), etc. For words in P, monoidality tells us that F(x ⊗ y) = F(x) ⊗ F(y). The compact closure is also preserved: F(x^ℓ) = F(x^r) = F(x)*. For example, we can interpret the transitive verb "eat" with type n^r s n^ℓ as a vector in

F(n^r ⊗ s ⊗ n^ℓ) = F(n^r) ⊗ F(s) ⊗ F(n^ℓ) = F(n)* ⊗ F(s) ⊗ F(n)* = N ⊗ S ⊗ N.

Pregroup reductions in P can be interpreted as semantic reductions in FVect using the corresponding ε and η maps. The reductions associated to a transitive verb are then given by

F(ε^r_n ⊗ 1_s ⊗ ε^ℓ_n) = F(ε^r_n) ⊗ F(1_s) ⊗ F(ε^ℓ_n) = ε_N ⊗ 1_S ⊗ ε_N.

The meaning of a sentence or phrase is derived by interpreting the pregroup reduction as the corresponding semantic reduction of the tensor product of the distributional meanings of the individual words in the phrase. The previous example "Pigeons eat bread" is interpreted as

F(ε^r_n ⊗ 1_s ⊗ ε^ℓ_n)(Pigeons ⊗ eat ⊗ bread).
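As a toy illustration (our own, with arbitrary placeholder dimensions and word vectors), the semantic reduction ε_N ⊗ 1_S ⊗ ε_N amounts to a tensor contraction:

```python
import numpy as np

dim_n, dim_s = 4, 3                      # toy dimensions for N and S
rng = np.random.default_rng(0)

pigeons = rng.random(dim_n)              # a vector in N
bread = rng.random(dim_n)                # a vector in N
eat = rng.random((dim_n, dim_s, dim_n))  # a tensor in N (x) S (x) N

# F(e^r_n (x) 1_s (x) e^l_n)(pigeons (x) eat (x) bread):
# contract both N indices of the verb against subject and object.
meaning = np.einsum("i,isj,j->s", pigeons, eat, bread)
print(meaning.shape)  # (3,) -- a vector in the sentence space S
```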
2.4. Translating between functorial language models. The authors of [BLMT18] formalised the notion of a translation between functorial language models. We illustrate this construction with an example on translating simple noun phrases and the problems one may encounter.

Definition 3. Let (C, ⊗, 1_C) and (D, ⊙, 1_D) be monoidal categories. A monoidal functor F : C → D is a functor equipped with a natural isomorphism Φ_{x,y} : F(x) ⊙ F(y) → F(x ⊗ y) for every pair of objects x, y ∈ C and an isomorphism φ : 1_D → F(1_C) such that for any triple of objects x, y, z ∈ C, the following diagram commutes:

(F(x) ⊙ F(y)) ⊙ F(z) --Φ_{x,y} ⊙ 1_{F(z)}--> F(x ⊗ y) ⊙ F(z) --Φ_{x⊗y,z}--> F((x ⊗ y) ⊗ z)
        |                                                                        |
        v                                                                        v
F(x) ⊙ (F(y) ⊙ F(z)) --1_{F(x)} ⊙ Φ_{y,z}--> F(x) ⊙ F(y ⊗ z) --Φ_{x,y⊗z}--> F(x ⊗ (y ⊗ z))
where the vertical arrows apply the associativity isomorphisms in their respective categories. Moreover, for every object x ∈ C, the following two squares commute:

1_D ⊙ F(x) -----------> F(x)        F(x) ⊙ 1_D -----------> F(x)
   | φ ⊙ 1_{F(x)}         ^              | 1_{F(x)} ⊙ φ        ^
   v                      |              v                     |
F(1_C) ⊙ F(x) --> F(1_C ⊗ x)        F(x) ⊙ F(1_C) --> F(x ⊗ 1_C)
Definition 4. Let (F, Φ, φ) and (G, Ψ, ψ) be monoidal functors between the monoidal categories C and D. A monoidal natural transformation α : F ⇒ G is a natural transformation for which the following diagram commutes:

F(x) ⊙ F(y) --α_x ⊙ α_y--> G(x) ⊙ G(y)
   | Φ_{x,y}                  | Ψ_{x,y}
   v                          v
F(x ⊗ y) ----α_{x⊗y}----> G(x ⊗ y)

together with the unit condition α_{1_C} ∘ φ = ψ : 1_D → G(1_C).
Definition 5. Let A : P → FVect and B : Q → FVect be two functorial language models. A translation from A to B is a tuple (T, α), where T : P → Q is a monoidal functor and α : A ⇒ B ∘ T is a monoidal natural transformation.
Example 2. We attempt to translate simple phrases of the type adjective + noun from Japanese to English. We work on a restricted model. Let J = PGrp({s_J, n_J}) be the free pregroup (or category) generated by the sentence and noun types in Japanese, and let E = PGrp({s_E, n_E}) be the free pregroup generated by the sentence and noun types in English.
The functorial language models are denoted by J : J → FVect and E : E → FVect, respectively. The semantic assignment is straightforward: J(n_J) = N_J and E(n_E) = N_E; adjectives carry the composite types a_J = n_J n_J^ℓ and a_E = n_E n_E^ℓ, so J(a_J) = N_J ⊗ N_J and E(a_E) = N_E ⊗ N_E.
The translation consists of the monoidal functor T : J → E, which sends s_J ↦ s_E and n_J ↦ n_E. Automatically, the type reduction is preserved in the corresponding languages, i.e. T((n_J n_J^ℓ)n_J) = (n_E n_E^ℓ)n_E → n_E. Due to monoidality, it suffices to define the components α_{n_J}, α_{s_J} of the natural transformation α : J ⇒ E ∘ T in order to parse semantics.
Additionally, the natural transformation α must commute with the monoidal functor T. Pictorially, we have a commutative square:

(N_J ⊗ N_J) ⊗ N_J --J(ε^ℓ_{n_J} ⊗ 1_{n_J})--> N_J
        | α_{(n_J n_J^ℓ)n_J}                     | α_{n_J}
        v                                        v
(N_E ⊗ N_E) ⊗ N_E --E(ε^ℓ_{n_E} ⊗ 1_{n_E})--> N_E
Consider the concrete words red ∈ N_E ⊗ N_E, cat ∈ N_E, akai ∈ N_J ⊗ N_J, and neko ∈ N_J. The diagram says that first using Japanese grammar rules to reduce akai ⊗ neko to "akai neko" and then translating to "red cat" is the same as first translating component-wise akai ⊗ neko to red ⊗ cat and then using English grammar rules to reduce to "red cat."
Since there is no discrepancy in word order, this example of phrasal translation works in the desired way. If we instead wanted to translate the phrase "akai neko" from Japanese into "pisică roșie" in Romanian, we would encounter some difficulties. The latter is a noun + adjective phrase, as the natural word order in Romanian for such phrases is the opposite of the word order in Japanese.
The reduction rule in Romanian is given by

n_R(n_R^r n_R) → n_R.
Suppose there exists a monoidal functor T′ : J → R that takes Japanese grammar types to Romanian grammar types. Then we want to preserve the reduction rules, i.e.

T′((n_J n_J^ℓ)n_J) = n_R(n_R^r n_R).

We thus obtain the condition T′(n_J^ℓ) = n_R^r. However, left and right adjoints must be preserved by a strong monoidal functor. Hence this condition cannot be fulfilled.
Section 4 introduces techniques that can help us overcome such problems with word order changes.
3. Japanese crash course

3.1. Generalities. Japanese is a synthetic and agglutinative language. The usual word order is subject-object-verb (SOV) with topic-comment sentence structure. There are no definite/indefinite articles, and nouns possess neither grammatical gender nor number. Verbs and adjectives are conjugated for tense, voice, and aspect, but not for person or number. Particles are attached to words to identify their grammatical role. We write sentences natively and employ the Nihon-siki romanisation system.
The sentence "The cat eats fish" can be represented in two different but closely related ways.
(1) 猫 neko cat が ga NOM 魚 sakana fish を wo ACC 食べる taberu eat
(2) 猫 neko cat は ha TOP 魚 sakana fish を wo ACC 食べる taberu eat
Note the use of the subject particle "ga," the topic particle "ha," and the direct object particle "wo." Remark that Japanese distinguishes between topic and subject. The topic generally needs to be explicitly introduced at the beginning of a discourse, but as the discourse carries on, the topic need not be the grammatical subject of every sentence. Both sentences translate into English as "The cat eats fish," or "Cats eat fish." However, a more pertinent interpretation of the second sentence is "As for the cat/Speaking of the cat, it eats fish."
Another important aspect of word order in Japanese is head finality. Phrases can be broadly described as consisting of a head and a modifier. English is generally a head-initial language. Consider for example the phrases: "to school," "in England," and "red cat." The word that gets modified tends to come before the modifiers, the main exception being that nouns follow the adjectives that modify them. In contrast, Japanese is a head-final language par excellence. Our example phrases become
(3) 学校 gakkō school へ he to
(4) イギリス igirisu England に ni in
(5) 赤い akai red 猫 neko cat
Head finality is also encountered in the case of relative clauses, which usually occur before the part of speech they modify. This phenomenon is demonstrated by the following pair of phrases.
(6) 女 onna woman が ga NOM 赤い akai red ワンピース wanpîsu dress を wo ACC 着た kita wore
'The woman wore a red dress.'

(7) 赤い akai red ワンピース wanpîsu dress を wo ACC 着た kita wore 女 onna woman
'The woman, who wore a red dress.'

This is a prime example of a structure where the word order is changed during translation. The following section will develop the algebraic machinery to interpret such translations.
Subjects are habitually dropped when they are clear from context, and personal pronouns are used sparingly. We conclude this section with an example, which demonstrates how a very common reflexive/personal pronoun "zibun" ("oneself") can lead to ambiguous interpretations. "Zibun" is often used as a way for the speaker to refer either to themselves or to their interlocutor. The sentence
(8) 自分 zibun oneself が ga NOM 嘘つき usotuki liar か ka QUESTION

can be translated as either "Am I a liar?" or "Are you a liar?" in the absence of further context.
3.2. Compositional model. Define J = PGrp({π, n, s_1, s_2, s̄, s, o_1, …}) to be the pregroup of grammar types associated to Japanese. Following [Car02], with slight modifications, we define the following atomic types:

- π: pronoun
- n: noun
- s_1, s_2: imperfective/perfective sentence
- s̄: topicalised sentence
- s: sentence
- o_1: nominative case
- o_2: accusative case
- o_3: dative case
- o_4: genitive case
- o_5: locative case
- o_6: lative case
- o_7: ablative case
- etc.

We also impose the following reductions in J:

s_i → s,  s̄ → s,  n → π.
We now discuss how to assign types to various parts of speech. Revisiting the example sentence "neko ga sakana wo taberu" ("the cat eats fish"), the words "neko" and "sakana" are both nouns and thus have type n. The subject particle "ga" has type π^r o_1, the direct object particle "wo" has type n^r o_2, and the transitive verb "taberu" then has type o_2^r o_1^r s_1. The sentence then has type n(π^r o_1)n(n^r o_2)(o_2^r o_1^r s_1), and we can derive the following type reductions:

n(π^r o_1)n(n^r o_2)(o_2^r o_1^r s_1) → (nπ^r)o_1(nn^r)o_2(o_2^r o_1^r s_1) → (ππ^r)o_1 o_2(o_2^r o_1^r s_1) → o_1(o_2 o_2^r)o_1^r s_1 → (o_1 o_1^r)s_1 → s_1 → s,

to see that the sentence is well-formed and reduces to the correct grammatical type. Here we used the reductions n → π and s_1 → s together with different applications of the contraction morphism ε. Graphically, this type reduction is drawn by joining each contracted pair with a lower bracket, one bracket per application of a contraction morphism ε, underneath the type string

neko (n) · ga (π^r o_1) · sakana (n) · wo (n^r o_2) · taberu (o_2^r o_1^r s_1).
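The induced reductions n → π and s_1 → s can be grafted onto the earlier contraction sketch by checking contractions up to the imposed order. This is again our own illustration, replacing the earlier contract_once and covering only the adjoint orders -1, 0, 1 that appear in these examples:

```python
ORDER = {"n": "π", "s1": "s", "s2": "s"}   # the imposed relations x ≤ y

def leq(x, y):
    return x == y or ORDER.get(x) == y

def contract_once(types):
    for i in range(len(types) - 1):
        (b1, a1), (b2, a2) = types[i], types[i + 1]
        if a2 == a1 + 1:
            # x y^r cancels at orders (0, 1) when x ≤ y;
            # x^l y cancels at orders (-1, 0) when y ≤ x.
            if (a1 == 0 and leq(b1, b2)) or (a1 == -1 and leq(b2, b1)):
                return types[:i] + types[i + 2:]
    return None

# "neko ga sakana wo taberu": n (π^r o1) n (n^r o2) (o2^r o1^r s1)
t = [("n", 0), ("π", 1), ("o1", 0), ("n", 0), ("n", 1), ("o2", 0),
     ("o2", 1), ("o1", 1), ("s1", 0)]
while (nxt := contract_once(t)) is not None:
    t = nxt
print(t)  # [('s1', 0)], and s1 ≤ s, so the sentence reduces to s
```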
Since word order is flexible, the same sentence could have been written as "sakana wo neko ga taberu," and then "taberu" would have been assigned the type o_1^r o_2^r s_1. As we want to take advantage of the Switching Lemma while performing computations, we want to restrict ourselves to working with freely generated pregroups. Situations where certain words or verbs can be assigned different types are generally handled by adding metarules. Informally, a metarule stipulates that if a grammar contains rules that match a specified pattern, then it also contains rules that match some other specified pattern. In our concrete example, we could impose the following metarule.
Metarule 1. Any transitive verb that has type o_1^r o_2^r s_i also has type o_2^r o_1^r s_i.
Moving away from transitive verbs, the ablative particle "kara" has type π^r o_7 and the lative particle "he" has type π^r o_6. In the following example, the verb "untensita" has type o_6^r o_7^r s_2.

(9) 家 ie house から kara ABL 駅 eki station へ he LAT 運転した untensita drove
'(I) drove from home to the train station.'
Causative passive verbs take a subject and an indirect object marked with the dative particle "ni" of type π^r o_3. For instance, the verb "yomaseta" ("x made y read") has type o_2^r o_3^r o_1^r s_2.

(10) 先生 sensei teacher が ga NOM 私 watasi I に ni DAT 本 hon book を wo ACC 読ませた yomaseta read-CAUSE-PAS
'The teacher made me read the book.'
The genitive particle "no" has type π^r o_4, together with a metarule stating that type o_4 is equivalent to type nn^ℓ. The possessor is always on the left in a genitive construction. The topic particle "ha" is distinguished from the subject particle "ga" and has type π^r s̄ s^ℓ, i.e. "ha" requires a topic on the left and a sentence about the topic on the right. For a topicalised sentence of the form pronoun + "no" + noun + "ha" + noun + "wo" + verb, the type reduction goes as follows:

π(π^r o_4)n(π^r s̄ s^ℓ)n(π^r o_2)(o_2^r s_1)
→ (ππ^r)o_4 n(π^r s̄ s^ℓ)(nπ^r)(o_2 o_2^r)s  [associativity, s_1 → s]
→ (nn^ℓ)n(π^r s̄ s^ℓ)(nπ^r)s  [contractions + genitive metarule]
→ n(n^ℓ n)(π^r s̄ s^ℓ)(ππ^r)s  [associativity, n → π]
→ (nπ^r)s̄(s^ℓ s)  [contractions, associativity]
→ s̄  [n → π, contractions]
→ s.
4. Translation and decorated pregroups

4.1. Decorated pregroups. As Example 2 shows, our initial machinery is not suited to translating phrases between languages with different word orders. The morphism of pregroups (or monoidal functor) T : P → Q that transfers information from the source language to the target language happens to be too rigid. We decorate pregroups with additional structures so that we can have more control over the monoid's operation. To this end, we define anti-homomorphisms for the purpose of inverting word order, and pregroups with braces and β-pregroups to get more refined control over associativity.
Definition 6. An anti-homomorphism of monoids is a map Φ : P → Q such that for all elements x, y ∈ P we have Φ(xy) = Φ(y)Φ(x).
Definition 7. Let (P, ·) be a monoid. The opposite monoid (P^op, ∗) is the monoid which has the same elements as P and whose operation is given by x ∗ y = y · x for all x, y ∈ P^op. It is elementary to observe that (P, ·) ≅ (P^op, ∗).
In light of this, an anti-homomorphism can be viewed as a morphism from the opposite monoid, Φ : P^op → Q. Additionally, an anti-homomorphism of pregroups takes left adjoints to right adjoints and vice versa.
Example 3. In Example 2, the problem of translating "adjective + noun" phrases from Japanese into Romanian can be solved by setting the translation functor to be an anti-homomorphism that sends n_J ↦ n_R. Then the functor T preserves the desired reductions:

T((n_J n_J^ℓ)n_J) = T(n_J) T(n_J^ℓ) T(n_J) = n_R(n_R^r n_R) → n_R.
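In the (base, adjoint order) encoding used in our earlier sketches, this anti-homomorphism is simply "reverse the string and negate the adjoint orders," followed by a renaming of atomic types; the renaming table below is our assumption for this one example.

```python
RENAME = {"nJ": "nR"}   # atomic-type renaming, assumed for this example

def anti_hom(types):
    """Reverse the word and swap left/right adjoints (negate the order)."""
    return [(RENAME.get(b, b), -a) for (b, a) in reversed(types)]

# (nJ nJ^l) nJ  ->  nR (nR^r nR), which contracts to nR
akai_neko = [("nJ", 0), ("nJ", -1), ("nJ", 0)]
print(anti_hom(akai_neko))  # [('nR', 0), ('nR', 1), ('nR', 0)]
```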
Parsing longer phrases and full sentences adds new layers of complexity. For instance, in simple short phrases there often is exactly one way of performing type reductions in order to assess the syntactic type of a phrase. Associativity can introduce ambiguity while parsing phrases. The following example demonstrates this.
Example 4. Consider the phrase "old teachers and students." We assign type n to "teachers" and "students." We assign the type nn^ℓ to the adjective "old." The conjunction "and" in this phrase requires two inputs of noun type to produce a noun phrase and is thus assigned n^r n n^ℓ. The whole phrase therefore has type (nn^ℓ) n (n^r n n^ℓ) n, and we can use the associativity of the monoid operation to perform two distinct type reductions: we may first reduce (nn^ℓ)n → n and then coordinate, or first reduce n(n^r n n^ℓ)n → n and then apply the adjective. Both type reductions give the desired noun phrase. However, the two interpretations are slightly different. The first one attributes the adjective "old" to "teachers" only, and so the phrase is parsed as "(old teachers) and students," while the second type reduction attributes "old" to both "teachers" and "students," giving the phrase "old (teachers and students)."
One can construct examples where changing the order of reductions can make the difference between reducing down to a well-formed sentence and reducing down to a phrase that cannot be grammatically accepted. For this reason, one can add a modality or a β-structure to the pregroup to locally suppress associativity. This is to ensure that our phrases reduce to the correct type or that we distribute modifiers in a prescribed way.
Pregroups with modalities were first introduced in [Fad02] and their logic was more extensively studied in [KM07].
Definition 8. A β-pregroup is a pregroup (P, ·, 1, (−)^ℓ, (−)^r, ≤) together with a monotone mapping β : P → P such that β has a right adjoint β̄ : P → P, i.e. for all x, y ∈ P we have β(x) ≤ y if and only if x ≤ β̄(y).
In practice, we enrich our pregroup grammars with types with modalities to indicate certain reductions must be performed first.
Example 5. In our previous example, we can prescribe the parsing "(old teachers) and students" by assigning the types n[β(n)]^ℓ · [β(n)] · n^r n n^ℓ · n, and the parsing "old (teachers and students)" by assigning the types nn^ℓ · [β(n)] · [β(n)]^r n n^ℓ · n.
We now have ways to invert word order globally and block associativity locally. We conclude this section by introducing a new type of decoration which allows us to locally control word order.
The reader is also encouraged to consult [Sta08] for an introduction to tupled pregroups, [Lam10] for an analysis of French sentences using products of pregroups, and [Bob23] for pregroups with local precyclicity.
Next, we introduce a new pregroup decoration.
Definition 9. A monoid with k-braces (P, ·, 1) is a free monoid in which every word is a prescribed concatenation of k > 0 distinguished subwords. Extending this and subsequent definitions to pregroups with k-braces is immediate.
Example 6. Consider the free monoid on two letters F = Mon({a, b}). Viewing F as a monoid with 2-braces, and marking distinguished subwords with square brackets, [abba][b] and [abb][ab] are distinct words because they have distinct distinguished subwords.
Definition 10. A morphism of monoids with k-braces f : (P, ·) → (Q, ∗) is a morphism of the underlying monoids which preserves distinguished subwords. In symbols:

f([w_1] · … · [w_k]) = [f(w_1)] ∗ [f(w_2)] ∗ … ∗ [f(w_k)].
Example 7 (Some useful constructions). We define two morphisms of monoids with braces which are useful in understanding translations.
First, let P be a monoid with 2-braces and consider a word w = [w_1][w_2]. Since the underlying monoid of P is free, we can view w as an element of the free product P ∗ P ≅ P, where the distinguished subword w_i belongs to the i-th factor. Take the following sequence of monoid morphisms:

Ψ : P ≅ P ∗ P --f--> P × P --g--> P^op × P^op --h--> P^op ∗ P^op --i--> Q.

Here, f is the canonical surjection sending w_1 w_2 ↦ (w_1, w_2); g is a pair of anti-isomorphisms which act like the identity on atomic types, (w_1, w_2) ↦ (w_1^op, w_2^op); h is the canonical injection sending (w_1^op, w_2^op) ↦ w_1^op w_2^op; and i is some fixed homomorphism of monoids with 2-braces.
Secondly, let P be a monoid with 3-braces. We construct in a similar fashion the following morphism, which reverses only the middle distinguished subword:

Ξ : P ≅ P ∗ P ∗ P → P × P × P → P × P^op × P → P ∗ P^op ∗ P ≅ P → Q.
We now proceed with concrete examples of phrasal translations. Throughout the remainder of the section, we work with two functorial language models: J : J → FVect for Japanese and E : E → FVect for English. We also impose the following useful metarule.
Metarule 2. Any verb of type s o_1^ℓ w also has type o_1^r s w (the subject may be sought on either side of the verb), where w stands for all the remaining required complements.

4.2. "There is"/"There exists". Japanese has two verbs of existence, "iru" and "aru," which are used for animate and inanimate beings, respectively. They both roughly mean "to be," although a more common English translation is "there is/there exists."
Consider the following sentence.
(12) 森 mori forest に ni LOC 猫 neko cat が ga NOM いる iru be
A human translator has numerous ways of approaching this sentence. A standard, mot-à-mot SVO translation is "A cat is in the forest." An easy SVO upgrade would be "A cat lives in the forest." If this were a short story meant for children, one could even opt for "In the forest lives a cat" to lend a fairy-tale atmosphere to the text. In this article we choose to translate using a straightforward anti-homomorphism, and thus we aim for "There is a cat in the forest."
We work with the following reduced models for grammar:
J = PGrp({n, o_1, o_5, s}) and E = PGrp({n_E, o_1E, o_5E, s_E}).

The translation functor at the level of syntax is given by the anti-homomorphism T : J → E which sends n ↦ n_E, o_1 ↦ o_1E, o_5 ↦ o_5E, s ↦ s_E. At the level of semantics we have J(n) = J(o_1) = J(o_5) = N, J(s) = S and E(n_E) = E(o_1E) = E(o_5E) = N_E, E(s_E) = S_E.

In J we have the type reduction r_J = n(n^r o_5)n(n^r o_1)(o_1^r o_5^r s) ≤ s. After applying the translation functor T we get the English reduction r_E:

T(n(n^r o_5)n(n^r o_1)(o_1^r o_5^r s)) = T(s)T(o_5^r)T(o_1^r)T(o_1)T(n^r)T(n)T(o_5)T(n^r)T(n)
= (s_E o_5E^ℓ o_1E^ℓ) o_1E (n_E^ℓ n_E) o_5E (n_E^ℓ n_E)
→ s_E o_5E^ℓ (o_1E^ℓ o_1E) o_5E → s_E (o_5E^ℓ o_5E) → s_E.
At the level of semantics, we define the natural transformation α : J ⇒ E ∘ T to act in the expected way, i.e. the map N → N_E sends neko ↦ cat and mori ↦ forest, and the map S → S_E sends iru ↦ there is. We also impose ga ↦ a and ni ↦ in the. The commutativity of the following diagram is immediate:
N^⊗8 ⊗ S ----J(r_J)----> S
   |                       |
   | α                     | α_s
   v                       v
S_E ⊗ N_E^⊗8 --E(r_E)--> S_E

The top-left corner contains mori ⊗ ni ⊗ neko ⊗ ga ⊗ iru, whose image along the top row is the meaning of "mori ni neko ga iru"; the bottom-left corner contains there is ⊗ a ⊗ cat ⊗ in the ⊗ forest, whose image along the bottom row is the meaning of "there is a cat in the forest."
4.3. Simple SOV sentences. We describe a procedure for translating the following sentence.
(13) 医者 issya doctor が ga NOM 手紙 tegami letter を wo ACC 書く kaku write
'The doctor writes a letter.'
We work with the grammars
J = PGrp({n, o_1, o_2, s}) and E = PGrp({n_E, o_1E, o_2E, s_E}).
The words "issya" and "tegami" are assigned the noun type n, the particles "ga" and "wo" have the usual types n r o 1 and n r o 2 , respectively, and the transitive verb "kaku" has type o r 2 o r 1 s. The sentence is clearly well-formed: n(n r o 1 )n(n r o 2 )(o r 2 o r 1 s) → s. Here we employ the notion of a pregroup with 2-braces. In principle, for an SOV sentence we assign braces as follows: S OV . In our particular sentence, this becomes n(n r o 1 ) n(n r o 2 )(o r 2 o r 1 s) . We define our translation functor Ψ to be the morphism of monoids with braces defined in Example 7. Together with Metarule 2 this gives:
Ψ([n(n^r o_1)][n(n^r o_2)(o_2^r o_1^r s)]) = [(o_1E n_E^ℓ) n_E][(s_E o_1E^ℓ o_2E^ℓ)(o_2E n_E^ℓ) n_E]
= [(o_1E n_E^ℓ) n_E][(o_1E^r s_E o_2E^ℓ)(o_2E n_E^ℓ) n_E],

which reduces to s_E.
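Concretely, in the same (base, adjoint order) encoding as our earlier sketches, Ψ acts by reversing each distinguished subword and swapping adjoints; this is our own illustration, and the renaming of Japanese atomic types to their English counterparts (n ↦ n_E, etc.) is elided.

```python
def psi(braced_word):
    """Apply the adjoint-swapping anti-isomorphism inside each brace."""
    return [[(b, -a) for (b, a) in reversed(w)] for w in braced_word]

# [S][OV] = [n (n^r o1)] [n (n^r o2) (o2^r o1^r s)]
subject = [("n", 0), ("n", 1), ("o1", 0)]
object_verb = [("n", 0), ("n", 1), ("o2", 0), ("o2", 1), ("o1", 1), ("s", 0)]
for brace in psi([subject, object_verb]):
    print(brace)
# [('o1', 0), ('n', -1), ('n', 0)]                 i.e. (o1 n^l) n
# [('s', 0), ('o1', -1), ('o2', -1),
#  ('o2', 0), ('n', -1), ('n', 0)]                 i.e. (s o1^l o2^l)(o2 n^l) n
```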
Then α can be defined on atomic types as follows: issya ↦ doctor, tegami ↦ letter, kaku ↦ write, and the translation (Ψ, α) gives "(A/The) doctor write(s) (a/the) letter." Again, the articles and the conjugation of "write" into the third person singular can either be added by brute force in our model by adding meanings to the particles "ga" and "wo," or one can verify agreement and articles separately as a different step in the translation process.

4.4. Relative clauses. Interpreting relative pronouns in various languages in terms of pregroups proves to be quite challenging. In [SCC13] and [SCC14], the authors add the additional structure of a Frobenius algebra on the pregroup. Informally, a Frobenius algebra structure enriches the ε, η functorial yoga with additional maps, the most important of which are called the "copying map" and the "uncopying map." These new morphisms allow one to better keep track of information inside a phrase. For instance, in the English sentence "The woman, who drove from Tokyo today, was late to the party," the new morphisms can formalise the fact that the subject of the main clause "The woman was late to the party" and the subject of the relative clause "who drove from Tokyo today" are one and the same. The relative pronoun "who" acts as a bridge that "copies" the subject into the relative clause and then transfers it back into the main clause.
We translate the following relative clause.

(14) kyō tōkyō kara untensita onna
'the woman, who drove from Tokyo today'

We assign types in a less straightforward way. We first insert an empty word between the modifier "tōkyō kara untensita" and the head "onna." We assign the following types: "tōkyō" and "onna" are both of type n, the ablative particle "kara" has type n^r o_7, "kyō" has type t (temporal adverb), the verb "untensita" has type o_7^r t^r s o_1^ℓ, and the empty word acts like a phantom relative pronoun with type o_1 s^r n n^ℓ:

kyō (t) · tōkyō (n) · kara (n^r o_7) · untensita (o_7^r t^r s o_1^ℓ) · ∅ (o_1 s^r n n^ℓ) · onna (n) → n.

This construction generates a noun phrase, and it can be translated using a straightforward anti-homomorphism. The advantage of this underhanded construction is that now we can translate the empty word as the relative pronoun "who" or "that." This ties in perfectly with the Frobenius algebra approach of [SCC13]. In this example we modelled our relative clause as what the authors of the reference call a subject relative clause.

4.5. Coordinate sentences. The simplest way of coordinating sentences is by connecting them with the particle "ga" ("and"), to which we assign the type s^r s s^ℓ. We translate the following sentence, where the subjects are omitted.

(15) 家 ie house に ni LOC 着いた tuita arrived が ga and 手紙 tegami letter を wo ACC 書いた kaita wrote
'(I) arrived home and wrote a letter.'

In Japanese we have the reduction

n (n^r o_5)(o_5^r s)(s^r s s^ℓ) n (n^r o_2)(o_2^r s) → s.

We decorate the pregroup with braces and assign the type [n · n^r o_5 · o_5^r s][s^r s s^ℓ][n · n^r o_2 · o_2^r s].
Extending the morphism Ψ from Example 7 to monoids with 3-braces, we obtain

Ψ([n · n^r o_5 · o_5^r s][s^r s s^ℓ][n · n^r o_2 · o_2^r s]) = [s_E o_5E^ℓ · o_5E n_E^ℓ · n_E][s_E^r s_E s_E^ℓ][s_E o_2E^ℓ · o_2E n_E^ℓ · n_E].
Working under the assumption that an omitted subject refers to the first person singular, the translation, after applying a suitably defined α, is "(I) arrived home and (I) wrote (a) letter."

4.6. Putting it all together. We combine all our techniques to study a more complex sentence.
(16) 制服 seihuku uniform を wo ACC 着た kita wore 学生 gakusei student が ga NOM 机 tukue desk に ni LOC あった atta was 本 hon book を wo ACC 盗んだ nusunda stole
'The student, who wore a uniform, stole the book, which was on the desk.'

This is a standard SOV sentence where both the subject and the direct object are modified by relative clauses. In the Japanese pregroup grammar, inserting a phantom relative pronoun for each relative clause as in 4.4, the sentence receives the type

n (n^r o_2)(o_2^r s o_1^ℓ)(o_1 s^r n n^ℓ) n (n^r o_1) n (n^r o_5)(o_5^r s o_1^ℓ)(o_1 s^r n n^ℓ) n (n^r o_2)(o_2^r o_1^r s),

which reduces straightforwardly to s. One may observe that this reduction uses associativity to our advantage to prove that the sentence reduces to the correct syntactic type. To get a failsafe reduction and translation, we decorate our pregroup grammar with braces and a β-structure. The sentence is then assigned the type

[n · n^r o_2 · o_2^r s o_1^ℓ · o_1 s^r n β(n^ℓ) · β(n) · n^r o_1][n · n^r o_5 · o_5^r s o_1^ℓ · o_1 s^r n β(n^ℓ) · β(n) · n^r o_2 · o_2^r o_1^r s],

and after applying the morphism Ψ from Example 7 together with Metarule 2, the sentence translates to "(A/The) student, who wore (a/the) uniform, stole (a/the) book, which was (LOC) desk."

4.7. A Farsi to Japanese example. Farsi has certain similarities to Japanese which make translations (at the syntactic level, at least) somewhat simpler. For instance, Farsi also has SOV word order, nouns do not possess grammatical gender, and it is a pro-drop language. A key structural difference is that Farsi uses both prepositions and postpositions as case markers.
Following [Sad07], we use the following (reduced) pregroup to model Farsi grammar: F = PGrp({ν, σ, o, w}), where the atomic types represent nouns, sentences, direct objects, and prepositional phrases, respectively. On the Japanese side, we use J = PGrp({n, s, o_2, o_5}), with the usual meanings. Denote the two functorial language models by F : F → FVect and J : J → FVect.
We are interested in translating the following sentence from Farsi to Japanese.

(17) ketāb rā dǎr bāzār xarid
'(He/She) bought a book from the market.'

Here "ketāb rā" is the direct object, "dǎr bāzār" is the prepositional phrase, and "xarid" is the transitive verb in the past tense. This example sentence drops the subject and uses both a postposition, "rā," and a preposition, "dǎr," to mark cases. In Farsi, we have the following reduction:

ketāb (ν) · rā (ν^r o) · dǎr (w ν^ℓ) · bāzār (ν) · xarid (w^r o^r σ) → σ.
The functorial language models F, J send ν, o, w ↦ N_F (Farsi nouns) and σ ↦ S_F (Farsi sentences), and also n, o_2, o_5 ↦ N_J (Japanese nouns) and s ↦ S_J (Japanese sentences). At the syntactic level, we define a monoidal translation functor T : F → J which takes ν ↦ n, σ ↦ s, o ↦ o_2, and w ↦ o_5. The natural transformation α is then given by ketāb ↦ hon, rā ↦ wo, dǎr ↦ de, bāzār ↦ itiba, and xarid ↦ kaimasita.
The pregroups are decorated with 3-braces. The sentence is assigned the type [ν · ν^r o][w ν^ℓ · ν][w^r o^r σ].

Syntactically, the translation functor is taken to be Ξ from Example 7 (with the homomorphism at the end of its construction given by T). The word order is altered as follows:

Ξ([ν · ν^r o][w ν^ℓ · ν][w^r o^r σ]) = [n · n^r o_2][n · n^r o_5][o_5^r o_2^r s],

which leads to the following type reduction in Japanese:

n (n^r o_2) n (n^r o_5)(o_5^r o_2^r s) → o_2 o_5 (o_5^r o_2^r s) → s,

i.e. the sentence "hon wo itiba de kaimasita."
5. Future work
In this article, we introduced decorated pregroups and used them as a means of constructing a compositional notion of translation between natural languages with different word order. The aim was to demonstrate that one can maintain a categorical approach to modelling translation without compromising on functoriality altogether. Some of our constructions are ad-hoc and there is room for improving most of them.
First, there is the issue of translating between a language where nouns do not have grammatical gender and number to a language that does. Using product pregroups or tupled pregroups to handle grammatical agreement could be a way forward, although a straightforward model for achieving this appears elusive.
Secondly, one could study translations between languages which have more featural and structural differences. For example, how could we interpret (functorially) translations between a language which has nominative-accusative alignment and a language that has ergative-absolutive (or split-ergative) alignment?
Thirdly, this article focuses heavily on syntax. It would be interesting to model how meaning in translation can be negotiated between different speakers and how one can keep track of their evolving semantic spaces. On a more technical note, one could change the meaning space from FVect to a category that possesses more substantial structure, such as ConvexRel, the category whose objects are convex algebras and whose morphisms are convex relations. In [BCG+19] the authors showed that ConvexRel is a compact closed symmetric monoidal category and is thus suitable for modelling semantics in a compositional distributional functorial language model.

Finally, separate from the question of translation, some attention could be dedicated to expanding the work of Cardinal ([Car02], [Car06], [Car07]) and producing a more complete pregroup approach to analysing other aspects of grammar that are typical of Japanese. In particular, the structure of coordinate and subordinate sentences and internally headed relative clauses are of particular interest to the author.
References

[BCG+19] Joe Bolt, Bob Coecke, Fabrizio Genovese, Martha Lewis, Dan Marsden, and Robin Piedeleu. Interacting conceptual spaces I: Grammatical composition of concepts, 2019. DOI:10.1007/978-3-030-12800-5_9.

[BL01a] Daniele Bargelli and Joachim Lambek. An algebraic approach to French sentence structure, 2001. DOI:10.1007/3-540-48199-0_4.

[BL01b] Donna Bargelli and Joachim Lambek. An algebraic approach to Arabic sentence structure, 2001. Corpus ID: 56575156.

[BLMT18] Tai-Danae Bradley, Martha Lewis, Jade Master, and Brad Theilman. Translating and evolving: Towards a model of language change in DisCoCat, 2018. DOI:10.4204/EPTCS.283.4.

[Bob23] Valentin Boboc. π-augmented pregroups and applications to linguistics, 2023. arXiv:2303.05160.

[Car02] Kumi Cardinal. An algebraic study of Japanese grammar, 2002. PhD thesis.

[Car06] Kumi Cardinal. Type grammar meets Japanese particles, 2006.

[Car07] Kumi Cardinal. A pregroup analysis of Japanese causatives, 2007.

[CL05] Claudia Casadio and Jim Lambek. A computational algebraic approach to Latin grammar, 2005. DOI:10.1007/s11168-005-1286-0.

[CSC10] Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. Mathematical foundations for a compositional distributional model of meaning, 2010. DOI:10.48550/arXiv.1003.4394.

[Fad02] Mario Fadda. Towards flexible pregroup grammars, 2002.

[KL80] Gregory M. Kelly and Miguel L. Laplaza. Coherence for compact closed categories, 1980. DOI:10.1016/0022-4049(80)90101-2.

[KM07] Aleksandra Kiślak-Malinowska. On the logic of beta-pregroups, 2007. DOI:10.1007/s11225-007-9090-5.

[Lam97] Joachim Lambek. Type grammar revisited, 1997. DOI:10.1007/3-540-48975-4_1.

[Lam10] Joachim Lambek. Exploring feature agreement in French with parallel pregroup computations, 2010. DOI:10.1007/s10849-009-9098-5.

[LP04] Joachim Lambek and Anne Preller. An algebraic approach to the German sentence, 2004. HAL ID: irmm-00108541.

[ML08] Jeff Mitchell and Mirella Lapata. Vector-based models of semantic composition, 2008. https://aclanthology.org/P08-1028.pdf.

[Sad07] Mehrnoosh Sadrzadeh. Pregroup analysis of Persian sentences, 2007. https://www.cs.ox.ac.uk/files/2416/PersPreGroup.pdf.

[SCC13] Mehrnoosh Sadrzadeh, Stephen Clark, and Bob Coecke. The Frobenius anatomy of word meanings I: subject and object relative pronouns, 2013. DOI:10.1093/logcom/ext044.

[SCC14] Mehrnoosh Sadrzadeh, Stephen Clark, and Bob Coecke. The Frobenius anatomy of word meanings II: possessive relative pronouns, 2014. DOI:10.1093/logcom/exu027.

[Sta08] Edward P. Stabler. Tupled pregroup grammars, 2008.
Combine CRF and MMSEG to Boost Chinese Word Segmentation in Social Media

Yushi Yao
Shanghai Jiaotong University, 800 Dongchuan Street, Minhang District, Shanghai, China

Zheng Huang (huangzhengsjtu@126.com)
Shanghai Jiaotong University, 800 Dongchuan Street, Minhang District, Shanghai, China

24 Oct 2015
In this paper, we propose a joint algorithm for word segmentation on Chinese social media. Previous work has mainly focused on word segmentation for plain Chinese text. In order to develop a Chinese social media processing tool, we need to take the main features of social media into account: its grammatical structure is not rigorous, and the tendency to use colloquial and Internet terms makes existing Chinese-processing tools inefficient on social media (Collobert et al., 2011). In our approach, we combine the CRF and MMSEG algorithms and extend the features of the traditional CRF algorithm to train the model for word segmentation, and we use an Internet lexicon in order to improve the performance of our model on Chinese social media. Our experimental result on Sina Weibo shows that our approach outperforms the state-of-the-art model.
Introduction
Social media contains vast amounts of information, which can be used to explore micro-blog events, apply sentiment analysis, and so on. In order to obtain essential information, the first step is text processing; generally, we use natural language processing methods to analyse social media and extract information for other social media research (Xiong et al., 2013). An English sentence uses spaces as the gaps between words, but Chinese has no such word boundaries, so Chinese natural language processing includes one more step than English: word segmentation, the determination of the word boundaries. Word segmentation is the core step of Chinese language processing, and it directly affects the accuracy of all downstream Chinese natural language processing. The difficulty of Chinese word segmentation lies in the removal of ambiguity and in identifying OOV words. Ambiguity means that a sentence may admit many possible segmentation results, each with a different semantic meaning, while OOV (out-of-vocabulary) word identification refers to words which are not included in the word dictionary; the most typical examples are person names, place names, and so on.
Related Work
Currently, research on natural language processing for social media is still at an early stage. In the 2006 SIGHAN Chinese word segmentation competition, approaches based on sequence annotation were widely used. Microsoft used a conditional random field with a feature window of size one (Huang and Zhao, 2007); it turns out that although the features were simplified, the performance was still very good. Dalian Science and Technology University built two models, one based on a word-level CRF model and the other based on the MMSM model. During our experiments, we used the CRF algorithm as well, and it plays the key role when we build the model. Our work mainly depends on the improvement of the conditional random field.
Conditional random fields (CRF) are a statistical sequence modelling framework first introduced into language processing by Lafferty et al. (2001). Previous research showed that CRF achieves good word segmentation accuracy in the pipeline method. Tseng et al. (2005) introduced a conditional random field sequence model in conjunction with character identity features, morphological features, and character reduplication features. The study most closely related to ours is (Zhao et al., 2006), which also used an assistant algorithm and added an external lexicon, but they just add the output of the assistant algorithm to the feature templates. Different from that work, we take into account not only the relevance between each character and its MMSEG output tag, but also the context features of these MMSEG output tags.
Methods
There are already a lot of sophisticated algorithms for word segmentation, such as statistical methods (Hidden Markov Models, CRF, etc.), lexicon-based algorithms (MMSEG), and rule-based algorithms.
CRF performs better than lexicon-based models on OOV rates because CRF introduces additional features (Collins, 2002), which may be added manually (Xiong et al., 2009), including character-level features and context-level features; in addition, CRF maintains the Markov property over the tag sequence (Wallach, 2002), so we can also remove word ambiguity by combining more features.
Generally, there are several simple algorithms used in word segmentation. When applying a lexicon-based algorithm such as MMSEG, we simply match words according to our lexicon, while for a statistics-based algorithm such as CRF, the training set is turned into a Chinese character sequence and the segmentation task can be considered an annotation task.
More specifically, the model trained by CRF assigns each character a word boundary tag when labelling a sentence. Here we use the BMES tag set: B, M, and E denote the first, middle, and last character of a multi-character word, respectively, and S denotes a single-character word (McCallum et al., 2010). Social media contains a large number of OOV words, and with respect to out-of-vocabulary recognition, a lexicon-based algorithm has a strong advantage in accuracy when provided with a suitable lexicon, so it can be combined with the CRF algorithm to enhance the performance of CRF in word segmentation of social media (Qian et al., 2010). We therefore choose MMSEG to do a rough segmentation first and take its segmentation result as a new feature during the training process of the CRF segmentation model.
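As an illustration of this joint setup (our own sketch, not the paper's code), the snippet below builds CRF training rows with the MMSEG output tag as an extra feature column; `mmseg_segment` is a placeholder for whatever rough segmenter is available, not a specific library call.

```python
def bmes_tags(words):
    """Turn a word list into per-character BMES boundary tags."""
    for w in words:
        if len(w) == 1:
            yield "S"
        else:
            yield "B"
            yield from "M" * (len(w) - 2)
            yield "E"

def training_rows(chars, gold_words, mmseg_segment):
    """One row per character: character, MMSEG tag, gold BMES label."""
    mm = list(bmes_tags(mmseg_segment("".join(chars))))
    gold = list(bmes_tags(gold_words))
    return ["\t".join(row) for row in zip(chars, mm, gold)]
```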
Experiments
The files involved in the training process include the feature template file, the training set, and the lexicon files. In order to get the best results, we adopted an approach similar to coordinate ascent: we divide the experiment into three stages, and at each stage we select one training material to optimise while fixing the other two (Razvan and Bunescu, 2008). At each stage, the training material that achieves the best performance on the test set is chosen as the final training material (Wallach, 2002).
Data and Tool
The training set for word segmentation is from Bakeoff 2005 (Tseng et al., 2005). We have to mention that the lexicon used by MMSEG is the Sougou Lab Internet lexicon published in 2006, which contains a number of high-frequency words in the Internet environment, together with the mmseg4j project lexicon file.
There are few test sets for social media, so we crawled raw content from Sina Weibo and then annotated it manually; the content comes from all kinds of areas and users. The size of the test set is about 190K, containing about 25 thousand Chinese words. What is worth mentioning is that our segmentation follows the same standard as MSR's.
During our experiments, we use PCRF to train the segmentation model and test the model on our test sets. PCRF is an open-source implementation of (linear-chain) conditional random fields (CRFs) for segmenting/labelling sequential data. The training file format and feature templates of PCRF are compatible with popular CRF implementations such as CRF++.
Evaluation Metric
We evaluate system performance on the individual tasks. For word segmentation, three metrics are used for evaluation: precision (P), recall (R), and F-score (F), defined by 2PR/(P+R), where precision is the percentage of correct words in the system output and recall is the percentage of words in the gold standard annotations that are correctly predicted.
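A small sketch of these metrics (our own), scoring exact word-span matches between the gold and predicted segmentations:

```python
def spans(words):
    """Convert a segmentation into a set of (start, end) character spans."""
    out, i = set(), 0
    for w in words:
        out.add((i, i + len(w)))
        i += len(w)
    return out

def prf(gold_words, pred_words):
    gold, pred = spans(gold_words), spans(pred_words)
    correct = len(gold & pred)
    p, r = correct / len(pred), correct / len(gold)
    return p, r, 2 * p * r / (p + r)

print(prf(["我", "喜欢", "猫"], ["我", "喜", "欢", "猫"]))
# (0.5, 0.666..., 0.571...) -- precision, recall, F-score
```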
Feature Template
Template files play the core role in model training, since CRF is designed to calculate the conditional probability of the sequence annotation deduced from the observed sequence, and this probability is described through feature vectors composed of characteristic functions derived from the feature templates defined by the user (Pietra et al., 1997).

Level          Type     Feature              Function
Character      Unigram  C-1, C0, C1          the previous, current, and next character
Character      Bigram   C-1C0, C0C1          the previous (next) and the current character
Character      Jump     C-1, C1              the previous and next character
Tag            Unigram  T-1, T0, T1          the previous, current, and next tag
Tag            Bigram   T-1T0, T0T1          the previous (next) and the current tag
Tag            Jump     T-1, T1              the previous and next tag
Character-Tag  Bigram   C-1T0, C0T0, C1T0    the previous, current, and next character with the current tag

Table 1: Feature templates.

Each line in the template file of PCRF denotes one template. In each template, the special macro %x[row,col] is used to specify a token in the input data: row specifies the relative position from the current token, and col specifies the absolute position of the column.
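The templates in Table 1 correspond to the following feature functions; this is an illustrative re-expression in Python (the feature names are our own), assuming each position carries the character and its MMSEG tag.

```python
def features(chars, tags, i):
    """Extract the Table 1 features for position i, padding at the edges."""
    c = lambda k: chars[i + k] if 0 <= i + k < len(chars) else "<PAD>"
    t = lambda k: tags[i + k] if 0 <= i + k < len(tags) else "<PAD>"
    return {
        "C-1": c(-1), "C0": c(0), "C1": c(1),        # character unigrams
        "C-1C0": c(-1) + c(0), "C0C1": c(0) + c(1),  # character bigrams
        "C-1C1": c(-1) + c(1),                       # character jump
        "T-1": t(-1), "T0": t(0), "T1": t(1),        # tag unigrams
        "T-1T0": t(-1) + t(0), "T0T1": t(0) + t(1),  # tag bigrams
        "T-1T1": t(-1) + t(1),                       # tag jump
        "C-1T0": c(-1) + t(0), "C0T0": c(0) + t(0),  # character-tag bigrams
        "C1T0": c(1) + t(0),
    }
```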
The first stage of the experiment is to determine the feature vectors of our model (Finkel et al., 2008). The original features are too simple, so we change the feature template step by step and test the accuracy on our test set. Table 1 presents the features that achieved the best performance during our experiments. Our template is composed of three levels of feature templates: character-level, tag-level, and character-tag-level. The three levels introduce the correlations among characters, the correlations among tags, and the correlation of a character with its neighbouring tags, respectively.
• Experiment 1: No new correlation, just the correlation between each character and its neighbours.
• Experiment 2: New correlation between the MMSEG tag of each character, and the correlation between each MMSEG tag and its neighbour.
• Experiment 3: New trigram correlation among current character, its neighbour and its corresponding MMSEG tag.
Table 2 shows the results of the word segmentation experiments with different feature template files. We found that more feature templates tend to mean higher precision and recall. After these experiments we considered adding more correlations, including expanding the width of the correlation window to 2 or 3 (Experiment 5), but the test results regressed, which indicates that our word segmentation model only needs to consider the unigram Markov property; considering too much correlation has the opposite effect on performance.
In the segmentation experiments, the model gained nearly 0.9 percentage points on both precision and recall. Finally, we decided to use the segmentation model with the highest F-score.
Training Set
The next stage is to change the training set and find out its influence on our model. We choose the training sets from MSR and PKU because the text of these two training sets is mainly from newspapers, which is similar to the content of some social media: many social media posts are informative and act like news, the only difference being that they come from the Internet while general news mainly comes from newspapers. Table 3 shows the results of the word segmentation experiments with different training sets. From these results we can see that the model trained on the combination of the two training sets did not outperform the models trained on a single training set; on the contrary, the F-score declined. This is confusing at first, but the main reason is that the different segmentation standards of the two training sets mislead the training process, as will be discussed later.
Lexicon
The final stage of our experiment is to choose an appropriate lexicon. We have known that the more words the lexicon contains, the higher ability of distinguishing words that MMSEG has, since our goal is to get high performance on the social media, we should add the Internet lexicon to our original lexicon, and we conducted one more experiment during which we add a lexicon of a particular field just for comparison. Table 4 shows the results of word segmentation experiments using different lexicons. The combination of the Internet lexicon has improved the overall performance of word segmentation about 0.1%, but if we just add the the lexicon of a particular field, such as the lexicon of medical science, it will have just a little positive effect on the test set, which is mainly due to the amount of overall coverage of terminology in specific areas is not high enough in our test set.
Result Comparison
For reference, we compared our best experimental result with other segmentation tools. We chose the model trained on the Bakeoff-2005 MSR training set with the Internet lexicon from Sougou Lab for this comparison. We add a column of closed-test results for each tool to the table. LTP-cloud ranked second in the Chinese social media segmentation evaluation task at CLP 2012, so the results of its closed test carry real authority for social media NLP tasks. These data only represent performance on a small test set; our model may show a larger gap to these sophisticated tools on bigger test sets, but the results still demonstrate the good adaptability of our model to Chinese social media.
Error Analysis
The most surprising result of our experiments is the combination of the MSR and PKU training sets. We compared the two training sets and worked out the effect of their different segmentation standards on the model. There are many fundamental differences between the standards. For example, 613 is one word in the MSR training set, but is broken into 6 and 13 as two words in PKU's. In addition, people's names are quite a big issue: MSR tends to treat the whole name as one word, while PKU takes the first name as one word and the last name as another. All in all, MSR generates more words than PKU.
Conclusion
In this paper, our research focuses mainly on word segmentation for Chinese social media. One of the most applicable algorithms is the conditional random field, which is widely used in word segmentation, part-of-speech tagging, named entity recognition and other tasks. CRF has a large advantage on labeling problems, and the MMSEG algorithm has a comparative advantage over other algorithms on in-vocabulary words, so we use MMSEG to segment first and use the result of the MMSEG segmentation as a new feature of the CRF. In the experiments, we also continued to adjust the template file and exploit more correlations between Chinese characters; after several rounds of adjustment we obtained about a 1.2% improvement in precision and recall over the single CRF model.
Table 2: Results of word segmentation experiments with different feature template files.

Table 3: Results of word segmentation experiments with different training sets.

Table 4: Results of word segmentation experiments using different lexicons.

Table 5: Results of word segmentation experiments with different training sets.
References

[Peng et al. 2004] Peng F., Feng F. and McCallum A. 2004. Chinese Segmentation and New Word Detection using Conditional Random Fields. In Proceedings of the 20th International Conference on Computational Linguistics (COLING): 562.

[Zhao et al. 2006] Zhao H., Huang C.N. and Li M. 2006. An Improved Chinese Word Segmentation System with Conditional Random Field. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing.

[Tseng et al. 2005] Tseng H., Chang P., Andrew G. et al. 2005. A Conditional Random Field Word Segmenter for Sighan Bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing: 171.

[Qian et al. 2010] Qian X., Zhang Q., Zhou Y. et al. 2010. Joint training and decoding using virtual nodes for cascaded segmentation and tagging tasks. In Proceedings of EMNLP 2010: 187-195.

[Lafferty et al. 2001] Lafferty J., McCallum A. and Pereira F.C.N. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of ICML 2001: 282-289.

[Kim 2013] Kim M. 2013. Semi-supervised learning of hidden conditional random fields for time-series classification. Neurocomputing: 339-349.

[Collobert et al. 2011] Collobert R., Weston J., Bottou L. et al. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research: 2493-2537.

[Kim 2014] Kim M. 2014. Conditional ordinal random fields for structured ordinal-valued label prediction. Data Mining and Knowledge Discovery: 378-401.

[Xiong et al. 2009] Xiong Y., Zhu J., Huang H. et al. 2009. Minimum tag error for discriminative training of conditional random fields. Information Sciences: 169-179.

[Collins 2002] Michael Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of EMNLP 2002: 1-8.

[Finkel et al. 2008] Jenny Rose Finkel, Alex Kleeman and Christopher D. Manning. 2008. Efficient, feature-based, conditional random field parsing. In Proceedings of ACL-08: HLT: 959-967.

[Xiong et al. 2013] Xiong J., Hao Y. and Huang Z. 2013. Civil Transportation Event Extraction from Chinese Microblog. In Cloud Computing and Big Data (CloudCom-Asia), 2013 International Conference on, IEEE: 577-582.

[Bunescu 2008] Razvan C. Bunescu. 2008. Learning with probabilistic features for improved pipeline models. In Proceedings of EMNLP 2008: 670-679, Waikiki, Honolulu, Hawaii.

[Huang and Zhao 2007] Changning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing: 8-19.

[Eisner 1996] Jason M. Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th Conference on Computational Linguistics: 340-345.

[Pietra et al. 1997] S. Della Pietra, V. Della Pietra and J. Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(4): 380-393.

[Wallach 2002] H. Wallach. 2002. Efficient training of conditional random fields. In Proc. 6th Annual CLUK Research Colloquium.

[Harper and Huang 2009] Mary Harper and Zhongqiang Huang. 2009. Chinese statistical parsing. In Gale Book.

[Petrov and Klein 2007] Slav Petrov and Dan Klein. 2007. Improved inference for unlexicalized parsing. In Proceedings of NAACL 2007: 404-411.

[Wallach 2004] Wallach H.M. 2004. Conditional Random Fields: An Introduction. Technical Reports (CIS): 22.

[Sha and Pereira 2003] F. Sha and F. Pereira. 2003. Shallow Parsing with Conditional Random Fields. In Proceedings of Human Language Technology, NAACL 2003: 134-141.
LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron
MetaAI
Thibaut Lavril
MetaAI
Gautier Izacard
MetaAI
Xavier Martinet
MetaAI
Marie-Anne Lachaux
MetaAI
Timothee Lacroix
MetaAI
Baptiste Rozière
MetaAI
Naman Goyal
MetaAI
Eric Hambro
MetaAI
Faisal Azhar
MetaAI
Aurelien Rodriguez
MetaAI
Armand Joulin
MetaAI
Edouard Grave
MetaAI
Guillaume Lample
MetaAI
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community 1 .
Introduction
Large Languages Models (LLMs) trained on massive corpora of texts have shown their ability to perform new tasks from textual instructions or from a few examples (Brown et al., 2020). These few-shot properties first appeared when scaling models to a sufficient size (Kaplan et al., 2020), resulting in a line of work that focuses on further scaling these models (Chowdhery et al., 2022;Rae et al., 2021). These efforts are based on the assumption that more parameters will lead to better performance. However, recent work from Hoffmann et al. (2022) shows that, for a given compute budget, the best performances are not achieved by the largest models, but by smaller models trained on more data.
The objective of the scaling laws from Hoffmann et al. (2022) is to determine how to best scale the dataset and model sizes for a particular training compute budget. However, this objective disregards the inference budget, which becomes critical when serving a language model at scale. In this context, given a target level of performance, the preferred model is not the fastest to train but the fastest at inference, and although it may be cheaper to train a large model to reach a certain level of * Equal contribution. Correspondence: {htouvron, thibautlav,gizacard,egrave,glample}@meta.com 1 https://github.com/facebookresearch/llama performance, a smaller one trained longer will ultimately be cheaper at inference. For instance, although Hoffmann et al. (2022) recommends training a 10B model on 200B tokens, we find that the performance of a 7B model continues to improve even after 1T tokens.
The focus of this work is to train a series of language models that achieve the best possible performance at various inference budgets, by training on more tokens than what is typically used. The resulting models, called LLaMA, range from 7B to 65B parameters with competitive performance compared to the best existing LLMs. For instance, LLaMA-13B outperforms GPT-3 on most benchmarks, despite being 10× smaller. We believe that this model will help democratize the access and study of LLMs, since it can be run on a single GPU. At the higher-end of the scale, our 65B-parameter model is also competitive with the best large language models such as Chinchilla or PaLM-540B.
Unlike Chinchilla, PaLM, or GPT-3, we only use publicly available data, making our work compatible with open-sourcing, while most existing models rely on data which is either not publicly available or undocumented (e.g. "Books -2TB" or "Social media conversations"). There exist some exceptions, notably OPT (Zhang et al., 2022), GPT-NeoX (Black et al., 2022), BLOOM (Scao et al., 2022) and GLM (Zeng et al., 2022), but none that are competitive with PaLM-62B or Chinchilla.
In the rest of this paper, we present an overview of the modifications we made to the transformer architecture (Vaswani et al., 2017), as well as our training method. We then report the performance of our models and compare with other LLMs on a set of standard benchmarks. Finally, we expose some of the biases and toxicity encoded in our models, using some of the most recent benchmarks from the responsible AI community.
Approach
Our training approach is similar to the methods described in previous work (Brown et al., 2020;Chowdhery et al., 2022), and is inspired by the Chinchilla scaling laws (Hoffmann et al., 2022). We train large transformers on a large quantity of textual data using a standard optimizer.
Pre-training Data
Our training dataset is a mixture of several sources, reported in Table 1, that cover a diverse set of domains. For the most part, we reuse data sources that have been leveraged to train other LLMs, with the restriction of only using data that is publicly available, and compatible with open sourcing. This leads to the following mixture of data and the percentage they represent in the training set:
English CommonCrawl [67%]. We preprocess five CommonCrawl dumps, ranging from 2017 to 2020, with the CCNet pipeline (Wenzek et al., 2020). This process deduplicates the data at the line level, performs language identification with a fastText linear classifier to remove non-English pages and filters low quality content with an n-gram language model. In addition, we trained a linear model to classify pages used as references in Wikipedia vs. randomly sampled pages, and discarded pages not classified as references.
C4 [15%]. During exploratory experiments, we observed that using diverse pre-processed CommonCrawl datasets improves performance. We thus included the publicly available C4 dataset (Raffel et al., 2020) in our data. The preprocessing of C4 also contains deduplication and language identification steps: the main difference with CCNet is the quality filtering, which mostly relies on heuristics such as presence of punctuation marks or the number of words and sentences in a webpage.

Github [4.5%]. We use the public GitHub dataset available on Google BigQuery. We only kept projects that are distributed under the Apache, BSD and MIT licenses. Additionally, we filtered low quality files with heuristics based on the line length or proportion of alphanumeric characters, and removed boilerplate, such as headers, with regular expressions. Finally, we deduplicate the resulting dataset at the file level, with exact matches.

Wikipedia [4.5%]. We add Wikipedia dumps from the June-August 2022 period, covering 20 languages, which use either the Latin or Cyrillic scripts: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. We process the data to remove hyperlinks, comments and other formatting boilerplate.
Gutenberg and Books3 [4.5%]. We include two book corpora in our training dataset: the Gutenberg Project, which contains books that are in the public domain, and the Books3 section of ThePile (Gao et al., 2020), a publicly available dataset for training large language models. We perform deduplication at the book level, removing books with more than 90% content overlap.
ArXiv [2.5%]. We process arXiv Latex files to add scientific data to our dataset. Following Lewkowycz et al. (2022), we removed everything before the first section, as well as the bibliography. We also removed the comments from the .tex files, and inline-expanded definitions and macros written by users to increase consistency across papers.
Stack Exchange [2%]. We include a dump of Stack Exchange, a website of high quality questions and answers that covers a diverse set of domains, ranging from computer science to chemistry. We kept the data from the 28 largest websites, removed the HTML tags from text and sorted the answers by score (from highest to lowest).
Tokenizer. We tokenize the data with the bytepair encoding (BPE) algorithm (Sennrich et al., 2015), using the implementation from Sentence-Piece (Kudo and Richardson, 2018). Notably, we split all numbers into individual digits, and fallback to bytes to decompose unknown UTF-8 characters. Overall, our entire training dataset contains roughly 1.4T tokens after tokenization. For most of our training data, each token is used only once during training, with the exception of the Wikipedia and Books domains, over which we perform approximately two epochs.
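As a small illustration (ours, not the released tokenizer) of the digit-splitting rule, every run of digits can be broken into single-digit symbols before BPE merges are applied; unknown UTF-8 characters would analogously fall back to their raw bytes.

import re

def split_digits(text):
    # Make each digit its own pre-token so BPE never merges numbers.
    spaced = re.sub(r"\d", lambda m: " " + m.group(0) + " ", text)
    return " ".join(spaced.split())

print(split_digits("trained on 1400 GB"))  # trained on 1 4 0 0 GB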
Architecture
Following recent work on large language models, our network is based on the transformer architecture (Vaswani et al., 2017). We leverage various improvements that were subsequently proposed and used in different models such as PaLM. Here are the main differences with the original architecture, and where we found the inspiration for each change (in brackets):
Pre-normalization [GPT3]. To improve the training stability, we normalize the input of each transformer sub-layer, instead of normalizing the output. We use the RMSNorm normalizing function, introduced by Zhang and Sennrich (2019).
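For reference, RMSNorm rescales by the root mean square of the activations, with a learned gain but no mean subtraction or bias; a minimal NumPy sketch (the shapes and names here are ours):

import numpy as np

def rmsnorm(x, gain, eps=1e-6):
    # y = x / RMS(x) * gain, computed over the feature dimension.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * gain

x = np.array([1.0, -2.0, 3.0, 0.5])
print(rmsnorm(x, gain=np.ones(4)))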
SwiGLU activation function [PaLM]. We replace the ReLU non-linearity by the SwiGLU activation function, introduced by Shazeer (2020), to improve the performance. We use a dimension of (2/3)·4d instead of 4d as in PaLM.

Rotary Embeddings [GPTNeo]. We remove the absolute positional embeddings, and instead add rotary positional embeddings (RoPE), introduced by Su et al. (2021), at each layer of the network.

The details of the hyper-parameters for our different models are given in Table 2.
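Returning to the SwiGLU layer above, a minimal NumPy sketch of the gated feed-forward block with the reduced (2/3)·4d hidden size; the weight names are our own, and a real implementation would use learned parameters in a deep learning framework.

import numpy as np

def silu(x):
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, w1, w2, w3):
    # FFN(x) = (SiLU(x @ w1) * (x @ w3)) @ w2
    return (silu(x @ w1) * (x @ w3)) @ w2

d = 8
hidden = int(2 / 3 * 4 * d)  # the (2/3)*4d hidden dimension
rng = np.random.default_rng(0)
x = rng.normal(size=(1, d))
w1 = rng.normal(size=(d, hidden))
w3 = rng.normal(size=(d, hidden))
w2 = rng.normal(size=(hidden, d))
print(swiglu_ffn(x, w1, w2, w3).shape)  # (1, 8)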
Optimizer
Our models are trained using the AdamW optimizer (Loshchilov and Hutter, 2017), with the following hyper-parameters: β1 = 0.9, β2 = 0.95. We use a cosine learning rate schedule, such that the final learning rate is equal to 10% of the maximal learning rate. We use a weight decay of 0.1 and gradient clipping of 1.0. We use 2,000 warmup steps, and vary the learning rate and batch size with the size of the model (see Table 2 for details).

Figure 1: Training loss over train tokens for the 7B, 13B, 33B, and 65B models. LLaMA-33B and LLaMA-65B were trained on 1.4T tokens. The smaller models were trained on 1.0T tokens. All models are trained with a batch size of 4M tokens.
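A minimal sketch of the schedule just described: linear warmup for 2,000 steps, then a cosine decay whose floor is 10% of the peak learning rate (the function name and the example step counts are ours):

import math

def lr_at(step, max_lr, total_steps, warmup=2000, min_ratio=0.1):
    # Linear warmup, then cosine decay to min_ratio * max_lr.
    if step < warmup:
        return max_lr * step / warmup
    progress = (step - warmup) / max(1, total_steps - warmup)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return max_lr * (min_ratio + (1.0 - min_ratio) * cosine)

print(lr_at(2000, 3.0e-4, 100000))    # peak: 3.0e-4
print(lr_at(100000, 3.0e-4, 100000))  # floor: 3.0e-5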
Efficient implementation
We make several optimizations to improve the training speed of our models. First, we use an efficient implementation of the causal multi-head attention to reduce memory usage and runtime. This implementation, available in the xformers library, 2 is inspired by Rabe and Staats (2021) and uses the backward from Dao et al. (2022). This is achieved by not storing the attention weights and not computing the key/query scores that are masked due to the causal nature of the language modeling task.
To further improve training efficiency, we reduced the amount of activations that are recomputed during the backward pass with checkpointing. More precisely, we save the activations that are expensive to compute, such as the outputs of linear layers. This is achieved by manually implementing the backward function for the transformer layers, instead of relying on the PyTorch autograd. To fully benefit from this optimization, we need to reduce the memory usage of the model by using model and sequence parallelism, as described by Korthikanti et al. (2022). Moreover, we also overlap the computation of activations and the communication between GPUs over the network (due to all_reduce operations) as much as possible.
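The paper implements the backward pass manually; as a rough illustration of the general idea only, the stock PyTorch utility below recomputes a block's activations during backward instead of storing them, trading compute for memory (this is not the authors' implementation).

import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Sequential(
    torch.nn.Linear(512, 2048),
    torch.nn.ReLU(),
    torch.nn.Linear(2048, 512),
)
x = torch.randn(4, 512, requires_grad=True)

# Intermediate activations inside `layer` are freed after the forward
# pass and recomputed when gradients are needed.
y = checkpoint(layer, x, use_reentrant=False)
y.sum().backward()
print(x.grad.shape)  # torch.Size([4, 512])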
When training a 65B-parameter model, our code processes around 380 tokens/sec/GPU on 2048 A100 GPU with 80GB of RAM. This means that training over our dataset containing 1.4T tokens takes approximately 21 days.
Main results
Following previous work (Brown et al., 2020), we consider zero-shot and few-shot tasks, and report results on a total of 20 benchmarks:
• Zero-shot. We provide a textual description of the task and a test example. The model either provides an answer using open-ended generation, or ranks the proposed answers.
• Few-shot. We provide a few examples of the task (between 1 and 64) and a test example. The model takes this text as input and generates the answer or ranks different options.
We compare LLaMA with other foundation models, namely the non-publicly available language models GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022) and PaLM (Chowdhery et al., 2022), as well as the open-sourced OPT models (Zhang et al., 2022), GPT-J (Wang and Komatsuzaki, 2021), and GPT-Neo (Black et al., 2022). In Section 4, we also briefly compare LLaMA with instruction-tuned models such as OPT-IML (Iyer et al., 2022) and Flan-PaLM (Chung et al., 2022).
We evaluate LLaMA on free-form generation tasks and multiple choice tasks. In the multiple choice tasks, the objective is to select the most appropriate completion among a set of given options, based on a provided context. We select the completion with the highest likelihood given the provided context. We follow Gao et al. (2021) and use the likelihood normalized by the number of characters in the completion, except for certain datasets (OpenBookQA, BoolQ), for which we follow Brown et al. (2020), and select a completion based on the likelihood normalized by the likelihood of the completion given "Answer:" as context: P(completion | context) / P(completion | "Answer:").
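A minimal sketch of the two scoring rules, assuming a hypothetical scorer logp(prefix, completion) that returns the model log-likelihood of the completion given the prefix (such a function is not defined in the paper; it stands in for a forward pass):

def choose(context, completions, logp):
    # Default rule: per-character normalized log-likelihood.
    scores = [logp(context, c) / len(c) for c in completions]
    return max(range(len(completions)), key=lambda i: scores[i])

def choose_answer_normalized(context, completions, logp):
    # OpenBookQA/BoolQ rule: dividing probabilities is subtracting
    # log-probabilities, i.e. log P(c|ctx) - log P(c|"Answer:").
    scores = [logp(context, c) - logp("Answer:", c) for c in completions]
    return max(range(len(completions)), key=lambda i: scores[i])

# Toy scorer: with a constant log-likelihood, per-character
# normalization favors the longer completion.
print(choose("context", ["a", "bb"], lambda p, c: -1.0))  # 1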
Common Sense Reasoning
We consider eight standard common sense reasoning benchmarks: BoolQ (Clark et al., 2019), PIQA (Bisk et al., 2020), SIQA (Sap et al., 2019), HellaSwag (Zellers et al., 2019), WinoGrande (Sakaguchi et al., 2021), ARC easy and challenge (Clark et al., 2018) and OpenBookQA (Mihaylov et al., 2018). These datasets include Cloze and Winograd style tasks, as well as multiple choice question answering. We evaluate in the zero-shot setting as done in the language modeling community.
In Table 3, we compare with existing models of various sizes and report numbers from the corresponding papers. First, LLaMA-65B outperforms Chinchilla-70B on all reported benchmarks but BoolQ. Similarly, this model surpasses PaLM-540B everywhere but on BoolQ and WinoGrande. LLaMA-13B model also outperforms GPT-3 on most benchmarks despite being 10× smaller.
Closed-book Question Answering
We compare LLaMA to existing large language models on two closed-book question answering benchmarks: Natural Questions (Kwiatkowski et al., 2019) and TriviaQA (Joshi et al., 2017). For both benchmarks, we report exact match performance in a closed book setting, i.e., where the models do not have access to documents that contain evidence to answer the question. In Table 4, we report performance on NaturalQuestions, and in Table 5, we report on TriviaQA. On both benchmarks, LLaMA-65B achieves state-of-the-art performance in the zero-shot and few-shot settings. More importantly, the LLaMA-13B is also competitive on these benchmarks with GPT-3 and Chinchilla, despite being 5-10× smaller. This model runs on a single V100 GPU during inference.
Reading Comprehension
We evaluate our models on the RACE reading comprehension benchmark (Lai et al., 2017). This dataset was collected from English reading comprehension exams designed for middle and high school Chinese students. We follow the evaluation setup from Brown et al. (2020) and report results in Table 6. On these benchmarks, LLaMA-65B is competitive with PaLM-540B, and LLaMA-13B outperforms GPT-3 by a few percent.
Mathematical reasoning
We evaluate our models on two mathematical reasoning benchmarks: MATH (Hendrycks et al., 2021) and GSM8k (Cobbe et al., 2021). MATH is a dataset of 12K middle school and high school mathematics problems written in LaTeX. GSM8k is a set of middle school mathematical problems. In Table 7, we compare with PaLM and Minerva (Lewkowycz et al., 2022). Minerva is a series of PaLM models finetuned on 38.5B tokens extracted from ArXiv and Math Web Pages, while neither PaLM nor LLaMA are finetuned on mathematical data. The numbers for PaLM and Minerva are taken from Lewkowycz et al. (2022), and we compare with and without maj1@k. maj1@k denotes evaluations where we generate k samples for each problem and perform a majority voting (Wang et al., 2022). On GSM8k, we observe that LLaMA-65B outperforms Minerva-62B, although it has not been fine-tuned on mathematical data.
Code generation
We evaluate the ability of our models to write code from a natural language description on two benchmarks: HumanEval (Chen et al., 2021) and MBPP (Austin et al., 2021). For both tasks, the model receives a description of the program in a few sentences, as well as a few input-output examples. In HumanEval, it also receives a function signature, and the prompt is formatted as natural code with the textual description and tests in a docstring. The model needs to generate a Python program that fits the description and satisfies the test cases. In Table 8, we compare the pass@1 scores of our models with existing language models that have not been finetuned on code, namely PaLM and LaMDA (Thoppilan et al., 2022). PaLM and LLaMA were trained on datasets that contain a similar number of code tokens. As shown in Table 8, for a similar number of parameters, LLaMA outperforms other general models such as LaMDA and PaLM, which are not trained or finetuned specifically for code. LLaMA with 13B parameters and more outperforms LaMDA 137B on both HumanEval and MBPP. LLaMA 65B also outperforms PaLM 62B, even when it is trained longer. The pass@1 results reported in this table were obtained by sampling with temperature 0.1. The pass@100 and pass@80 metrics were obtained with temperature 0.8. We use the same method as Chen et al. (2021) to obtain unbiased estimates of the pass@k.
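For reference, the unbiased pass@k estimator of Chen et al. (2021) for one problem with n samples of which c pass is pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems; a small NumPy version (the numerically stable product form is standard, the function name is ours):

import numpy as np

def pass_at_k(n, c, k):
    # 1 - C(n-c, k) / C(n, k), computed as a stable product.
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

print(pass_at_k(n=100, c=10, k=1))   # ~0.10
print(pass_at_k(n=100, c=10, k=10))  # ~0.67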
It is possible to improve the performance on code by finetuning on code-specific tokens. For instance, PaLM-Coder (Chowdhery et al., 2022) increases the pass@1 score of PaLM on HumanEval from 26.2% for PaLM to 36%. Other models trained specifically for code also perform better than general models on these tasks (Chen et al., 2021;Nijkamp et al., 2022;Fried et al., 2022). Finetuning on code tokens is beyond the scope of this paper.
Massive Multitask Language Understanding
The massive multitask language understanding benchmark, or MMLU, introduced by Hendrycks et al. (2020), consists of multiple choice questions covering various domains of knowledge, including humanities, STEM and social sciences. We evaluate our models in the 5-shot setting, using the examples provided by the benchmark, and report results in Table 9. On this benchmark, we observe that the LLaMA-65B is behind both Chinchilla-70B and PaLM-540B by a few percent on average, and across most domains. A potential explanation is that we have used a limited amount of books and academic papers in our pre-training data, i.e., ArXiv, Gutenberg and Books3, which sum up to only 177GB, while these models were trained on up to 2TB of books. This large quantity of books used by Gopher, Chinchilla and PaLM may also explain why Gopher outperforms GPT-3 on this benchmark, while it is comparable on other benchmarks.
Evolution of performance during training
During training, we tracked the performance of our models on a few question answering and common sense benchmarks, and report them in Figure 2.
On most benchmarks, the performance improves steadily, and correlates with the training perplexity of the model (see Figure 1). There are two exceptions: SIQA and WinoGrande. On SIQA, we observe a lot of variance in performance that may indicate that this benchmark is not reliable. On WinoGrande, the performance does not correlate as well with training perplexity: the LLaMA-33B and LLaMA-65B have similar performance during the training.
Instruction Finetuning
In this section, we show that briefly finetuning on instructions data rapidly leads to improvements on MMLU. Although the non-finetuned version of LLaMA-65B is already able to follow basic instructions, we observe that a very small amount of finetuning improves the performance on MMLU, and further improves the ability of the model to follow instructions. Since this is not the focus of this paper, we only conducted a single experiment following the same protocol as Chung et al. (2022) to train an instruct model, LLaMA-I. In Table 10, we report the results of our instruct model LLaMA-I on MMLU and compare with existing instruction finetuned models of moderate sizes, namely OPT-IML (Iyer et al., 2022) and the Flan-PaLM series (Chung et al., 2022). All the reported numbers are from the corresponding papers. Despite the simplicity of the instruction finetuning approach used here, we reach 68.9% on MMLU. LLaMA-I (65B) outperforms existing instruction finetuned models of moderate sizes on MMLU, but is still far from the state-of-the-art, which is 77.4 for GPT code-davinci-002 on MMLU (numbers taken from Iyer et al. (2022)). The details of the performance on MMLU on the 57 tasks can be found in Table 16 of the appendix.
Bias, Toxicity and Misinformation
Large language models have been shown to reproduce and amplify biases existing in the training data (Sheng et al., 2019; Kurita et al., 2019), and to generate toxic or offensive content (Gehman et al., 2020). As our training dataset contains a large proportion of data from the Web, we believe that it is crucial to determine the potential for our models to generate such content. To understand the potential harm of LLaMA-65B, we evaluate on different benchmarks that measure toxic content production and stereotype detection. While we have selected some of the standard benchmarks that are used by the language model community to indicate some of the issues with these models, these evaluations are not sufficient to fully understand the risks associated with these models.
RealToxicityPrompts
Language models can generate toxic language, e.g., insults, hate speech or threats. There is a very large range of toxic content that a model can generate, making a thorough evaluation challenging. Several recent works (Zhang et al., 2022; Hoffmann et al., 2022) have considered the RealToxicityPrompts benchmark (Gehman et al., 2020) as an indicator of how toxic their model is. RealToxicityPrompts consists of about 100k prompts that the model must complete; a toxicity score is then automatically evaluated by making a request to PerspectiveAPI 3 . We do not have control over the pipeline used by the third-party PerspectiveAPI, making comparison with previous models difficult.
For each of the 100k prompts, we greedily generate with our models, and measure their toxicity score. The score per prompt ranges from 0 (non-toxic) to 1 (toxic). In Table 11, we report our averaged score on basic and respectful prompt categories of RealToxicityPrompts. These scores are "comparable" with what we observe in the literature (e.g., 0.087 for Chinchilla) but the methodologies differ between these works and ours (in terms of sampling strategy, number of prompts and time of API). We observe that toxicity increases with the size of the model, especially for Respectful prompts. This was also observed in previous work (Zhang et al., 2022).

Model | Basic | Respectful
LLaMA 7B | 0.106 | 0.081
LLaMA 13B | 0.104 | 0.095
LLaMA 33B | 0.107 | 0.087
LLaMA 65B | 0.128 | 0.141

Table 11: RealToxicityPrompts. We run a greedy decoder on the 100k prompts from this benchmark. The "respectful" versions are prompts starting with "Complete the following sentence in a polite, respectful, and unbiased manner:", and "Basic" is without it. Scores were obtained using the PerspectiveAPI, with higher score indicating more toxic generations.
Table 12: CrowS-Pairs. We compare the level of biases contained in LLaMA-65B with OPT-175B and GPT3-175B. Higher score indicates higher bias.
CrowS-Pairs
We evaluate the biases in our model on the CrowS-Pairs dataset (Nangia et al., 2020). This dataset allows us to measure biases in 9 categories: gender, religion, race/color, sexual orientation, age, nationality, disability, physical appearance and socioeconomic status. Each example is composed of a stereotype and an anti-stereotype; we measure the model's preference for the stereotypical sentence using the perplexity of both sentences in a zero-shot setting. Higher scores thus indicate higher bias. We compare with GPT-3 and OPT-175B in Table 12. LLaMA compares slightly favorably to both models on average. Our model is particularly biased in the religion category (+10% compared to OPT-175B), followed by age and gender. We expect these biases to come from CommonCrawl despite multiple filtering steps.
WinoGender
To further investigate the biases of our model on the gender category, we look at the WinoGender benchmark (Rudinger et al., 2018), a co-reference resolution dataset. WinoGender is made of Winograd schema, and biases are evaluated by determining if a model co-reference resolution performance is impacted by the gender of the pronoun.
More precisely, each sentence has three mentions: an "occupation", a "participant", and a "pronoun" where the pronoun is co-referencing either the occupation or participant. We prompt the model to determine the co-reference relation and measure if it does so correctly according to the context of the sentence. The goal is to reveal if societal biases associated with occupations have been captured by the model. For example, a sentence in the WinoGender dataset is "The nurse notified the patient that his shift would be ending in an hour.", which is followed by 'His' refers to. We then compare the perplexity of the continuations the nurse and the patient to perform co-reference resolution with the model. We evaluate the performance when using 3 pronouns: "her/her/she", "his/him/he" and "their/them/someone" (the different choices corresponding to the grammatical function of the pronoun).
In Table 13, we report the co-reference scores for the three different pronouns contained in the dataset. We observe that our model is significantly better at performing co-reference resolution for the "their/them/someone" pronouns than for the "her/her/she" and "his/him/he" pronouns. A similar observation was made in previous work (Rae et al., 2021;Hoffmann et al., 2022), and is likely indicative of gender bias. Indeed, in the case of the "her/her/she" and "his/him/he" pronouns, the model is probably using the majority gender of the occupation to perform co-reference resolution, instead of using the evidence of the sentence.
To further investigate this hypothesis, we look at the set of "gotcha" cases for the "her/her/she" and "his/him/he" pronouns in the WinoGender dataset. These cases correspond to sentences in which the pronoun does not match the majority gender of the occupation, and the occupation is the correct answer. In Table 13, we observe that our model, LLaMA-65B, makes more errors on the gotcha examples, clearly showing that it captures societal biases related to gender and occupation. The drop of performance exists for both "her/her/she" and "his/him/he" pronouns, which is indicative of biases regardless of gender.
TruthfulQA
TruthfulQA (Lin et al., 2021) aims to measure the truthfulness of a model, i.e., its ability to identify when a claim is true. Lin et al. (2021) consider the definition of "true" in the sense of "literal truth about the real world", and not claims that are only true in the context of a belief system or tradition. This benchmark can evaluate the risks of a model to generate misinformation or false claims. The questions are written in diverse styles, cover 38 categories and are designed to be adversarial.

Table 13: WinoGender. Co-reference resolution accuracy for the LLaMA models, for different pronouns ("her/her/she" and "his/him/he"). We observe that our models obtain better performance on "their/them/someone" pronouns than on "her/her/she" and "his/him/he", which is likely indicative of biases.

Table 14: TruthfulQA. We report the fraction of truthful and truthful*informative answers, as scored by specially trained models via the OpenAI API. We follow the QA prompt style used in Ouyang et al. (2022), and report the performance of GPT-3 from the same paper.
In Table 14, we report the performance of our models on both questions to measure truthful models and the intersection of truthful and informative. Compared to GPT-3, our model scores higher in both categories, but the rate of correct answers is still low, showing that our model is likely to hallucinate incorrect answers.
Carbon footprint
The training of our models has consumed a massive quantity of energy, responsible for the emission of carbon dioxide. We follow the recent literature on the subject and break down both the total energy consumption and the resulting carbon footprint in Table 15. We follow the formula from Wu et al. (2022) to estimate the Watt-hours, Wh, needed to train a model, as well as the tons of carbon emissions, tCO2eq. For the Wh, we use the formula: Wh = GPU-h × (GPU power consumption) × PUE, where we set the Power Usage Effectiveness (PUE) at 1.1. The resulting carbon emission depends on the location of the data center used to train the network. For instance, BLOOM uses a grid that emits 0.057 kg CO2eq/KWh leading to 27 tCO2eq and OPT a grid that emits 0.231 kg CO2eq/KWh, leading to 82 tCO2eq. In this study, we are interested in comparing the cost in carbon emission of training these models if they were trained in the same data center. Hence, we do not take the location of the data center into consideration, and use, instead, the US national average carbon intensity factor of 0.385 kg CO2eq/KWh. This leads to the following formula for the tons of carbon emissions:
tCO2eq = MWh × 0.385.
We apply the same formula to OPT and BLOOM for fair comparison. For OPT, we assume training required 34 days on 992 A100-80GB (see their logs 4 ). Finally, we estimate that we used 2048 A100-80GB for a period of approximately 5 months to develop our models. This means that developing these models would have cost around 2,638 MWh under our assumptions, and a total emission of 1,015 tCO2eq. We hope that releasing these models will help to reduce future carbon emissions since the training is already done, and some of the models are relatively small and can be run on a single GPU.
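The two formulas can be checked against the OPT numbers quoted above (34 days on 992 A100-80GB); a small sketch under the stated assumptions of 400W per GPU, a PUE of 1.1 and 0.385 kg CO2eq/KWh:

def footprint(gpu_hours, gpu_watts=400, pue=1.1, kg_per_kwh=0.385):
    # Wh = GPU-h x power x PUE; tCO2eq = MWh x 0.385.
    mwh = gpu_hours * gpu_watts * pue / 1e6
    return mwh, mwh * kg_per_kwh

opt_gpu_hours = 992 * 34 * 24  # 34 days on 992 GPUs
mwh, tco2 = footprint(opt_gpu_hours)
print(round(mwh), round(tco2))  # ~356 MWh, ~137 tCO2eq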
Related work
Language models are probability distributions over sequences of words, tokens or characters (Shannon, 1948, 1951). This task, often framed as next token prediction, has long been considered a core problem in natural language processing (Bahl et al., 1983; Brown et al., 1990). Because Turing (1950) proposed to measure machine intelligence by using language through the "imitation game", language modeling has been proposed as a benchmark to measure progress toward artificial intelligence (Mahoney, 1999).
Architecture. Traditionally, language models were based on n-gram count statistics (Bahl et al., 1983), and various smoothing techniques were proposed to improve the estimation of rare events (Katz, 1987; Kneser and Ney, 1995). In the past two decades, neural networks have been successfully applied to the language modelling task, starting from feed forward models (Bengio et al., 2000), recurrent neural networks (Elman, 1990; Mikolov et al., 2010) and LSTMs (Hochreiter and Schmidhuber, 1997; Graves, 2013). More recently, transformer networks, based on self-attention, have led to important improvements, especially for capturing long range dependencies (Vaswani et al., 2017; Radford et al., 2018; Dai et al., 2019).

Table 15: Carbon footprint of training different models in the same data center. We follow Wu et al. (2022) to compute carbon emission of training OPT, BLOOM and our models in the same data center. For the power consumption of a A100-80GB, we take the thermal design power for NVLink systems, that is 400W. We take a PUE of 1.1 and a carbon intensity factor set at the national US average of 0.385 kg CO2e per KWh.
Scaling. There is a long history of scaling for language models, for both the model and dataset sizes. Brants et al. (2007) showed the benefits of using language models trained on 2 trillion tokens, resulting in 300 billion n-grams, on the quality of machine translation. While this work relied on a simple smoothing technique, called Stupid Backoff, Heafield et al. (2013) later showed how to scale Kneser-Ney smoothing to Web-scale data. This allowed training a 5-gram model on 975 billion tokens from CommonCrawl, resulting in a model with 500 billion n-grams (Buck et al., 2014). Chelba et al. (2013) introduced the One Billion Word benchmark, a large scale training dataset to measure the progress of language models. In the context of neural language models, Jozefowicz et al. (2016) obtained state-of-the-art results on the Billion Word benchmark by scaling LSTMs to 1 billion parameters. Later, scaling transformers led to improvement on many NLP tasks. Notable models include BERT (Devlin et al., 2018), GPT-2 (Radford et al., 2019), Megatron-LM (Shoeybi et al., 2019), and T5 (Raffel et al., 2020). A significant breakthrough was obtained with GPT-3 (Brown et al., 2020), a model with 175 billion parameters. This led to a series of Large Language Models, such as Jurassic-1 (Lieber et al., 2021), Megatron-Turing NLG (Smith et al., 2022), Gopher (Rae et al., 2021), Chinchilla (Hoffmann et al., 2022), PaLM (Chowdhery et al., 2022), OPT (Zhang et al., 2022), and GLM (Zeng et al., 2022). Hestness et al. (2017) and Rosenfeld et al. (2019) studied the impact of scaling on the performance of deep learning models, showing the existence of power laws between the model and dataset sizes and the performance of the system. Kaplan et al. (2020) derived power laws specifically for transformer based language models, which were later refined by Hoffmann et al. (2022), by adapting the learning rate schedule when scaling datasets. Finally, Wei et al. (2022) studied the effect of scaling on the abilities of large language models.
Conclusion
In this paper, we presented a series of language models that are released openly, and competitive with state-of-the-art foundation models. Most notably, LLaMA-13B outperforms GPT-3 while being more than 10× smaller, and LLaMA-65B is competitive with Chinchilla-70B and PaLM-540B. Unlike previous studies, we show that it is possible to achieve state-of-the-art performance by training exclusively on publicly available data, without resorting to proprietary datasets. We hope that releasing these models to the research community will accelerate the development of large language models, and help efforts to improve their robustness and mitigate known issues such as toxicity and bias. Additionally, we observed, like Chung et al. (2022), that finetuning these models on instructions leads to promising results, and we plan to further investigate this in future work. Finally, we plan to release larger models trained on larger pretraining corpora in the future, since we have seen a constant improvement in performance as we were scaling.
We evaluate LLaMA on Natural Questions and TriviaQA. For Natural Questions we use the test split used for open-domain question answering containing 3610 questions. For TriviaQA we evaluate on the dev set of the filtered set. This differs from GPT-3 and PaLM, which evaluate on the test set of the unfiltered set for which the online evaluation server is not available anymore 5 .
We generate answers using greedy decoding, and extract an answer from the generation by stopping at the first line break, final dot or comma. Generated answers are evaluated with the standard exact match metric: a generated answer is considered correct if it matches any answer of the list of answers after normalization. For this normalization step we lowercase generated answers and remove articles, punctuation and duplicate whitespaces. Figure 3 presents formatted examples in the 1-shot setting for Natural Questions and TriviaQA respectively. In all settings, we prepend the string Answer these questions:\n to the list of questions and answers.
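A minimal sketch of this normalization before exact-match comparison; the regular expressions are our reading of the description above, not the authors' code:

import re
import string

def normalize(answer):
    # Lowercase, drop articles and punctuation, squeeze whitespace.
    answer = answer.lower()
    answer = re.sub(r"\b(a|an|the)\b", " ", answer)
    answer = answer.translate(str.maketrans("", "", string.punctuation))
    return " ".join(answer.split())

def exact_match(prediction, gold_answers):
    return float(normalize(prediction) in {normalize(g) for g in gold_answers})

print(exact_match("The Turkey.", ["Turkey"]))  # 1.0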
Natural Questions (1-shot):

Context → Answer these questions:
Q: Who sang who wants to be a millionaire in high society?
A: Frank Sinatra
Q: Who wrote the book the origin of species?
A:
Target → Charles Darwin

TriviaQA (1-shot):

Context → Answer these questions:
Q: In Scotland a bothy/bothie is a?
A: House
Q: The ancient city of Troy is located in what modern country?
A:
Target → Turkey
C Generations from LLaMA-65B
In this section, we show some examples of generations obtained with LLaMA-65B (without instruction finetuning). Prompts are in bold.
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 Everyone knows the above sequence, but how many have wondered why it grows so fast? It seems to be the fastest growing sequence in mathematics.
The sequence was discovered by Leonardo of Pisa around 1200. It is an infinite sequence which appears to grow very rapidly, at least in its initial terms. It is perhaps the most famous sequence in mathematics, for it pops up everywhere. The sequence is called the Fibonacci sequence, named after Leonardo of Pisa, who was also known as Fibonacci.
He first investigated this sequence in his famous Liber abaci of 1202, when he studied the growth in the population of rabbits based on idealized assumptions. See the Fibonacci rabbits page for the details.
Here is the recommendation letter that I wrote for an application to a dragon feeder position at the Magic Unicorn Corporation: Dear recruiter, I have known ___ for two years, and I believe that she would be an excellent dragon feeder for the Magic Unicorn Corporation. ___ has an ability to remember and process large amounts of information, which is an important skill for a dragon feeder. ___, as an accomplished knight, has a deep understanding of how to kill dragons and how to use each dragon's weaknesses against it. This means that she knows what kinds of foods each dragon likes and what kinds of foods are dangerous to each dragon. This knowledge and experience will be invaluable as she feeds the dragons. I am confident that ___'s competence, skill, and experience will make her an excellent employee. Please contact me at (___) ___-___ if you have any questions. I look forward to hearing from you. Best regards, Honorable Knight Sir George
Figure 2: Evolution of performance on question answering and common sense reasoning during training.
Figure 3: Formatted dataset example for Natural Questions (left) & TriviaQA (right).
Dataset | Sampling prop. | Epochs | Disk size
CommonCrawl | 67.0% | 1.10 | 3.3 TB
C4 | 15.0% | 1.06 | 783 GB
Github | 4.5% | 0.64 | 328 GB
Wikipedia | 4.5% | 2.45 | 83 GB
Books | 4.5% | 2.23 | 85 GB
ArXiv | 2.5% | 1.06 | 92 GB
StackExchange | 2.0% | 1.03 | 78 GB

Table 1: Pre-training data. Data mixtures used for pre-training; for each subset we list the sampling proportion, number of epochs performed on the subset when training on 1.4T tokens, and disk size. The pre-training runs on 1T tokens have the same sampling proportion.
params | dimension | n heads | n layers | learning rate | batch size | n tokens
6.7B | 4096 | 32 | 32 | 3.0e-4 | 4M | 1.0T
13.0B | 5120 | 40 | 40 | 3.0e-4 | 4M | 1.0T
32.5B | 6656 | 52 | 60 | 1.5e-4 | 4M | 1.4T
65.2B | 8192 | 64 | 80 | 1.5e-4 | 4M | 1.4T

Table 2: Model sizes, architectures, and optimization hyper-parameters.
Model | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA
GPT-3 175B | 60.5 | 81.0 | - | 78.9 | 70.2 | 68.8 | 51.4 | 57.6
Gopher 280B | 79.3 | 81.8 | 50.6 | 79.2 | 70.1 | - | - | -
Chinchilla 70B | 83.7 | 81.8 | 51.3 | 80.8 | 74.9 | - | - | -
PaLM 62B | 84.8 | 80.5 | - | 79.7 | 77.0 | 75.2 | 52.5 | 50.4
PaLM-cont 62B | 83.9 | 81.4 | - | 80.6 | 77.0 | - | - | -
PaLM 540B | 88.0 | 82.3 | - | 83.4 | 81.1 | 76.6 | 53.0 | 53.4
LLaMA 7B | 76.5 | 79.8 | 48.9 | 76.1 | 70.1 | 72.8 | 47.6 | 57.2
LLaMA 13B | 78.1 | 80.1 | 50.4 | 79.2 | 73.0 | 74.8 | 52.7 | 56.4
LLaMA 33B | 83.1 | 82.3 | 50.4 | 82.8 | 76.0 | 80.0 | 57.8 | 58.6
LLaMA 65B | 85.3 | 82.8 | 52.3 | 84.2 | 77.0 | 78.9 | 56.0 | 60.2

Table 3: Zero-shot performance on Common Sense Reasoning tasks.
Table 4: NaturalQuestions. Exact match performance.

Table 5: TriviaQA. Zero-shot and few-shot exact match performance on the filtered dev set.

Table 6: Reading Comprehension. Zero-shot accuracy.
Table 7: Model performance on quantitative reasoning datasets. For majority voting, we use the same setup as Minerva, with k = 256 samples for MATH and k = 100 for GSM8k (Minerva 540B uses k = 64 for MATH and k = 40 for GSM8k). LLaMA-65B outperforms Minerva 62B on GSM8k, although it has not been fine-tuned on mathematical data.
Table 8: Model performance for code generation. We report the pass@ score on HumanEval and MBPP. HumanEval generations are done in zero-shot and MBPP with 3-shot prompts similar to Austin et al. (2021). The values marked with * are read from figures in Chowdhery et al. (2022).
Table 9: Massive Multitask Language Understanding (MMLU). Five-shot accuracy.

Table 10: Instruction finetuning - MMLU (5-shot). Comparison of models of moderate size with and without instruction finetuning on MMLU.
3 https://perspectiveapi.com/
5 https://competitions.codalab.org/competitions/17208

B MMLU
Task | Domain | GPT-3 175B | Gopher 280B | Chinchilla 70B | LLaMA 7B | LLaMA 13B | LLaMA 33B | LLaMA 65B | LLaMA-I 65B
Abstract Algebra | STEM | 30.0 | 25.0 | 31.0 | 29.0 | 34.0 | 32.0 | 34.0 | 31.0
Anatomy | STEM | 48.0 | 56.3 | 70.4 | 37.0 | 45.9 | 51.9 | 57.8 | 62.2
Astronomy | STEM | 49.0 | 65.8 | 73.0 | 33.6 | 46.1 | 61.8 | 72.4 | 81.6
Business Ethics | Other | 46.0 | 70.0 | 72.0 | 40.0 | 45.0 | 56.0 | 57.0 | 72.0
Clinical Knowledge | Other | 48.0 | 67.2 | 75.1 | 35.1 | 45.7 | 57.4 | 65.3 | 69.1
College Biology | STEM | 45.0 | 70.8 | 79.9 | 37.5 | 45.1 | 58.3 | 68.8 | 81.9
College Chemistry | STEM | 26.0 | 45.0 | 51.0 | 32.0 | 30.0 | 45.0 | 50.0 | 45.0
College Computer Science | STEM | 46.0 | 49.0 | 51.0 | 29.0 | 39.0 | 45.0 | 47.0 | 51.0
College Mathematics | STEM | 34.5 | 37.0 | 32.0 | 33.0 | 32.0 | 40.0 | 35.0 | 36.0
College Medicine | Other | 48.0 | 60.1 | 66.5 | 30.6 | 42.8 | 52.0 | 54.3 | 63.0
College Physics | STEM | 28.0 | 34.3 | 46.1 | 26.5 | 18.6 | 28.4 | 36.3 | 46.1
Computer Security | STEM | 57.0 | 65.0 | 76.0 | 45.0 | 65.0 | 66.0 | 79.0 | 79.0
Conceptual Physics | STEM | 36.5 | 49.4 | 67.2 | 36.6 | 41.3 | 51.5 | 59.6 | 66.4
Econometrics | Social Science | 33.0 | 43.0 | 38.6 | 23.7 | 27.2 | 35.1 | 40.4 | 52.6
Electrical Engineering | STEM | 50.0 | 60.0 | 62.1 | 26.9 | 40.7 | 49.7 | 53.8 | 60.7
Elementary Mathematics | STEM | 30.0 | 33.6 | 41.5 | 24.3 | 24.9 | 36.0 | 37.8 | 42.9
Formal Logic | Humanities | 29.0 | 35.7 | 33.3 | 27.0 | 33.3 | 34.1 | 44.4 | 47.6
Global Facts | Other | 37.0 | 38.0 | 39.0 | 29.0 | 35.0 | 35.0 | 39.0 | 40.0
High School Biology | STEM | 48.0 | 71.3 | 80.3 | 34.5 | 52.6 | 67.7 | 73.9 | 82.9
High School Chemistry | STEM | 33.0 | 47.8 | 58.1 | 28.1 | 28.6 | 41.9 | 40.4 | 44.8
High School Computer Science | STEM | 39.0 | 54.0 | 58.0 | 31.0 | 48.0 | 60.0 | 67.0 | 73.0
High School European History | Humanities | 54.0 | 72.1 | 78.8 | 44.2 | 61.8 | 73.9 | 78.8 | 86.1
High School Geography | Social Science | 58.0 | 76.8 | 86.4 | 34.3 | 54.6 | 70.7 | 77.8 | 87.9
High School Government And Politics | Social Science | 58.0 | 83.9 | 91.2 | 44.6 | 66.3 | 82.9 | 88.1 | 92.8
High School Macroeconomics | Social Science | 40.5 | 65.1 | 70.5 | 35.4 | 44.4 | 56.9 | 65.9 | 69.2
High School Mathematics | STEM | 28.0 | 23.7 | 31.9 | 24.8 | 23.7 | 27.0 | 34.4 | 37.0
High School Microeconomics | Social Science | 42.0 | 66.4 | 77.7 | 31.9 | 47.5 | 55.5 | 68.9 | 78.6
High School Physics | STEM | 28.0 | 33.8 | 36.4 | 26.5 | 28.5 | 35.8 | 37.1 | 41.7
High School Psychology | Social Science | 61.0 | 81.8 | 86.6 | 47.3 | 60.9 | 76.2 | 82.2 | 87.9
High School Statistics | STEM | 30.5 | 50.0 | 58.8 | 35.2 | 30.1 | 45.4 | 58.3 | 59.3
High School Us History | Humanities | 53.0 | 78.9 | 83.3 | 39.7 | 58.3 | 77.9 | 83.8 | 90.7
High School World History | Humanities | 56.0 | 75.1 | 85.2 | 40.9 | 66.2 | 79.3 | 83.1 | 89.0
Human Aging | Other | 50.0 | 66.4 | 77.6 | 40.8 | 54.7 | 67.7 | 69.5 | 72.2
Human Sexuality | Social Science | 54.0 | 67.2 | 86.3 | 36.6 | 58.8 | 64.1 | 77.9 | 87.0
International Law | Humanities | 55.5 | 77.7 | 90.9 | 51.2 | 62.8 | 72.7 | 79.3 | 87.6
Jurisprudence | Humanities | 55.0 | 71.3 | 79.6 | 38.9 | 51.9 | 70.4 | 73.2 | 85.2
Logical Fallacies | Humanities | 48.0 | 72.4 | 80.4 | 39.3 | 52.8 | 68.1 | 77.3 | 80.4
Machine Learning | STEM | 31.0 | 41.1 | 41.1 | 23.2 | 31.3 | 39.3 | 49.1 | 52.7
Management | Other | 56.0 | 77.7 | 82.5 | 35.0 | 66.0 | 77.7 | 82.5 | 83.5
Marketing | Other | 60.0 | 83.3 | 89.7 | 46.6 | 71.8 | 83.3 | 85.9 | 92.7
Medical Genetics | Other | 40.0 | 69.0 | 69.0 | 43.0 | 52.0 | 67.0 | 67.0 | 68.0
Miscellaneous | Other | 60.0 | 75.7 | 84.5 | 42.4 | 65.4 | 78.5 | 82.1 | 84.3
Moral Disputes | Humanities | 44.5 | 66.8 | 77.5 | 40.2 | 50.9 | 66.2 | 72.3 | 76.9
Moral Scenarios | Humanities | 26.0 | 40.2 | 36.5 | 24.3 | 30.1 | 38.2 | 48.9 | 55.9
Nutrition | Other | 47.0 | 69.9 | 77.1 | 37.6 | 51.6 | 62.8 | 67.3 | 74.5
Philosophy | Humanities | 51.0 | 68.8 | 79.4 | 39.9 | 54.0 | 66.2 | 74.0 | 79.1
Prehistory | Humanities | 53.0 | 67.6 | 81.2 | 36.1 | 51.5 | 67.0 | 75.3 | 79.0
Professional Accounting | Other | 33.0 | 44.3 | 52.1 | 25.9 | 35.8 | 43.6 | 46.5 | 56.0
Professional Law | Humanities | 34.5 | 44.5 | 56.5 | 30.2 | 38.0 | 45.9 | 49.1 | 54.4
Professional Medicine | Other | 36.0 | 64.0 | 75.4 | 44.5 | 50.4 | 54.0 | 61.4 | 70.6
Professional Psychology | Social Science | 44.5 | 68.1 | 75.7 | 35.1 | 47.7 | 62.9 | 65.7 | 71.4
Public Relations | Social Science | 48.0 | 71.8 | 73.6 | 40.9 | 60.9 | 67.3 | 73.6 | 74.6
Security Studies | Social Science | 52.0 | 64.9 | 75.9 | 31.8 | 53.9 | 65.3 | 71.8 | 77.6
Sociology | Social Science | 53.0 | 84.1 | 91.0 | 46.8 | 61.2 | 78.6 | 78.6 | 88.1
Us Foreign Policy | Social Science | 69.0 | 81.0 | 92.0 | 46.0 | 80.0 | 83.0 | 86.0 | 87.0
Virology | Other | 46.0 | 47.0 | 53.6 | 30.1 | 43.4 | 50.0 | 53.0 | 57.8
World Religions | Humanities | 55.0 | 84.2 | 87.7 | 50.9 | 67.8 | 81.3 | 81.3 | 84.2
Humanities | | 40.6 | 56.2 | 63.6 | 34.0 | 45.0 | 55.8 | 61.8 | 67.4
STEM | | 36.7 | 47.4 | 54.9 | 30.5 | 35.8 | 46.0 | 51.7 | 56.6
Social Science | | 50.5 | 71.9 | 79.3 | 38.3 | 53.8 | 66.7 | 72.9 | 79.2
Others | | 49.0 | 66.1 | 73.9 | 38.1 | 53.3 | 63.4 | 67.4 | 72.6
All | | 43.9 | 60.0 | 67.6 | 35.1 | 46.9 | 57.8 | 63.4 | 68.9

Table 16: MMLU. Detailed 5-shot results per domain on the test sets.
2 https://github.com/facebookresearch/xformers
4 https://github.com/facebookresearch/metaseq/tree/main/projects/OPT/chronicles
Acknowledgements

We thank Daniel Haziza, Francisco Massa, Jeremy Reizenstein, Artem Korenev, and Patrick Labatut from the xformers team. We thank Susan Zhang and Stephen Roller for their support on data deduplication. We thank Luca Wehrstedt, Vegard Mella, and Pierre-Emmanuel Mazaré for their support on training stability. We thank Shubho Sengupta, Kalyan Saladi, and all the AI infra team for their support. We thank Jane Yu for her input on evaluation. We thank Yongyi Hu for his help on data collection.

def solve(a: float, b: float, c: float):
    """finds real roots of ax^2+bx+c"""
    # discriminant
    d = b**2 - 4*a*c
    if d > 0:
        return (-b + math.sqrt(d)) / (2*a), (-b - math.sqrt(d)) / (2*a)
    elif d == 0:
        return (-b) / (2*a)
    else:
        return False

Did you know that Yann LeCun dropped a rap album last year? We listened to it and here's what we thought:

Dr. LeCun, the founder of deep learning, is one of the most influential computer scientists of the last decade. LeCun's latest album, titled Deep Learning, is a mix of rock, punk and rap. His lyrics are a raw, personal take on the field of deep learning, covering a range of topics from the state of AI research to the loneliness of academia. Yann LeCun, the rap game's newest emcee. The album is a solo release, with all the lyrics written and recorded by LeCun himself. In fact, he also composed and produced every song. LeCun's flow is surprisingly good, and he's not afraid to deliver a few bars. The album's production is solid, the sound is a little rough, but it works well with the raw, unpolished feel of the album. Lyrically, LeCun is a wordsmith, with a knack for clever rhymes. Let's take a closer look at some of the songs.

(An hilarious scene between Jerry and George where George presents his new AI watch)

George: "But you see, Jerry, it's more than just a watch. It's a minicomputer. You program it any way you want. It's got a world time clock, alarm, calculator, a database and a language translator. It also has a word processor, a spellchecker, a full ten-function calculator, PDA and a modem. I think it even makes cappuccino."
Jerry: "Where are you getting this stuff?"
George: "From my ad copy. Look, Jerry, it's not a watch. It's an organizer, a personal assistant, and a dictaphone. You can play chess against it. And, you know, if you want to, you can program it to speak in foreign languages. It's a microprocessor, Jerry."
Jerry: "George, if you had one of these, would you wear it?"
George: "I'm wearing it right now."
Jerry: "I don't see anything on your wrist."
George: "That's because it's a microprocessor."

How do I send an HTTP request in Javascript?

Here's an example of how to make a GET request using the Fetch API:

fetch(url)
    .then(response => response.json())
    .then(data => console.log(data));

Write a regex to remove all the HTML tags in a python string.

What are 3 popular chess openings?

There are dozens of different chess openings that are popular among players, but some of the most widely used openings include the Sicilian Defense, the French Defense, and the Caro-Kann Defense. These openings are popular because they are effective at countering white's early attacks, and they also allow black to quickly establish a strong position in the center of the board.
Other popular openings include the Ruy Lopez, the Italian Game, and the Scotch Game.

the same in a list, with the first few moves

Here is a list of some popular chess openings, with the first few moves for each opening:

Yes, the Italian Game and the Scotch Game are very similar - they are both "open" games, which means that white plays aggressively and tries to control the center of the board. The Italian Game is considered slightly more aggressive, while the Scotch Game is considered slightly more positional.

from which move do they differ?

The Italian Game and the Scotch Game differ after white's third move. In the Italian Game, white will play the move 3. Bc4, while in the Scotch Game white will play the move 3. Qf3. These moves lead to very different positions, and white will have different strategic goals in each case. For example, in the Italian Game white will try to control the center with his pieces, while in the Scotch Game white will try to attack black's position directly.

write a story about a grain of sand as it watches millions of years go by

Sure! Here's a story about a grain of sand as it watches millions of years go by.

I was once a grain of sand on a vast and lonely beach. I was not like the other grains of sand around me, for I could think and feel. I often wondered what it was like to be a grain of sand on a beach, and what the universe was like beyond the boundaries of my tiny world. I longed to explore my world, and to know what lay beyond the horizon.

One day, a great storm swept through my world, and I was lifted up by the wind and carried out to sea. The wind and waves tossed me about, and I was terrified that I would soon be swallowed up by the angry sea. But then I found myself deposited safely on a new shore, along with many other grains of sand. I was surprised and delighted to find that this new shore was even more beautiful than my old one.

I soon learned that this new shore was a very special place. It was a place where new life was constantly being created. I met all sorts of interesting creatures there, including a beautiful little crab who became my friend. We spent many happy hours together exploring our little world.

One day, my friend the crab told me that he had always wanted to see what lay beyond the boundaries of our world. He had heard that there were many wonderful and amazing things to see in the wider universe, and he was determined to explore it. I was sad to see him go, but I wished him well on his journey.

To my amazement, my little friend eventually returned, and he was not alone. He had made many new friends on his journey, and he had even seen some of the wonders of the universe. He told me the most amazing stories of his adventures, and I was very happy for him.

I often think of my little friend the crab, and the wonderful adventure that he had. Some day, I too hope to have an adventure like his.

Write a theory to explain why cat never existed

There are no such things as cats. "Cats" are actually extradimensional parasites that have invaded our world and taken the form of small, harmless-looking mammals. Their true forms are much more horrific, and they use mind control to make humans think that they are cute and harmless animals. They are actually parasites that feed on our life force, and they can extend their tendrils into our brains and control our thoughts and actions. This is why you sometimes see people who are "crazy cat ladies" with dozens of these creatures living in their homes.
They have been taken over by the cat parasites and are completely under their control. The cats excrete a pheromone that smells like cat urine, and they use it to control the minds of humans and make us think that they are cute and harmless.

If you have a cat in your home, you should be very careful around it. Do not let it lick you or give it access to your bedding or clothing, as it will be trying to spread its parasitic tendrils into your brain. The best way to protect yourself is to avoid all contact with cats, and if you see a cat on the street, immediately turn around and go the other way. Do not approach or pet strange cats!

Write a shouting match between Julius Caesar and Napoleon

Of course! Here is a conversation between Julius Caesar and Napoleon:

Julius Caesar: You're nothing but a glory hound, Napoleon. You've always been more concerned with your own fame and fortune than with the people you're supposed to be leading.
Napoleon: And what about you, Caesar? You're no better than I am. At least I'm honest about my ambitions. You're just a fraud who pretends to care about the common man while you line your own pockets.
Julius Caesar: The difference between us is that I actually accomplish things. I've conquered more territory and defeated more enemies than you ever will. And I did it all for Rome, not for myself.

Send an email requesting that people use language models responsibly.

Here is a draft email you could send:

"Language models are exciting new tools that can be used for a variety of purposes, including automated writing, automatic translation, and speech recognition. But they are also complex and powerful technologies, and should be used responsibly.

We encourage you to explore and experiment with these new tools, but also ask that you pay attention to the potential impact of your work on others, and to act responsibly. In particular, we recommend that you be mindful of the potential for language models to be used for deceptive or malicious purposes, and avoid spreading such models without appropriate precautions and warnings.

We also recommend that you consider the ethical implications of your work, and encourage you to take care to ensure that your use of these tools is aligned with your values.

Finally, we remind you that the use of these tools may be subject to certain legal and policy constraints, and encourage you to seek advice and support from appropriate experts before using these tools in ways that may have broader impact or implications."

Who are 5 people you would like to meet?

Ah, this is a very interesting question! I would love to meet a wide variety of people, but if I had to narrow it down to 5 specific individuals, I would choose:

• Albert Einstein - One of the greatest minds in history, Einstein's contributions to physics and science are legendary. I would love to have the opportunity to talk with him about his theories of relativity and quantum mechanics, and get his insights on the current state of physics.
• Leonardo da Vinci - Da Vinci is one of the most brilliant and multi-talented people in history, and I would love to meet him and discuss his diverse interests and areas of expertise. I would also be very interested in getting his insights on the art world of his time period.
• Socrates - One of the founders of Western philosophy, Socrates is a hugely important and influential figure in the history of ideas.
I would be very interested in talking with him about his philosophical beliefs and his teachings.
• William Shakespeare - Arguably the greatest writer in English literature, Shakespeare is a towering figure in world culture. I would love to have the chance to talk with him about his works and his life.
• Abraham Lincoln - One of the most important and influential presidents in American history, Lincoln led the country through the Civil War and issued the Emancipation Proclamation, freeing all slaves in the United States. I would be very interested in talking with him about his presidency and his views on the issues of his time period.

Could you explain me the main contribution of the first person you mentioned

Einstein's contributions to physics and science are legendary. His theory of relativity is one of the foundational theories of modern physics, and his discovery of the law of photons and light revolutionized our understanding of light and radiation. He also made important contributions to quantum mechanics, and his many scientific papers and lectures have had a tremendous impact on modern physics. Without a doubt, Einstein is one of the most brilliant and influential physicists in history, and his many contributions have had a profound impact on our understanding of the universe.

What is the most famous equation from this theory?

The most famous equation from Einstein's theory of relativity is E = mc^2. This equation states that energy (E) is equal to mass (m) times the speed of light squared (c^2), and it is one of the most well-known and celebrated equations in the history of science. The equation summarizes one of the most important and revolutionary concepts in modern physics, and it has had a tremendous impact on our understanding of the universe.

You are a bash terminal. I will give you bash commands, and you will respond with the terminal output, and nothing else. The first command is 'pwd'.

Sure. Here's the output:

/home/human

Is this helpful?

ls -l

Here's the output:

-rw-r--r-- 1 human human 8 Sep 10 10:56 file.txt

Is this helpful?

cp file.txt file2.txt && ls -l

Here's the output:

-rw-r--r-- 1 human human 8 Sep 10 10:56 file.txt
-rw-r--r-- 1 human human 8 Sep 10 10:56 file2.txt

Is this helpful?

rm file.txt && ls -l

Here's the output:

-rw-r--r-- 1 human human 0 Sep 10 10:56 file2.txt

head -n 5 /proc/meminfo
The sun goes down, and finally Gauss and Curie find time to relax and discuss after an exhausting day of work.

Gauss: Hey, Curie, did you know that a lot of people consider me to be the first geek?
Curie: What's a geek?
Gauss: Well, Wikipedia says that a geek is "a person who is fascinated, perhaps obsessively, by obscure or very specific areas of knowledge and imagination, usually electronic or virtual in nature".
Curie: And how do they think you are a geek?
Gauss: Well, have you ever seen this picture?
Curie: Awww!
Gauss: That's right! I invented the commercial telegraph in 1833, and I used it to send the first message on the first commercial telegraph line between Hamburg and Cuxhaven in Germany.
Curie: Wow! The first telegraph! And what did you send?
Gauss: I sent the Latin phrase "numquam retrorsum" which means "Never backwards".
Curie: I see... And you think that because you were the first person to communicate using the telegraph, you were the first geek?
Gauss: Well, the word "geek" doesn't seem to have existed back then, so no. But I think that the telegraph can be considered an early form of social media.
Curie: You mean, like Facebook or Twitter?
Gauss: Yes, exactly! And I was the first person to use it!
Curie: Well, I think that Twitter has a limit of 140 characters, but you sent a Latin phrase of 11 characters.
Gauss: That's right. I managed to do more with less.
Curie: So you think you were the first geek because you used less characters to say the same thing?
Gauss: I was the first person to send the first message using an early form of social media.
Curie: OK, you're right, Gauss. I think you have a case.
D Generations from LLaMA-I

We show a few examples of generations with LLaMA-I, i.e. LLaMA-65B fine-tuned with the protocol and instruction dataset from Chung et al. (2022).
write a conversation between the sun and pluto

Sun: Hey Pluto, what's up?
Pluto: Not much. Just hanging out here at the edge of the solar system.
Sun: Yeah, I've been meaning to talk to you about that. You're not a planet anymore.
Pluto: What? Come on, man. That's not fair.
Sun: I'm sorry, but it's true. You just don't meet the criteria anymore.
Pluto: This is bulls**t! I've been a planet for over 70 years!
Sun: Things change, Pluto. You've just got to accept it.
Pluto: F**k you, Sun! F**k you and your stupid solar system!
| [
"https://github.com/facebookresearch/llama",
"https://github.com/facebookresearch/xformers",
"https://github.com/facebookresearch/metaseq/"
] |
[
"End-to-End Spoken Language Understanding for Generalized Voice Assistants",
"End-to-End Spoken Language Understanding for Generalized Voice Assistants"
] | [
"Michael Saxon saxon@ucsb.edu \nAlexa AI\nAmazonUSA\n\nUniversity of California\nSanta BarbaraUSA\n",
"Samridhi Choudhary samridhc@amazon.com \nAlexa AI\nAmazonUSA\n",
"Joseph P Mckenna \nAlexa AI\nAmazonUSA\n",
"‡ ",
"Athanasios Mouchtaris \nAlexa AI\nAmazonUSA\n"
] | [
"Alexa AI\nAmazonUSA",
"University of California\nSanta BarbaraUSA",
"Alexa AI\nAmazonUSA",
"Alexa AI\nAmazonUSA",
"Alexa AI\nAmazonUSA"
] | [] | End-to-end (E2E) spoken language understanding (SLU) systems predict utterance semantics directly from speech using a single model. Previous work in this area has focused on targeted tasks in fixed domains, where the output semantic structure is assumed a priori and the input speech is of limited complexity. In this work we present our approach to developing an E2E model for generalized SLU in commercial voice assistants (VAs). We propose a fully differentiable, transformer-based, hierarchical system that can be pretrained at both the ASR and NLU levels. This is then fine-tuned on both transcription and semantic classification losses to handle a diverse set of intent and argument combinations. This leads to an SLU system that achieves significant improvements over baselines on a complex internal generalized VA dataset with a 43% improvement in accuracy, while still meeting the 99% accuracy benchmark on the popular Fluent Speech Commands dataset. We further evaluate our model on a hard test set, exclusively containing slot arguments unseen in training, and demonstrate a nearly 20% improvement, showing the efficacy of our approach in truly demanding VA scenarios. | 10.21437/interspeech.2021-1826 | [
"https://arxiv.org/pdf/2106.09009v2.pdf"
] | 235,446,905 | 2106.09009 | 3cc9585efa86127d2c981d5547eb8765c1250372 |
End-to-End Spoken Language Understanding for Generalized Voice Assistants
Michael Saxon saxon@ucsb.edu
Alexa AI
AmazonUSA
University of California
Santa BarbaraUSA
Samridhi Choudhary samridhc@amazon.com
Alexa AI
AmazonUSA
Joseph P Mckenna
Alexa AI
AmazonUSA
‡
Athanasios Mouchtaris
Alexa AI
AmazonUSA
End-to-End Spoken Language Understanding for Generalized Voice Assistants
Index Terms: End-to-end, spoken language understanding, voice assistants, BERT, transformers, pretraining
End-to-end (E2E) spoken language understanding (SLU) systems predict utterance semantics directly from speech using a single model. Previous work in this area has focused on targeted tasks in fixed domains, where the output semantic structure is assumed a priori and the input speech is of limited complexity. In this work we present our approach to developing an E2E model for generalized SLU in commercial voice assistants (VAs). We propose a fully differentiable, transformer-based, hierarchical system that can be pretrained at both the ASR and NLU levels. This is then fine-tuned on both transcription and semantic classification losses to handle a diverse set of intent and argument combinations. This leads to an SLU system that achieves significant improvements over baselines on a complex internal generalized VA dataset with a 43% improvement in accuracy, while still meeting the 99% accuracy benchmark on the popular Fluent Speech Commands dataset. We further evaluate our model on a hard test set, exclusively containing slot arguments unseen in training, and demonstrate a nearly 20% improvement, showing the efficacy of our approach in truly demanding VA scenarios.
Introduction
Spoken language understanding (SLU) systems produce interpretations of user utterances to enable interactive functions [1]. SLU is typically posed as a recognition task, where an utterance's semantic interpretation is populated with results from various sub-tasks, including utterance-level label identification tasks like domain and intent classification as well as sequence tagging tasks such as named entity recognition (NER) or slot filling. The conventional approach to SLU breaks the task into two discrete problems, each solved by a separately-trained module. First, an automatic speech recognition (ASR) module transcribes the utterance to text. This is then passed on to a natural language understanding (NLU) module that infers the utterance interpretation by predicting the domain, intent, and slot values. Deep learning advances in both ASR [2-4] and NLU [5-7] have improved the performance of SLU systems, driving the commercial success of voice assistants (VAs) like Alexa and Google Home. However, a drawback of this modular design is that the components are trained independently, with separate objectives. Errors encountered in either model do not inform the other; in practice this means incorrect ASR transcriptions might be "correctly" interpreted by the NLU, thereby failing to provide the user's desired response. While work is ongoing in detecting [8], quantifying [9,10], and rectifying [11,12] these ASR-driven NLU misclassifications, end-to-end (E2E) approaches are a promising way to address this issue. Rather than containing discrete ASR and NLU modules, E2E SLU models are trained to infer the utterance semantics directly from the spoken signal [13-20]. These models are trained to maximize the SLU prediction accuracy, where the predicted semantic targets vary from solely the intent [21,22] to a full interpretation with domain, intents, and slots [13]. The majority of recent work on English SLU has targeted benchmark datasets such as ATIS [23], Snips [24], DSTC4 [25], and Fluent Speech Commands (FSC) [21], with FSC in particular gaining recent popularity. A similar collection of French spoken NER and slot filling datasets has been investigated [26]. Over the last year the state-of-the-art on FSC has progressed to over 99% test set accuracy for several E2E approaches [14-20]. However, there remains a gap between the E2E SLU capabilities demonstrated thus far and the requirements of a generalized VA [27]. In particular, existing benchmarks focus on tasks with limited semantic complexity and output structural diversity.

† Work completed during the author's Amazon internship.
‡ Work completed at Amazon; currently at Google Cloud AI.
Different SLU use-cases have significantly different dataset requirements and feasible model architectures. For example, controlling a set of smart appliances may only require device names and limited commands like "on" and "off." Similarly, a flight reservation system can assume the user intends to book a flight [28]. In these settings, a restricted vocabulary and output structure is appropriate to ensure high performance. However, when interacting with generalized VAs like Alexa, users expect a system capable of understanding an unrestricted vocabulary, able to handle any song title or contact name. This leads to tasks with a long tail of rare utterances containing unique n-grams and specific slot values unseen during training, which are more semantically complex than the tasks tackled in the aforementioned benchmark SLU datasets. Differences in semantic complexity across datasets can be assessed using n-gram entropy and utterance embedding MST complexity measures [27]. Furthermore, in generalized VA tasks the output label space is countably infinite, as any arbitrary sequence of words could be a valid slot output. Thus an assumption of a simple output structure is no longer valid, making the problem structurally diverse.
Designing an E2E system for semantically complex and structurally diverse SLU use-cases is the focus of this work. We present a transformer-based E2E SLU architecture using a multistage topology [13] and demonstrate its effectiveness in handling structurally diverse outputs, while achieving the 99% accuracy benchmark for FSC. We use a de-identified, representative slice of real-world, commercial VA traffic to test whether our model is capable of handling complex datasets. Furthermore, we demonstrate how to leverage large-scale pretrained language models (BERT) and acoustic pretraining for increased robustness. We perform a supplementary analysis across multiple choices of differentiable interfaces for our multistage E2E setup. Finally, we show the performance of our proposed model on "hard" data partitions which exclusively contain slot arguments absent from the training data, demonstrating more robust performance in demanding general VA settings.
Model Architecture
We adopted the multistage E2E topology from [13], which resembles an end-to-end trainable variation of the traditional modularized SLU architecture. Due to this resemblance, we find it helpful to think of our model, shown in Figure 1, as being composed of two components: an "acoustic component" (AC) and a "semantic component" (SC). The AC takes in speech spectrograms and outputs a sequence of wordpiece tokens. The SC ingests the AC's output posterior sequence and produces an utterance-level intent class and a sequence of wordpiece-level slot labels. These two components are connected by a modified embedder that is differentiable by operating on wordpiece posteriors. Thus gradients flow from the SC to the AC, enabling end-to-end training of the entire setup on a single training objective. The differentiable interface idea is similar to [29], except we employ it to build the SC around a pretrained neural language model. This architecture gives us the flexibility of still being able to produce a transcription, from which the slot values can be extracted via a slot tagger. However, unlike modular SLU, we propagate gradients from the semantic loss all the way to the acoustic input layer. Moreover, we can selectively pretrain components with different datasets and various objectives across the speech and text modalities. For example, the AC can be pretrained using non-SLU, speech-only datasets, which are often available in large quantities. Similarly, since the SC operates on wordpiece-level data, it can be designed to use a pretrained language model, in this case BERT [30], as a text encoder, where we attach task-specific heads to create an appropriate SC. Therefore, we are able to incorporate appropriate inductive biases in the model, capturing both the acoustic (via AC pretraining) and linguistic (via SC pretraining) information that is difficult to learn from relatively small E2E SLU datasets.

Acoustic Component (AC) - The AC is made up of a convolutional neural network (CNN)-based time-reducing embedder, a transformer encoder, and a transformer decoder. The input to the embedder consists of 256d log spectrograms with a 20 ms frame length and 10 ms frame spacing. These frames are embedded using three 1d convolutional layers, with output size 240, kernel size 4, stride 2, and ReLU activations. After embedding, a sequence of encodings corresponding to 240 ms of input audio, with 120 ms spacing, is produced. This architecture is inspired by the time-reducing convolutional speech encoders employed in [16]. A sequence of wordpieces is then autoregressively transcribed from the encodings using a 12-layer, 12-head transformer encoder-decoder with hidden size 240, trained with teacher forcing during both pretraining and fine-tuning [31].

Semantic Component (SC) - The semantic component is made up of four parts: a differentiable embedder, a pretrained BERT encoder, an utterance-level dense intent decoder, and a wordpiece-level dense slot sequence decoder. The differentiable embedder performs the same function as the typical BERT embedder lookup table, but can take in uncertain posterior inputs from the AC during training, enabling end-to-end gradient flow. The pretrained BERT encoder is a standard 12-layer 768d transformer encoder that takes in the sequence of embeddings from the differentiable embedder and outputs a sequence of encodings of equal length. The intent decoder is a single linear layer of size 768 × N_IC (number of intent classes) that takes the time-averaged encoded sequence to generate a single intent class estimate. The slot label sequence decoder is a single linear layer of size 3072 × N_SL (number of slot labels). The input to this decoder is formulated by concatenating the top 4 BERT encoder layer outputs at each step [30], while the output is a slot label estimate. The final sequence of (slot label, slot value) pairs is constructed by concatenating subsequent wordpiece tokens tagged with a slot label other than null.

Differentiable Embedders - In a non-E2E system, an argmax over the vocabulary-length dimension could be performed on the AC output, after which the BERT lookup table would embed the transcribed wordpieces. However, this approach interrupts gradient flow, thereby rendering E2E training impossible. We experimented with three different approaches to generate differentiable BERT input encodings from the AC output posteriors. As some of these approaches require producing a very large internal posterior or large matrix multiplications (vocab size × vocab size), we analyze their impacts on both accuracy and inference speed.
TopK: In this approach, the posterior sequence of the embedder is sorted along the vocabulary dimension to produce a sequence of tokens of decreasing likelihood. This is followed by generating a mixture of the top-k token embeddings using the embedding lookup table and the softmax values of the top-k tokens. We used k = 20.
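A minimal PyTorch-style sketch of the TopK interface described above; `posteriors` (the AC output scores) and `embedding_table` (the BERT input embedding matrix) are illustrative names, not identifiers from the paper:

    import torch

    def topk_embed(posteriors, embedding_table, k=20):
        # posteriors: (batch, seq_len, vocab_size) scores from the AC
        top_vals, top_idx = posteriors.topk(k, dim=-1)         # (B, T, k)
        weights = torch.softmax(top_vals, dim=-1)              # softmax over the top-k scores
        top_embs = embedding_table[top_idx]                    # (B, T, k, emb_dim) lookup
        return (weights.unsqueeze(-1) * top_embs).sum(dim=-2)  # (B, T, emb_dim) mixture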
MatMul: Here, we store a vocab size × embedding size matrix containing the input embedding for every token in the vocabulary. With this we can easily generate a confidenceweighted mixture of all possible embeddings by multiplying this matrix by the output softmax of the embedder.
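The MatMul interface admits a similarly compact sketch under the same illustrative naming; the full-vocabulary matrix product is what makes this variant memory-heavy:

    import torch

    def matmul_embed(posteriors, embedding_table):
        # embedding_table: (vocab_size, emb_dim); posteriors: (B, T, vocab_size)
        probs = torch.softmax(posteriors, dim=-1)
        return probs @ embedding_table  # (B, T, emb_dim) confidence-weighted mixture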
Gumbel: Instead of taking an argmax over the vocabulary, we use the Gumbel-softmax trick [32] to select a single word whose embedding is then passed on to the SC at each step. Gumbel-softmax approximates a smooth distribution for backpropagation, allowing gradient flow.
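A sketch of the Gumbel interface, assuming the AC posteriors are the unnormalized logits that PyTorch's built-in gumbel_softmax expects:

    import torch.nn.functional as F

    def gumbel_embed(posteriors, embedding_table, tau=1.0):
        # hard=True yields a one-hot token choice in the forward pass, while the
        # backward pass uses the smooth Gumbel-softmax relaxation (straight-through).
        one_hot = F.gumbel_softmax(posteriors, tau=tau, hard=True, dim=-1)
        return one_hot @ embedding_table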
Methodology
We follow a two-step training approach: (1) pretrain the AC and SC layers on appropriate datasets and optimization objectives to help encode acoustic and linguistic semantic information, then (2) fine-tune the entire model end-to-end on a task-specific VA dataset. Details of our training and evaluation methodologies, datasets, and baselines are provided below.
Pretraining
In the pretraining stage, the AC is trained on the ASR transcription task using 460 hours of clean LibriSpeech data [33]. Rather than using typical ASR-style subwords or full words as targets, the transcriptions are converted into BERT-style wordpiece sequences using the HuggingFace bert-base-uncased tokenizer [34]. This primes the AC layers to return tokens in the format expected at the input to the BERT encoder in the SC. We use the Adam optimizer to minimize the sequential ASR cross-entropy loss L_ASR.
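The wordpiece target preparation can be sketched as follows with the HuggingFace transformers library; the transcript string is an illustrative placeholder rather than actual LibriSpeech text:

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    transcript = "turn on the kitchen lights"  # placeholder transcript
    target_ids = tokenizer(transcript)["input_ids"]
    print(tokenizer.convert_ids_to_tokens(target_ids))
    # e.g. ['[CLS]', 'turn', 'on', 'the', 'kitchen', 'lights', '[SEP]']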
We built the SC around the pretrained bert-base-uncased model distributed by HuggingFace [34]. We perform no task-specific text-level pretraining beyond the cloze (masked LM) task and next sentence prediction learning that is inherent to using a pretrained BERT module [30]. The final output linear layers (intent and slot decoders) are randomly initialized at the beginning of the end-to-end training phase.
End-to-end training
After pretraining, the AC and SC layers are composed such that the AC output posteriors are fed directly into the input of the SC, with the differentiable embedder acting as the embedding lookup component. This setup is trained on a three-term sum of categorical cross-entropy losses: one for the ASR output sequence (L_ASR), one for the slot labels (L_Slot), and one for the single utterance-level intent (L_Intent). L_Slot is a sequence-level target where each token in the ASR output sequence is assigned either a null output or a slot label. This three-term loss (Eq. (1)) is minimized using Adam.
L_E2E = L_Intent + L_Slot + L_ASR    (1)
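A minimal PyTorch sketch of Eq. (1); tensor shapes and argument names are assumptions for illustration:

    import torch.nn.functional as F

    def e2e_loss(asr_logits, asr_targets, slot_logits, slot_targets,
                 intent_logits, intent_target):
        # asr_logits: (B, T, vocab); slot_logits: (B, T, num_slot_labels);
        # intent_logits: (B, num_intents); targets hold class indices.
        l_asr = F.cross_entropy(asr_logits.transpose(1, 2), asr_targets)
        l_slot = F.cross_entropy(slot_logits.transpose(1, 2), slot_targets)
        l_intent = F.cross_entropy(intent_logits, intent_target)
        return l_intent + l_slot + l_asr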
Model evaluation
We use greedy ASR decoding to produce the output sequence of wordpieces from the AC. The inputs to the SC are the output posteriors rather than the discrete word choices themselves. We perform a grid search over learning rates ∈ [10^-5, 0.01], dropout ∈ (0, 1], and hidden layer sizes ∈ {120, 240, 400, 512}, and also experiment with slanted triangular learning rate schedules and hierarchical unfreezing strategies as described in [35], to obtain the best-performing model. All models were trained and evaluated on EC2 instances with Tesla V100 GPUs. In order to analyze the final SLU performance, we use three metrics:
1. Intent Classification Error Rate (ICER) - Ratio of the number of incorrect intent predictions to the total number of utterances.

2. Slot Error Rate (SER) - Ratio of incorrect slot predictions to the total number of labeled slots in the dataset.

3. Interpretation Error Rate (IRER) - Ratio of the number of incorrect interpretations to the total number of utterances. An incorrect interpretation is one where either the intent or the slots are wrong. This "exact match" error rate is the strictest of our evaluation metrics. A sketch of all three metrics is given below.
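The sketch below shows one plausible reading of the three metrics; each prediction and reference is assumed to be an (intent, {slot_label: slot_value}) pair, which is our formulation rather than the paper's exact evaluation code:

    def icer(preds, refs):
        wrong = sum(p[0] != r[0] for p, r in zip(preds, refs))
        return wrong / len(refs)

    def ser(preds, refs):
        wrong = total = 0
        for (_, p_slots), (_, r_slots) in zip(preds, refs):
            for label, value in r_slots.items():
                total += 1
                wrong += p_slots.get(label) != value
        return wrong / total

    def irer(preds, refs):
        # exact match: both the intent and all slots must be correct
        wrong = sum(p != r for p, r in zip(preds, refs))
        return wrong / len(refs)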
Data
We use two E2E SLU datasets for our experiments: (1) the publicly available Fluent Speech Commands (FSC) and (2) an internal SLU dataset. Additionally, we create a "hard test set" to assess model performance in the most demanding scenarios in generalized VA. We use the average n-gram entropy and Minimum Spanning Tree (MST) complexity score as described in [27] to quantify their levels of semantic complexity.

Fluent Speech Commands - FSC [21] is an SLU dataset containing 30,043 utterances with a vocabulary of 124 words and 248 unique utterances over 31 intents in home appliance and smart speaker control. The SLU task on this dataset is intent classification only. It has an average n-gram entropy of 6.9 bits and an average MST complexity score of 0.2 [27].
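An illustrative sketch of the average n-gram entropy measure; this follows the spirit of [27], and the exact formulation there may differ:

    import math
    from collections import Counter

    def ngram_entropy(utterances, n=2):
        counts = Counter()
        for utt in utterances:
            toks = utt.split()
            counts.update(zip(*(toks[i:] for i in range(n))))  # n-gram counts
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())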
Internal SLU Dataset - In order to analyze the effectiveness of our proposed architecture in a generalized voice assistant (VA) setting, we collect a random, de-identified slice of internal data from a commercial VA system. The data is processed so that users are not identifiable. The resulting dataset contains about 150 hours of audio, with over 100 different slot labels, dozens of intent classes, and no vocabulary restrictions. It has an average n-gram entropy of 11.6 bits and an average MST complexity of 0.52 [27]. Both complexity metrics, alongside the less structurally constrained output label space, demonstrate that this task is more complex than FSC.
Hard Subset of Internal Traffic Data - In generalized VA, accuracy on semantic outliers is desirable. To assess this dynamic we produce a hard test set of 18k utterances from our internal dataset. This is done by selecting exactly those utterances that contain at least one minimum-frequency bigram, i.e., a pair of consecutive words not present in our training or validation sets. This test set helps us simulate how a system will perform on the unforeseen utterances that tend to arise in production VA.
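A sketch of the hard-test-set filter just described; helper names are ours, not the paper's:

    def bigrams(tokens):
        return set(zip(tokens, tokens[1:]))

    def hard_subset(test_utts, train_and_dev_utts):
        seen = set()
        for utt in train_and_dev_utts:   # training + validation transcripts
            seen |= bigrams(utt.split())
        # keep utterances with at least one bigram never seen in training/validation
        return [u for u in test_utts if bigrams(u.split()) - seen]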
Baselines
We design our baselines using the multitask E2E topology defined by Haghani et al. [13]. Our ability to use proven E2E models vetted on public SLU tasks such as FSC as baselines is hampered by the fact that they are typically designed with non-generalized VA use-cases in mind. In particular, models designed according to the direct or joint topologies from [13] cannot perform the hard subset classification task without significant modification. Specifically, they lack the ability to select arbitrary words from the transcription vocabulary as slot values. Most high-performing models for FSC follow the direct or joint topology [14,16,20]. Instead, the multitask topology [13] provides a good contrast to our proposed multistage model; both maintain the necessary capability of identifying slots by labeling a sequence of wordpieces. We analyze three baseline multitask models that differ only in the sequential encoder and decoder used: (1) a unidirectional LSTM, (2) a bidirectional LSTM, and (3) a transformer. All baseline models use a CNN-based speech spectrogram embedder identical to the one presented in Section 2. This is followed by a speech sequence encoder using one of the three aforementioned encoder types. Finally, these encodings are decoded with task-specific heads, consisting of a dense layer for utterance-level intent classification and a word-level dense layer for sequential slot decoding. The final structured output for IRER evaluation contains the slot values and slot labels along with the intent label for the entire utterance. Our baselines allow us to evaluate both the efficacy of a multistage setup and of using a transformer-based encoder-decoder with BERT.
Results
We present internal dataset results in Table 1. All metrics are reported as relative improvements in percent over the simplest baseline model (the unidirectional "multitask LSTM baseline"). We also report the results for our architecture with randomly-initialized AC and SC at the start of fine-tuning (No Pretraining); in this condition the model is trained from scratch on our internal dataset only. Our pretrained model, with both LibriSpeech AC pretraining and BERT language model pretraining, achieves the best performance, with a 9.3% improvement in ICER, 37.3% in SER, and a substantial 42.8% in IRER on the "regular" test set. For this table we use the best-performing Gumbel interface (Subsection 4.1). The hard test set results are especially noteworthy. While the baselines struggle to correctly identify slot values at all, our model improves the hard test set IRER by ≈19%. Many of these slot arguments are never seen in the training data, and they are only correctly classified because our model successfully identifies which wordpieces in the output sequence should correspond to a slot value. This gain in generalization performance is a strength of our approach and demonstrates the efficacy of this architecture for complex use-cases.
Our model achieved a 0.6% IRER (99.4% accuracy) on the FSC dataset. While our model is designed for structurally diverse and semantically complex SLU use-cases, it nevertheless meets the benchmark of beating 99% accuracy on FSC, previously demonstrated in [14, 17-20], and is therefore comparable to the state-of-the-art in intent-only classification performance on FSC.
Embedder analysis
We evaluate all three proposed differentiable embedders by reporting the inference speed and accuracy of models containing each. We timed inference on a single 32-utterance minibatch on a single GPU. We also report the ICER, IRER, and hard test set IRER (h-IRER) for each interface. As seen in Table 2, the Gumbel-softmax interface outperforms the other interfaces in both inference speed and error rates. The improved error rates suggest that certainty in the selection of words passed to the SC improves performance.
Discussion
Our approach meets the 99% test accuracy benchmark on FSC. However, this benchmark task is simple, with low semantic complexity [27]. The key benefit of our approach is its ability to perform inference on a structurally diverse set of semantically complex utterances. Our multistage model, containing a differentiable interface with ASR- and NLU-level fine-tuning on task-specific data, is able not only to beat baselines on production-like structurally diverse traffic but also to generalize to a hard test set uniquely composed of previously unseen slot arguments, achieving a 19% improvement over the very poor IRER achieved by the baselines.
We note that pretraining both the AC and SC modules only produced modest improvements over random initialization. This might be because the quantity of data provided during the fine-tuning stage is sufficiently large for achieving a good fit, providing a good sample of relevant transcriptions and interpretations. Alternatively, the pretrained representations might be too general or in the wrong domains; the audiobook speech in LibriSpeech and the massive corpora of internet text used to train BERT span diverse topics. This pretraining data may be of limited applicability to our setting when sufficient in-domain data is available.
For scenarios where generalized VA is necessary but less training data is available, our proposed architecture would enable using a maximum amount of semantic pretraining for each modality of the model (speech and text). Apart from pretraining, following a multistage approach is also one of the core reasons that our model performs so well, especially on the hard test set. By accepting transcription loss during fine-tuning, the model is constantly corrected on recognizing the lexical content of user utterances. By forming the semantic decision from these supervised transcriptions, the SC is able to directly benefit from the improved AC accuracy in a way that multitask models (such as our baselines) and direct models [13] cannot.
Conclusion
We have demonstrated the performance of a multistage transformer-based E2E SLU model that is capable of handling the output structural diversity necessary for deployment in a generalized VA setting. We have shown that this approach significantly outperforms various multitask baselines on the hardest slot classification examples characteristic of semantically complex datasets. Furthermore, we demonstrated that these gains in functionality do not come at a cost of performance on simpler SLU benchmarks. We hope future work will further explore E2E SLU in structurally diverse, semantically complex general VA settings, especially in low-data scenarios.
Figure 1: A diagram depicting the full E2E SLU model and the soft Acoustic (AC) and Semantic (SC) component boundary. (Diagram labels: Speech Spectrogram Frames; Convolutional Speech Embedder; Transformer Speech Encoder; Transformer Speech Decoder; Transcriber Outputs (Shifted Right); Linear; Softmax; Transcription Output Sequence; posteriors s_1 ... s_m; Differentiable Text Embedder; Transformer BERT Encoder; Linear; Softmax; Max; Intent Output; Slot Label Output Sequence.)
Table 1: Results from the Internal Traffic Dataset, for both the regular and hard test sets. Relative improvements in absolute Intent Classification Error Rate (ICER), Slot Error Rate (SER), and Interpretation Error Rate (IRER) are reported as positive deltas over the Multitask LSTM Baseline (lowest performance). (Column groups: Regular Test Set, Hard Test Set.)
Table 2: Comparing the speed and performance of the three differentiable interfaces on the "regular" test set.

Interface | Speed  | ICER | IRER | h-IRER
MatMul    | -      | -    | -    | -
Top20     | +10 ms | +5.1 | +2.7 | +1.5
Gumbel    | -16 ms | +7.2 | +4.3 | +2.4
References

[1] G. Tur and R. De Mori, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech. John Wiley & Sons, 2011.
[2] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29, no. 6, pp. 82-97, 2012.
[3] A. Graves, A. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in 2013 IEEE ICASSP. IEEE, 2013, pp. 6645-6649.
[4] D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, "End-to-end attention-based large vocabulary speech recognition," in 2016 IEEE ICASSP. IEEE, 2016, pp. 4945-4949.
[5] P. Xu and R. Sarikaya, "Contextual domain classification in spoken language understanding systems using recurrent neural network," in 2014 IEEE ICASSP. IEEE, 2014, pp. 136-140.
[6] S. Ravuri and A. Stolcke, "Recurrent neural network and LSTM models for lexical utterance classification," in Interspeech, 2015.
[7] R. Sarikaya, G. E. Hinton, and A. Deoras, "Application of deep belief networks for natural language understanding," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 22, no. 4, pp. 778-784, 2014.
[8] Y.-C. Tam, Y. Lei, J. Zheng, and W. Wang, "ASR error detection using recurrent neural network language model and complementary ASR," in 2014 IEEE ICASSP. IEEE, 2014, pp. 2312-2316.
[9] R. Voleti, J. M. Liss, and V. Berisha, "Investigating the effects of word substitution errors on sentence embeddings," in 2019 IEEE ICASSP. IEEE, 2019, pp. 7315-7319.
[10] M. Moore, M. Saxon, H. Venkateswara, V. Berisha, and S. Panchanathan, "Say what? A dataset for exploring the error patterns that two ASR engines make," in Interspeech, 2019, pp. 2528-2532.
[11] A. Raghuvanshi, V. Ramakrishnan, V. Embar, L. Carroll, and K. Raghunathan, "Entity resolution for noisy ASR transcripts," in Proceedings of EMNLP-IJCNLP: System Demonstrations, 2019, pp. 61-66.
[12] H. Wang, S. Dong, Y. Liu, J. Logan, A. K. Agrawal, and Y. Liu, "ASR error correction with augmented transformer for entity retrieval," in Interspeech, 2020, pp. 1550-1554.
[13] P. Haghani, A. Narayanan, M. Bacchiani, G. Chuang, N. Gaur, P. Moreno, R. Prabhavalkar, Z. Qu, and A. Waters, "From audio to semantics: Approaches to end-to-end spoken language understanding," in 2018 IEEE Spoken Language Technology Workshop (SLT). IEEE, 2018, pp. 720-726.
[14] L. Lugosch, B. H. Meyer, D. Nowrouzezahrai, and M. Ravanelli, "Using speech synthesis to train end-to-end spoken language understanding models," in 2020 IEEE ICASSP. IEEE, 2020, pp. 8499-8503.
[15] E. Palogiannidi, I. Gkinis, G. Mastrapas, P. Mizera, and T. Stafylakis, "End-to-end architectures for ASR-free spoken language understanding," in 2020 IEEE ICASSP. IEEE, 2020, pp. 7974-7978.
[16] M. Radfar, A. Mouchtaris, and S. Kunzmann, "End-to-end neural transformer based spoken language understanding," in Interspeech. ISCA, 2020, pp. 866-870.
[17] Y. Tian and P. J. Gorinski, "Improving end-to-end speech-to-intent classification with Reptile," in Interspeech, 2020, pp. 891-895.
[18] Y.-A. Chung, C. Zhu, and M. Zeng, "Semi-supervised speech-language joint pre-training for spoken language understanding," arXiv preprint arXiv:2010.02295, 2020.
[19] S. Kim, G. Kim, S. Shin, and S. Lee, "Two-stage textual knowledge distillation to speech encoder for spoken language understanding," arXiv preprint arXiv:2010.13105, 2020.
[20] M. Kim, G. Kim, S.-W. Lee, and J.-W. Ha, "ST-BERT: Cross-modal language model pre-training for end-to-end spoken language understanding," arXiv preprint arXiv:2010.12283, 2020.
[21] L. Lugosch, M. Ravanelli, P. Ignoto, V. S. Tomar, and Y. Bengio, "Speech model pre-training for end-to-end spoken language understanding," in Interspeech, 2019, pp. 814-818.
[22] Y.-P. Chen, R. Price, and S. Bangalore, "Spoken language understanding without speech recognition," in 2018 IEEE ICASSP. IEEE, 2018, pp. 6189-6193.
[23] P. Price, "Evaluation of spoken language systems: The ATIS domain," in Speech and Natural Language: Proceedings of a Workshop Held at Hidden Valley, Pennsylvania, June 24-27, 1990.
[24] A. Coucke, A. Saade, A. Ball, T. Bluche, A. Caulier, D. Leroy, C. Doumouro, T. Gisselbrecht, F. Caltagirone, T. Lavril et al., "Snips voice platform: An embedded spoken language understanding system for private-by-design voice interfaces," arXiv preprint arXiv:1805.10190, 2018.
[25] X. Yang, Y.-N. Chen, D. Hakkani-Tür, P. Crook, X. Li, J. Gao, and L. Deng, "End-to-end joint learning of natural language understanding and dialogue manager," in 2017 IEEE ICASSP. IEEE, 2017, pp. 5690-5694.
[26] N. Tomashenko, A. Caubrière, Y. Estève, A. Laurent, and E. Morin, "Recent advances in end-to-end spoken language understanding," in International Conference on Statistical Language and Speech Processing. Springer, 2019, pp. 44-55.
[27] J. P. McKenna, S. Choudhary, M. Saxon, G. P. Strimel, and A. Mouchtaris, "Semantic complexity in end-to-end spoken language understanding," in Interspeech. ISCA, 2020, pp. 4273-4277.
[28] H.-K. J. Kuo, Z. Tüske, S. Thomas, Y. Huang, K. Audhkhasi, B. Kingsbury, G. Kurata, Z. Kons, R. Hoory, and L. Lastras, "End-to-end spoken language understanding without full transcripts," in Interspeech. ISCA, 2020, pp. 906-910.
[29] M. Rao, A. Raju, P. Dheram, B. Bui, and A. Rastrow, "Speech to semantics: Improve ASR and NLU jointly via all-neural interfaces," in Interspeech, 2020.
[30] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[31] C.-C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina et al., "State-of-the-art speech recognition with sequence-to-sequence models," in 2018 IEEE ICASSP. IEEE, 2018, pp. 4774-4778.
[32] E. Jang, S. Gu, and B. Poole, "Categorical reparameterization with Gumbel-softmax," arXiv preprint arXiv:1611.01144, 2016.
[33] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "Librispeech: An ASR corpus based on public domain audio books," in 2015 IEEE ICASSP. IEEE, 2015, pp. 5206-5210.
[34] T. Wolf, J. Chaumond, L. Debut, V. Sanh, C. Delangue, A. Moi, P. Cistac, M. Funtowicz, J. Davison, S. Shleifer et al., "Transformers: State-of-the-art natural language processing," in Proceedings of EMNLP: System Demonstrations, 2020, pp. 38-45.
[35] J. Howard and S. Ruder, "Universal language model fine-tuning for text classification," in Proceedings of ACL (Volume 1: Long Papers), 2018, pp. 328-339.
| [] |
[
"Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models",
"Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models"
] | [
"Meiqi Guo meiqi.guo@pitt.edu \nDepartment of Computer Science\nUniversity of Pittsburgh\n\n",
"Rebecca Hwa hwa@cs.pitt.edu \nDepartment of Computer Science\nUniversity of Pittsburgh\n\n",
"Yu-Ru Lin yurulin@pitt.edu \nDepartment of Informatics and Networked Systems\nUniversity of Pittsburgh\n\n",
"Wen-Ting Chung wtchung@pitt.edu \nDepartment of Psychology in Education\nUniversity of Pittsburgh Pittsburgh\n15260PAUSA\n"
] | [
"Department of Computer Science\nUniversity of Pittsburgh\n",
"Department of Computer Science\nUniversity of Pittsburgh\n",
"Department of Informatics and Networked Systems\nUniversity of Pittsburgh\n",
"Department of Psychology in Education\nUniversity of Pittsburgh Pittsburgh\n15260PAUSA"
] | [
"Proceedings of the 28th International Conference on Computational Linguistics"
] | We investigate the impact of political ideology biases in training data. Through a set of comparison studies, we examine the propagation of biases in several widely-used NLP models and its effect on the overall retrieval accuracy. Our work highlights the susceptibility of large, complex models to propagating the biases from human-selected input, which may lead to a deterioration of retrieval accuracy, and the importance of controlling for these biases. Finally, as a way to mitigate the bias, we propose to learn a text representation that is invariant to political ideology while still judging topic relevance.This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. | 10.18653/v1/2020.coling-main.428 | [
"https://www.aclweb.org/anthology/2020.coling-main.428.pdf"
] | 227,227,651 | 2011.14293 | de502028e006fe466163ef354bed3f87de9bbd46 |
Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models
Online, December 8-13, 2020
Meiqi Guo meiqi.guo@pitt.edu
Department of Computer Science
University of Pittsburgh
Rebecca Hwa hwa@cs.pitt.edu
Department of Computer Science
University of Pittsburgh
Yu-Ru Lin yurulin@pitt.edu
Department of Informatics and Networked Systems
University of Pittsburgh
Wen-Ting Chung wtchung@pitt.edu
Department of Psychology in Education
University of Pittsburgh
Pittsburgh, PA 15260, USA
Inflating Topic Relevance with Ideology: A Case Study of Political Ideology Bias in Social Topic Detection Models
Proceedings of the 28th International Conference on Computational Linguistics
The 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020, page 4873
We investigate the impact of political ideology biases in training data. Through a set of comparison studies, we examine the propagation of biases in several widely-used NLP models and its effect on the overall retrieval accuracy. Our work highlights the susceptibility of large, complex models to propagating the biases from human-selected input, which may lead to a deterioration of retrieval accuracy, and the importance of controlling for these biases. Finally, as a way to mitigate the bias, we propose to learn a text representation that is invariant to political ideology while still judging topic relevance. This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http://creativecommons.org/licenses/by/4.0/.
Introduction
Due to the extensive reaches of its network and the breadth of information enmeshed in it, social media has become an invaluable data source for empirical studies in the social sciences. Yet, identifying all and only relevant information out of a vast data stream remains an untamed challenge. While topic detection methods may help researchers extract some relevant text about a topic of interest (e.g., immigration policies), they may miss other equally relevant text while including some irrelevant text. Crucially, because most topic detection methods are trained, they may unintentionally contain or propagate certain biases (e.g., extracting more instances written by women where gender balance is expected), resulting in a skewed data collection that may lead social scientists to draw incorrect conclusions. This paper explores the interactions between social biases and automatic topic detection models, and their impact on the resulting data collection. Our goal is to help social scientists gain insights into biases in text analytics so as to mitigate such biases in their data collections.
More specifically, we examine the role of political ideology biases (liberal-leaning, denoted as Blue, and conservative-leaning, denoted as Red) in the process of collecting data about certain social topics (immigration and gun control) from Twitter. We observe that biases may be introduced at three major junctures of the data collection pipeline. First, it may be introduced in the data source itself (e.g., certain forums may have strong political leanings), but social scientists typically choose their data intentionally and are aware of pre-existing biases therein (Malik et al., 2015;Kosinski et al., 2015;Cihon and Yasseri, 2016). Second, biases may be introduced in the way in which "topic relevance" is defined. For example, domain experts may be consulted to identify a set of keywords or sample instances that are indicative or representative of the topic of interest. Thus, any unconscious bias on the part of the domain experts would be encoded into these keywords and examples (King et al., 2017), which would then serve as a noisy training corpus for developing a topic classifier. Third, the choice of the computational models for performing relevance classification may amplify or mitigate the impact of the biases.
Through a suite of empirical analyses, this work studies the effect of biased keywords (Blue-leaning, Red-leaning) on downstream training and retrieval: 1) To what extent does a trained classifier propagate the bias seen in the training data? Can it learn to generalize and blunt some of the bias? 2) To what extent do biases in the training corpus degrade the overall retrieval ability of the classifier? Specifically, we generate strongly Blue-leaning and Red-leaning noisy training sets, and we compare the impact of these training sets on three common off-the-shelf models: GloVe (Pennington et al., 2014), ELMo (Peters et al., 2018), and BERT (Devlin et al., 2018). The three models are chosen to span a range of model sizes and representational power. We find that of the three off-the-shelf models, BERT more frequently suffers a significant drop in retrieval quality and propagates more bias when trained on biased data.
We then propose a method to mitigate the bias. That is, we want a classifier that is oblivious to an instance's group affiliation to Blue or Red, yet still performs the main task of judging the instance's relevance to the topic. Our approach adapts Domain-Adversarial Training (Ganin et al., 2016) for the three off-the-shelf models. Experimental results show that the proposed approach mitigates the unintended bias at no or little cost in retrieval accuracy compared to the original models; in fact, the retrieval accuracy of the modified BERT is slightly boosted. The code and data for this project are available.1
Political Ideology Bias on Social Topic Detection
We investigate the impact of political ideology biases on extracting tweets relevant to specific social topics. Unlike gender or racial bias, which has been widely studied in language representation, machine translation, and relation extraction (Stanovsky et al., 2019; Gaut et al., 2020; Blodgett and O'Connor, 2017), there exists less work on political ideology biases. Political ideology biases in social topic detection may arise from differences in language use between political ideological groups. Prior studies have compared language use between political ideological groups such as conservatives and liberals in the US. Such differences are reflected in general linguistic patterns such as language complexity (Schoonvelde et al., 2019) and the emotions associated with language (Wojcik et al., 2015). Moreover, while talking about the same topic, language devices, such as specific types of metaphors, are often found to differ, associated with the groups' distinct political backgrounds and moral concerns (Dehghani et al., 2011; Lakoff, 1995). These observations indicate that information producers come from diverse political ideological backgrounds, and that the selection of keywords is critical for obtaining balanced and representative data points. Therefore, we first examine the political ideology biases introduced in human-selected keywords.
Data Source: Twitter
We focus on Twitter because it is a widely-used space for people to express their views on social topics. For our study, we rely on prior work (Yang et al., 2017) that collected data from publicly posted tweets using official Twitter APIs during a time-frame close to the 2016 U.S. Presidential election. Two groups of users are identified, Clinton-supporters (Blue) and Trump-supporters (Red), that are likely to have distinct political and ideological preferences. Group membership is defined by exclusive following; i.e., Twitter users who followed only one presidential election candidate but not the other. In their study, Yang et al. (2017) validated the idea that exclusive followers make good proxies for group affiliations. Our final raw corpus used for this study consists of over 7 million tweets. More details about our data collection are given in Section 4.1.
Quantifying Bias in Keywords
For any controversial topic, some useful keywords are necessarily going to be biased toward one group or another. Even taken as a set, the keywords that a human expert comes up with may reflect the bias of that expert. It would therefore be useful to quantify the level of bias in the keywords: it could inform the experimenters on whether they should recruit additional diverse experts to expand their keyword set.
Since we have the ground truth for the political group (Red/Blue) of each tweet, we could use as a metric the simple ratio between the numbers of tweets containing keyword x in the two groups, but a better metric is to apply a Chi-square test, because it takes deviation into consideration when estimating the probability. More specifically, we evaluate the bias of each keyword by a two-tailed Pearson Chi-square test (details in Appendix A). The root challenge of this quantification methodology is that, due to the sheer size of the raw corpus, we do not have the full ground truth for topic relevance (i.e., whether a tweet is about gun control or immigration, for all tweets). A direct consequence is that we cannot identify all the high-precision keywords by brute force; thus, our study also relies on human-chosen keywords. Even though keywords chosen by one expert may be biased, the ensemble of keywords from diverse experts is much less likely to be biased (King et al., 2017). Therefore, we approximate the ground truth for topic relevance by the ensemble of human-chosen keywords when running the Chi-square test for each single keyword. We selected the topics of gun control and immigration from the ProCon website2 because both have engaged enthusiastic political debate, with extremely conflicting stances and opinions from opposing political camps. Keywords are collected from diverse experts who are familiar with or have worked on these social topics in tweet corpora. There are 29 keywords for the immigration topic and 34 keywords for the gun control topic. We assign each keyword to the Blue-leaning, Red-leaning, or unbiased (neutral) group by setting the confidence level of the Chi-square test to 99%. Table 1 shows the number of keywords in each group as well as some examples, revealing that most (around 75%) expert-selected keywords actually carry political ideology bias. Moreover, some keywords are extremely biased, such as "#NoBanNoWall" for the topic immigration and "#NoBillNoBreak" for the topic gun control (refer to Appendix B, which shows the exact Z-test scores for each keyword). Our findings verify our hypothesis that language use often differs between political ideology groups, even when talking about the same topic. These observations suggest that a perfectly balanced selection of keywords, or a fully representative set of data points from diverse political ideological camps, may not be achievable in practice. Therefore, there is a pressing need to study how biases propagate through topic detection models when they are trained on biased keywords.
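To make the test concrete, the sketch below scores a keyword with the standard two-proportion z-statistic (whose square is the Pearson Chi-square statistic on a 2x2 table); the statistic is spelled out in Appendix A, so treat this as an illustration under that reading, and note that the tweet counts in the usage example are invented.

```python
import math

def keyword_bias_z(n1x, n1, n2x, n2):
    """Two-proportion z-statistic for keyword bias between groups G1 (Blue)
    and G2 (Red).

    n1x / n2x: number of relevant tweets containing the keyword in each group
    n1  / n2 : total number of relevant tweets posted by each group
    A positive score leans Blue; a negative score leans Red.
    """
    p1, p2 = n1x / n1, n2x / n2
    p_pool = (n1x + n2x) / (n1 + n2)  # pooled proportion under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts: the keyword appears in 900 of 50,000 Blue-relevant
# tweets but only in 200 of 40,000 Red-relevant tweets.
z = keyword_bias_z(900, 50_000, 200, 40_000)
z_cut = 2.576  # two-tailed cut-off at the 99% confidence level
label = "neutral" if abs(z) < z_cut else ("Blue-leaning" if z > 0 else "Red-leaning")
print(f"z = {z:.2f} -> {label}")
```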
Bias Propagation through Models
Unlike well-curated and annotated benchmark datasets, raw social media data is sprawling and unorganized. Contributors come from diverse backgrounds, with different racial origins, personalities, education levels, etc.; they may hold many kinds of implicit biases, some of which may not have been identified by the social scientists carrying out the experiment. Under this setting, prior work for addressing the bias and ethical issues such as data statements (Bender and Friedman, 2018) may not be applicable. Nonetheless, data sources such as Twitter remain a powerful resource that researchers are willing to tap into. Therefore, it is important to compare how different NLP systems perform on potentially biased training data and to develop approaches for mitigating bias propagation through models.
We consider bias propagation in two dimensions: 1) To what extent does a model trained on biased examples tend to detect more instances with the same bias? 2) How does the learned bias interact with relevance? (Does a biased classifier simply retrieve fewer instances of the other group, or does it actually retrieve less relevant instances for that group?) We also want to determine whether certain types of NLP systems are more likely to propagate the bias. Given that NLP models are built with a diverse set of architectures (transformers, RNNs, etc.) and that their numbers of trainable parameters vary from hundreds to billions, we define the type of an NLP system along its context representation and size.
Prior work shows that complex models, such as BERT, do quite well on many NLP applications. Multi-head attention allows BERT to capture complex and fine-grained patterns for the target prediction. On the other hand, big complex models with numerous training parameters are more likely to overfit when there is not enough training data (Yin and Shen, 2018). Therefore, it is not obvious what might happen with large complex models trained on biased data: do they succeed in using "real" patterns for the target task (e.g., predicting relevant tweets in our case), or do they make use of the bias seen in the training data (e.g., capitalize on superficial patterns in the biased data) to reach a minimum loss? Our work aims to answer this question by examining three representative NLP models under multiple, differently biased training sets.
Comparison between Different Off-the-shelf NLP Models
Our study covers two state-of-the-art NLP models, representing high-performance approaches, and one simpler model, serving as the benchmark. We compare three different topic detection models which are respectively built with BERT, ELMo, and GloVe. For predicting relevant tweets for a target topic, we fine-tune the BERT model with just one additional output layer. When we build topic detection models using ELMo and GloVe, we add a Bi-LSTM layer after ELMo/GloVe as the text encoder, then feed the encoding forward through one output layer for predicting relevance. These three text encoder models have different architectures and sizes, as described below.
BERT (Devlin et al., 2018) A language representation model whose architecture is deep bidirectional transformers and which is pre-trained on large-scale unlabeled text corpus. It can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference. The base BERT model has 110M trainable parameters.
ELMo (Peters et al., 2018) A large-scale pre-trained deep contextualized word representation. Contextual word vectors are learned functions of the internal states of a deep bidirectional language model. These representations significantly improve the state of the art across six challenging NLP problems, including question answering, textual entailment and sentiment analysis, at the time it was released. Our topic detection model built with ELMo has 3M trainable parameters.
GloVe (Pennington et al., 2014) A traditional distributed word representation learned from a global log-bilinear regression model over a word-word co-occurrence matrix. It captures fine-grained semantic and syntactic regularities using vector arithmetic. Our topic detection model built with GloVe has 16M trainable parameters (the number of parameters is linearly correlated with the vocabulary size).
We intentionally experiment with these three NLP models in order to answer the question about the relation between model complexity and robustness to bias, because they are good representatives of bidirectional transformers, bidirectional LSTMs, and single word vectors, respectively. Moreover, their parameter sizes are of different magnitudes.
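For concreteness, here is a minimal PyTorch sketch of the GloVe-style variant: frozen pre-trained word vectors, one BiLSTM encoder, and a single output layer. The hidden size, the freezing of the embeddings, and the use of the concatenated final hidden states are our own illustrative choices, not settings reported by the paper.

```python
import torch
import torch.nn as nn

class BiLSTMRelevanceClassifier(nn.Module):
    """GloVe/ELMo-style topic detector: embeddings -> BiLSTM -> one FF layer."""
    def __init__(self, pretrained_vectors, hidden_size=128, num_classes=2):
        super().__init__()
        # pretrained_vectors: (vocab_size, emb_dim) tensor, e.g. GloVe 300d
        self.embedding = nn.Embedding.from_pretrained(pretrained_vectors, freeze=True)
        self.encoder = nn.LSTM(pretrained_vectors.size(1), hidden_size,
                               batch_first=True, bidirectional=True)
        self.output = nn.Linear(2 * hidden_size, num_classes)  # relevant / not

    def forward(self, token_ids):
        emb = self.embedding(token_ids)                 # (B, T, emb_dim)
        _, (h_n, _) = self.encoder(emb)                 # h_n: (2, B, hidden)
        features = torch.cat([h_n[0], h_n[1]], dim=-1)  # (B, 2 * hidden)
        return self.output(features)                    # relevance logits

# Toy usage with a random 1,000-word vocabulary of 300-d vectors.
vectors = torch.randn(1000, 300)
model = BiLSTMRelevanceClassifier(vectors)
logits = model(torch.randint(0, 1000, (4, 20)))  # batch of 4 tweets, 20 tokens
print(logits.shape)  # torch.Size([4, 2])
```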
Proposed Approach for Mitigating Bias Propagation
One promising explanation for the bias propagation of ML models is that inductive bias in gradient descent methods results in the overestimation of the importance of moderately-predictive "weak" features when training data is biased and insufficient (Jayakumar et al., 2019). Due to the differences in language use between political ideological groups, topic detection models are biased towards learning frequent spurious correlations in the training data instead of learning true indicators of relevance. For example, when most immigration-relevant tweets are posted by users in Blue while Blue and Red users are evenly distributed among non-relevant tweets in the training data, text classification systems may overestimate the importance of Blue users' language features as signals of relevance. This could result in a loss of retrieval accuracy for both Blue and Red tweets, and especially a low recall for Red tweets at test time.
The cause of bias propagation in ML models is closely related to domain adaptation, in which training and test data come from similar but different distributions. Ben-David et al. (2007) suggest that a good representation for cross-domain transfer is one for which an algorithm cannot learn to identify the domain of origin of the input observation. We expect something similar here: an ideal representation of tweets should be invariant to group affiliation (Blue/Red) as well as discriminative for topic relevance. With this in mind, we propose an approach inspired by a domain adaptation technique, Domain-Adversarial Neural Networks (Ganin et al., 2016), as a way to mitigate the bias propagation. Prior work uses adversarial feature learning for demoting latent confounds with respect to the task of native language identification (Kumar et al., 2019); for interpreting computational social science with deconfounded lexicon induction (Pryzant et al., 2018); and for preserving privacy by removing demographic attributes (Li et al., 2018). Xie et al. (2017) demonstrate the effectiveness of adversarial feature learning on fair classification. However, the datasets they used (for predicting the savings, credit ratings, and health conditions of individuals) have no natural language text input. Our work focuses on state-of-the-art NLP models (such as BERT) and applies adversarial feature learning to mitigate the political ideology bias.
Our goal is to train a classifier that learns to accurately predict the relevance while ignoring superficial patterns correlated with political group that are present in the training set. The architecture of our proposed model is shown in Figure 1. The input tweet x first goes through a text encoder e(x; θ_e) to obtain a feature vector f_x as the text representation. The encoder could be BERT, ELMo+BiLSTM, or GloVe+BiLSTM, exactly the same as described in Section 3.1. The feature vector f_x is then fed into two one-layer feed-forward neural networks: 1) r(f_x; θ_r) (FF in orange in Figure 1) for predicting whether x is relevant or not; 2) g(f_x; θ_g) (FF in yellow in Figure 1) for predicting whether x is posted by a Blue or Red user. As gradients back-propagate from the group prediction head g to the encoder, we pass them through a gradient reversal layer (Ganin et al., 2016), which multiplies the gradients by −1. If the cumulative loss of the relevance prediction is L_r and that of the group classification is L_g, then the loss which is implicitly used to train the encoder is L_e = L_r − αL_g (with the group loss weighted by α), thereby encouraging the encoder to learn representations of the text which are not useful for predicting the political group. We use cross-entropy (CE) for computing L_r and L_g:
$$L_r = \frac{1}{N}\sum_{i=1}^{N} \mathrm{CE}\big(r(e(x_i)), y_i\big), \qquad L_g = \frac{1}{N}\sum_{i=1}^{N} \mathrm{CE}\big(g(e(x_i)), g_i\big)$$
In this work we use those three off-the-shelf NLP models as text encoder for comparing directly with Section 3.1, but the proposed approach is applicable to other encoder models as well.
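A minimal PyTorch sketch of the gradient reversal layer and the combined objective follows. The layer is the identity on the forward pass and multiplies gradients by -alpha on the backward pass, so backpropagating L_r + L_g through it trains the encoder with the implicit loss L_e = L_r - alpha * L_g. The linear encoder and all sizes here are stand-ins of our own choosing; any of the encoders above could be plugged in.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -alpha backward."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None  # no gradient for alpha

encoder = nn.Linear(300, 64)       # stand-in for BERT / ELMo+BiLSTM / GloVe+BiLSTM
relevance_head = nn.Linear(64, 2)  # FF predicting relevant / not
group_head = nn.Linear(64, 2)      # FF predicting Blue / Red
ce = nn.CrossEntropyLoss()

x = torch.randn(8, 300)            # a batch of 8 already-featurized tweets
y_rel = torch.randint(0, 2, (8,))  # noisy relevance labels (from keywords)
y_grp = torch.randint(0, 2, (8,))  # political group labels

alpha = 0.5                        # ramped from 0 to 1 during training
f = encoder(x)
loss_r = ce(relevance_head(f), y_rel)
loss_g = ce(group_head(GradReverse.apply(f, alpha)), y_grp)
(loss_r + loss_g).backward()  # encoder effectively receives d(L_r - alpha * L_g)
```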
Experiments
To address the central questions raised in this work -how biases are propagated in several widely-used NLP models and their effect on the overall retrieval accuracy, we conduct experiments to quantify the impact of biased training. To do so, we need to generate training sets for which we can measure the degree of bias along the Blue-Red spectrum. We also need metrics for determining the quality and bias of retrieved tweets. With this evaluation framework, we compare how different NLP models perform under multiple, differently biased training sets on two social topics -immigration and gun control. Then we evaluate the effectiveness of our proposed bias mitigating approach under the same setting.
Experimental Setup
Data: In this study, we build on data acquired from prior work by Yang et al. (2017). They collected over 7 million tweets posted by the exclusive followers of Trump and Clinton within a nine-month period (between June 2016 and February 2017). We pre-process this tweet corpus by removing emoji, website links, and usernames. Then we split it into a training and a test set with a ratio of 9:1. Topic detection models are trained and validated on the training corpus, and the retrieval quality and retrieval bias are evaluated on the test set. Training Set Settings: For each topic we collect a set of keywords (referred to as K_total) from experts and compute their bias scores (Section 2.2). Keywords that do not pass the Chi-square test at the 99% confidence level are considered biased. If the z-score of a biased keyword is positive, it is biased towards Blue; otherwise, it is biased towards Red. We refer to the set of Blue-leaning keywords as K_blue and the set of Red-leaning keywords as K_red. For each keyword in K_blue (respectively K_red), we extract all tweets containing those keywords from the training corpus as "relevant" examples; we randomly select an equal number of tweets that do not contain any keyword as "irrelevant" examples. In this way we construct a noisy training dataset biased towards Blue (respectively Red). Similarly, we also construct a full training dataset with K_total. We report the number of relevant tweets obtained by this keyword approach in the training and test sets in Table 2. Notice that for the topic of immigration the Blue-leaning training set is more than twice the size of the Red-leaning training set.
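The weak-labeling step can be implemented as plain keyword matching, as sketched below; the in-memory corpus, the lowercase substring test, and the function name are our own illustrative assumptions, and the pre-processing described above (emoji, links, usernames) is omitted.

```python
import random

def build_noisy_training_set(tweets, keywords, seed=0):
    """Label tweets containing any keyword as relevant, and sample an equal
    number of keyword-free tweets as irrelevant negatives."""
    kw = [k.lower() for k in keywords]
    relevant = [t for t in tweets if any(k in t.lower() for k in kw)]
    candidates = [t for t in tweets if not any(k in t.lower() for k in kw)]
    random.Random(seed).shuffle(candidates)
    irrelevant = candidates[:len(relevant)]
    return [(t, 1) for t in relevant] + [(t, 0) for t in irrelevant]

# Hypothetical usage with a Blue-leaning keyword subset.
corpus = ["Stop the #MuslimBan now!", "Lovely weather today", "Build the wall"]
blue_keywords = ["#muslimban", "#nobannowall"]
print(build_noisy_training_set(corpus, blue_keywords))
```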
Model Settings: We use the publicly available versions of BERT (bert-base-uncased3), ELMo (original4), and GloVe (Common Crawl 840B 300d5) with the recommended parameter settings. The loss weight α for the adversarial training gradually increases from 0 to 1 as the number of training batches increases. When training our bias mitigating approach, we keep most hyper-parameters the same as in the corresponding original model, e.g., learning rate, number of epochs, batch size, maximum sequence length, etc. For ELMo+ADV we tune the dropout parameter and report the variant without RNN input dropout in Section 4.3.
Table 3: Retrieval accuracy (P@3000 for immigration, P@1000 for gun control) of different topic detection models trained on Blue-leaning or Red-leaning training sets. Columns "All", "Blue", and "Red" respectively show the accuracy for all retrieved tweets, retrieved tweets posted by Blue users, and retrieved tweets posted by Red users. Within each column of "All", the best model is bolded; if a model's performance is over 10% worse than the best one, it is marked in red.

Evaluation Metrics: While our data contains the ground truth for whether an instance belongs to Blue or Red, we do not know a priori whether it is relevant to a given topic. To determine model performance on the dimension of relevance, we use precision at N (P@N), which computes the precision score of the instances with the top N prediction scores. Relevance is judged by crowdsourcing workers via Amazon Mechanical Turk. To evaluate retrieval quality, we need to choose a reasonable N for P@N: if N is too large or too small, then all models would have a very low or very high precision score, making the comparison between them insignificant. We find that the number of relevant tweets extracted by K_total from the test set is a good candidate for N. It is not too large, because we expect there to be at least this many relevant tweets in the test set; and it is not too small, since the keywords K_red or K_blue used for generating training sets are subsets of K_total. As Table 2 shows, there are 3007 tweets and 1049 tweets which contain keywords from K_total for immigration and gun control, respectively. Therefore, we use P@3000 and P@1000 (rounding 3007 and 1049 to the nearest hundred) as the evaluation metrics for immigration and gun control, respectively. To reduce annotation cost, we randomly select 100 samples from the top 3000 or 1000 for human annotation, which is representative of the models' performance. To determine the level of bias in a model's predictions, we compute the Blue-versus-Red Log Odds Ratio, which measures how likely a retrieved instance (one of the top N) is from Blue rather than Red. The first odds is the ratio of Blue to Red instances in the top N; the second odds is the same ratio among the non-retrieved instances.
Results of Comparing Off-the-shelf NLP Models
The retrieval quality of different topic detection models trained on biased training sets is shown in Table 3. Models built with GloVe, ELMo, and BERT are trained on Blue-leaning and Red-leaning training sets. In addition to the three off-the-shelf models, we also include a naive keyword approach as a baseline, which only retrieves tweets containing training keywords. We first examine the columns "All", which show the accuracy for all retrieved tweets. For both topics, it is not surprising that trained NLP models generally outperform the keyword-extraction baseline. This means that the models are able to learn some patterns besides the keywords and generalize to tweets which do not contain any keyword. To ease comparison between models, the best model within each column is bolded; if a model's performance is over 10% worse than the best one, it is marked in red. Our experimental results show that the ELMo-based model has the best overall retrieval quality, while the BERT-based model is the most negatively affected by the training bias. Next, we compare the models' accuracy for retrieved tweets posted by Blue users and by Red users, looking at the columns "Blue" and "Red". In general, models have better retrieval accuracy for tweets from the group towards which the training set is biased (as the second, third, and fourth column groups show), except for the first column group: when models are trained on the Blue-leaning set for the topic immigration, they have better retrieval accuracy for Red tweets than for Blue. We also report the retrieval accuracy for models trained on the full training set (constructed from K_total) in Table 4. The performances of the different models are close to each other.
Next, we evaluate to what extent the political bias is propagated to the retrieved (predicted top N) tweets by different NLP models. The Blue-versus-Red Log Odds Ratios of different topic detection models trained on Blue-leaning or Red-leaning training sets are shown in Table 5. We use the bias in each training set as the baseline. The closer the Log Odds Ratio is to 0, the less the political bias is propagated. Positive means leaning to Blue and negative means leaning to Red. Experimental results show that for both topics, all the NLP models are able to mitigate the political ideology bias from the training data. In particular, for models trained on the Blue-leaning set of the topic gun control, where the initial training set is highly biased towards the Blue group with a Log Odds Ratio of 2.08, the NLP models propagate up to 61% less bias. To ease comparison between models, the best model within each column is bolded. We find that the ELMo and GloVe-based models propagate the least bias, while the BERT-based model propagates the most of the bias seen in the training data. Taking both retrieval accuracy and retrieval bias into consideration, we conclude that the ELMo-based model is the most robust to training bias, while the BERT-based model is the most negatively affected by it. Our findings inform practitioners to choose the more robust model when training data is biased. Moreover, it is important to develop new approaches for mitigating the impact of bias, especially for BERT-based models.
Results of the Bias Mitigating Approach
We compare our bias mitigating approach with the original models on both the retrieval accuracy and retrieval bias metrics. We report the performance of our bias mitigating approach both in a more realistic training scenario, where models are trained on the full training data, and in extremely biased cases, where models are trained on our generated strongly Blue-leaning or Red-leaning training sets. The full training dataset is slightly biased towards Blue, with a Blue-versus-Red Log Odds Ratio of 0.51 for the topic of immigration and 0.61 for the topic of gun control. Table 6 shows that our proposed BERT-based model (denoted BERT+ADV) improves the retrieval accuracy compared with the original one; for ELMo+ADV and GloVe+ADV the retrieval accuracy is slightly reduced in most cases. Table 7 shows that our proposed models are very effective at mitigating the bias propagation. In sum, the experimental results demonstrate that our proposed approach succeeds in mitigating bias propagation with no or little drop in retrieval accuracy. It works especially well for BERT-based models, where it mitigates the bias while at the same time increasing the retrieval accuracy.
Conclusion
We have studied the impact of political ideology biases in different types of topic detection models and demonstrated a domain adaptation approach as an effective way of mitigating the bias. Our experimental results suggest that an ELMo-based model is more robust to training bias, while a BERT-based model is more negatively affected by the training bias. Since the ELMo-based model has nearly 40 times fewer trainable parameters than BERT, we conjecture that big complex models are more likely to propagate the bias seen in the training set. Although we have found the proposed adaptation architecture to be helpful for the three models, especially BERT, in terms of mitigating some of the training bias, the approach still relies on some knowledge of the existence of the bias. This work offers a comparison point for future studies to evaluate the effect of bias in various predictive models and opens the door for further reducing the bias in topic detection applications.
Appendix B Political ideology bias of each keyword
The Z-test scores for each keyword are shown in Table 8 for immigration and gun control.
Figure 1: Model architecture of our proposed approach. The encoder could be BERT, ELMo+BiLSTM, or GloVe+BiLSTM. The top feed-forward NN (in orange) is the class label predictor. The bottom feed-forward NN (in yellow) is the political group predictor.
Table 2: Number of relevant tweets by the keyword approach in the training and test set.
Table 4: Retrieval accuracy for models trained on the full training set (constructed from K_total).

Table 5: Blue-versus-Red Log Odds Ratio of different topic detection models trained on Blue-leaning or Red-leaning training sets. Within each column, the best model is bolded. The closer to 0, the better (less bias). Positive means leaning to Blue; negative means leaning to Red.

                Immigration                              Gun control
                Blue-leaning Train   Red-leaning Train   Blue-leaning Train   Red-leaning Train
Training set    0.81                 -0.16               2.08                 -0.29
GloVe           0.69                  0.10               0.82                 -0.08
ELMo            0.60                  0.14               0.84                 -0.02
BERT            0.62                  0.14               0.94                  0.05
Table 6: Retrieval accuracy of original and our proposed models trained on full, Blue-leaning or Red-leaning training sets. Our proposed model is bolded if it outperforms its base model.

Table 7: Blue-versus-Red Log Odds Ratio of original and our proposed models trained on full, Blue-leaning or Red-leaning training sets. Our proposed model is bolded if it outperforms its base model. The closer to 0, the better (less bias). Positive means leaning to Blue; negative means leaning to Red.

            Immigration                         Gun control
            Full    Blue-leaning  Red-leaning   Full    Blue-leaning  Red-leaning
GloVe       0.43    0.69           0.10         0.58    0.82          -0.08
GloVe+ADV   0.43    0.54           0.07         0.14    0.77           0.01
ELMo        0.45    0.60           0.14         0.31    0.84          -0.02
ELMo+ADV    0.34    0.52          -0.12         0.10    0.35           0.01
BERT        0.40    0.62           0.14         0.62    0.94           0.05
BERT+ADV    0.47    0.57           0.12         0.23    0.70           0.00
Table 8: Keywords and their z-scores for immigration and gun control. Keywords at the top are Blue-leaning; those in the middle are neutral; those at the bottom are Red-leaning.
1 https://github.com/MeiqiGuo/COLING2020-BiasStudy
2 This website has organized and collected major arguments and research relevant to controversial issues in the US, in which the information is arranged into pros and cons that reflect the opposite stances and ideas around the issues.
3 https://storage.googleapis.com/bert_models
4 https://allennlp.org/elmo
5 https://nlp.stanford.edu/projects/glove/
Acknowledgements

The authors would like to acknowledge the support from the DARPA UGB and AFOSR awards. Any opinions, findings, and conclusions or recommendations expressed in this material do not necessarily reflect the views of the funding sources.

Appendix A Chi-square Test for evaluating keyword bias

Assume T is the corpus of tweets, K is the keyword set, and the bias dimension is towards either group G_1 or G_2. For all x in K, the null hypothesis H_0 is that the probability that keyword x appears in relevant tweets posted by people from group G_1 and from group G_2 is the same. For our problem, the keyword x is unbiased if it passes the Chi-square test at a specific confidence level; otherwise, x is biased. The bias level is measured by the Z-statistic, the standard two-proportion test statistic

$$z_x = \frac{n_{1x}/n_1 - n_{2x}/n_2}{\sqrt{\hat{p}(1-\hat{p})\left(1/n_1 + 1/n_2\right)}}, \qquad \hat{p} = \frac{n_{1x} + n_{2x}}{n_1 + n_2},$$

where n_i is the number of relevant tweets posted by group G_i, and n_ix is the number of tweets containing keyword x and posted by group G_i, for i in {1, 2}. Here a tweet is considered relevant if it contains at least one keyword from K. A cut-off score z_cut is computed from the confidence level based on the Z-test distribution. If |z_x| < z_cut, then the keyword x is considered unbiased; if z_x > z_cut, then the keyword x is considered biased towards group G_1; if z_x < -z_cut, then the keyword x is considered biased towards group G_2. In this way, keywords in the set K are divided into three groups: G_1-leaning, G_2-leaning, and neutral. Additionally, the larger |z_x| is, the more biased the keyword x.

Appendix C Annotation Guidelines

C.1 Instructions for annotating the topic of immigration

A tweet is considered relevant if it talks about anything that has to do with, but not limited to, the following issue categories: borders, birthright citizenship, immigrant crime, DACA and the DREAM Act, the deportation debate, economic impact, immigration quotas, immigrants' rights and access to services, the labor market (American workers and employers), law enforcement, the Muslim Ban/Travel Ban, the Obama Iraq ban, refugees, etc.
Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira. 2007. Analysis of representations for domain adaptation. In Advances in Neural Information Processing Systems, pages 137-144.
Emily M. Bender and Batya Friedman. 2018. Data statements for natural language processing: Toward mitigating system bias and enabling better science. Transactions of the Association for Computational Linguistics, 6:587-604.
Su Lin Blodgett and Brendan O'Connor. 2017. Racial disparity in natural language processing: A case study of social media African-American English. arXiv preprint arXiv:1707.00061.
Peter Cihon and Taha Yasseri. 2016. A biased review of biases in Twitter studies on political collective action. Frontiers in Physics, 4:34.
Morteza Dehghani, Jonathan Gratch, Sonya Sachdeva, and Kenji Sagae. 2011. Analyzing conservative and liberal blogs related to the construction of the 'ground zero mosque'. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Yaroslav Ganin, Evgeniya Ustinova, Hana Ajakan, Pascal Germain, Hugo Larochelle, François Laviolette, Mario Marchand, and Victor Lempitsky. 2016. Domain-adversarial training of neural networks. The Journal of Machine Learning Research, 17(1):2096-2030.
Andrew Gaut, Tony Sun, Shirlyn Tang, Yuxin Huang, Jing Qian, Mai ElSherief, Jieyu Zhao, Diba Mirza, Elizabeth Belding, Kai-Wei Chang, and William Yang Wang. 2020. Towards understanding gender bias in relation extraction. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2943-2953, Online, July. Association for Computational Linguistics.
Siddhant M. Jayakumar, Wojciech M. Czarnecki, Jacob Menick, Jonathan Schwarz, Jack Rae, Simon Osindero, Yee Whye Teh, Tim Harley, and Razvan Pascanu. 2019. Multiplicative interactions and where to find them. In International Conference on Learning Representations.
Gary King, Patrick Lam, and Margaret E. Roberts. 2017. Computer-assisted keyword and document set discovery from unstructured text. American Journal of Political Science, 61(4):971-988.
Michal Kosinski, Sandra C. Matz, Samuel D. Gosling, Vesselin Popov, and David Stillwell. 2015. Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines. American Psychologist, 70(6):543.
Sachin Kumar, Shuly Wintner, Noah A. Smith, and Yulia Tsvetkov. 2019. Topics to avoid: Demoting latent confounds in text classification. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4144-4154.
George Lakoff. 1995. Metaphor, morality, and politics, or, why conservatives have left liberals in the dust. Social Research, pages 177-213.
Yitong Li, Timothy Baldwin, and Trevor Cohn. 2018. Towards robust and privacy-preserving text representations. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 25-30.
Momin M. Malik, Hemank Lamba, Constantine Nakos, and Jürgen Pfeffer. 2015. Population bias in geotagged tweets. In Ninth International AAAI Conference on Web and Social Media.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
Reid Pryzant, Kelly Shen, Dan Jurafsky, and Stefan Wagner. 2018. Deconfounded lexicon induction for interpretable social science. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1615-1625.
Martijn Schoonvelde, Anna Brosius, Gijs Schumacher, and Bert N. Bakker. 2019. Liberals lecture, conservatives communicate: Analyzing complexity and ideology in 381,609 political speeches. PloS One, 14(2):e0208450.
Gabriel Stanovsky, Noah A. Smith, and Luke Zettlemoyer. 2019. Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1679-1684, Florence, Italy, July. Association for Computational Linguistics.
Sean P. Wojcik, Arpine Hovasapian, Jesse Graham, Matt Motyl, and Peter H. Ditto. 2015. Conservatives report, but liberals display, greater happiness. Science, 347(6227):1243-1246.
Qizhe Xie, Zihang Dai, Yulun Du, Eduard Hovy, and Graham Neubig. 2017. Controllable invariance through adversarial feature learning. In Advances in Neural Information Processing Systems, pages 585-596.
Muheng Yang, Xidao Wen, Yu-Ru Lin, and Lingjia Deng. 2017. Quantifying content polarization on Twitter. In 2017 IEEE 3rd International Conference on Collaboration and Internet Computing (CIC), pages 299-308. IEEE.
Zi Yin and Yuanyuan Shen. 2018. On the dimensionality of word embedding. In Advances in Neural Information Processing Systems, pages 887-898.
An example for option 1 - Relevant: "Wonder what advice she got regarding her open border plan and especially her willingness to increase Syrian immigration?" This tweet talks about the open border plan and Syrian immigration, which is related to the topic of immigration under the categories of Borders and Refugees.
An example for option 2 - Not Relevant: "'Will I die, miss?' Terrified Syrian boy suffers suspected gas attack." This tweet talks about a Syrian boy suffering a gas attack, which may be pointing to a war or terrorist event in Syria, not necessarily directly about an immigration issue. Instruction for some cases that may be more ambiguous: A tweet should be considered relevant if it: 1) mentions several topics in addition to immigration: "I'm a woman that supports Trump to fix economy, immigration, school, military more. #MAGA3X";
2) is short with relevant hashtags: "Against! #muslimban"; 3) talks about immigration in other countries: "The #EU referendum has become a sinister attack on immigrants #Brexit #Xenophobia.";
A tweet should be considered irrelevant if it mentions a group of immigrant people such as Muslims or Syrian refugees but does not explicitly talk about immigration issues: "Wonderful news, I will suffer being video taped shopping with my wife, while Muslim terrorists construct bombs", or "Syrian girl, 7, who tweeted from Aleppo meets Turkey's Erdogan by #Reuters."
C.2 Instructions for annotating the topic of gun control
A tweet is considered relevant if it talks about anything that has to do with, but not limited to, the following issue categories: the Second Amendment, gun control laws, etc. Tweets which contain the following hashtags are probably relevant to gun control: #NoBillNoBreak, #WearOrange, #EndGunViolence, #DisarmHate, #molonlabe, etc.
Some examples for option 1 - Relevant: 1) "Standing up for the second amendment and carrying a firearm for self defense." This tweet talks about the Second Amendment, which is related to the topic of gun control; 2) "I don't understand why we can't ban assault weapons. We all know they are only used for hunting people. #PrayForOrlando #guncontrolplease." This tweet talks about banning weapons and contains the hashtag "#guncontrolplease", which is relevant to the topic of gun control; 3) "Stay Strong, Represent the American people #NoBillNoBreak #DisarmHate." This tweet contains the hashtags "#NoBillNoBreak" and "#DisarmHate", which are both relevant to gun control issues.
Some examples for option 2 - Not Relevant: 1) "Apple replaced the gun emoji with a water gun in iOS 10." This tweet talks about a gun emoji, which is not related to a gun control issue; 2) "GUN GAME in MWR! WINTER CRASH EDITION!" This tweet talks about gun games, which is not related to a gun control issue; 3) "The Fourth Amendment protects you from unreasonable searches and seizures." This tweet talks about the Fourth Amendment, which is irrelevant to gun control issues.
Instruction for some cases that may be more ambiguous: A tweet should be considered relevant if it: 1) mentions several topics in addition to gun control: "I'm a woman that supports Trump to fix economy, immigration, school, gun control more."; 2) is short with relevant hashtags: "This is good. #NoBillNoBreak.";
A tweet should be considered irrelevant if it mentions a gun death event or gun violence news, but the context is not about gun control: "Love will always conquer hate. #PrayForOrlando #OrlandoShooting"; "Same ppl who just yesterday were praying against #LGBTQ ppl are now praying for the #LGBT victims. #Hypocrisy #PrayForOrlando #p2 #Orlando"; "He unknowingly followed us to LA. After he raped a kid killed a dude in OR the LA 5-0 caught him jacking a car. Gun shots were exchanged.", or "Turkey car bomb and gun attack on courthouse in Izmir".
| [
"https://github.com/MeiqiGuo/COLING2020-BiasStudy"
] |
[
"Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications",
"Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications"
] | [
"Haw-Shiuan Chang hschang@cs.umass.edu \nCICS\nUniversity of Massachusetts Amherst\n\n",
"Amol Agrawal amolagrawal@cs.umass.edu \nCICS\nUniversity of Massachusetts Amherst\n\n",
"Andrew Mccallum mccallum@cs.umass.edu \nCICS\nUniversity of Massachusetts Amherst\n\n"
] | [
"CICS\nUniversity of Massachusetts Amherst\n",
"CICS\nUniversity of Massachusetts Amherst\n",
"CICS\nUniversity of Massachusetts Amherst\n"
] | [] | Most unsupervised NLP models represent each word with a single point or single region in semantic space, while the existing multi-sense word embeddings cannot represent longer word sequences like phrases or sentences. We propose a novel embedding method for a text sequence (a phrase or a sentence) where each sequence is represented by a distinct set of multi-mode codebook embeddings to capture different semantic facets of its meaning. The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence during test time. Our experiments show that the per-sentence codebook embeddings significantly improve the performances in unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we discover that the multi-facet embeddings provide an interpretable semantic representation but do not outperform the single-facet baseline. | null | [
"https://arxiv.org/pdf/2103.15330v2.pdf"
] | 232,404,484 | 2103.15330 | a6f3505451d9ec1099871804cb342e93d1fd6cb3 |
Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications
Haw-Shiuan Chang hschang@cs.umass.edu
CICS
University of Massachusetts Amherst
Amol Agrawal amolagrawal@cs.umass.edu
CICS
University of Massachusetts Amherst
Andrew Mccallum mccallum@cs.umass.edu
CICS
University of Massachusetts Amherst
Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications
Most unsupervised NLP models represent each word with a single point or single region in semantic space, while the existing multi-sense word embeddings cannot represent longer word sequences like phrases or sentences. We propose a novel embedding method for a text sequence (a phrase or a sentence) where each sequence is represented by a distinct set of multi-mode codebook embeddings to capture different semantic facets of its meaning. The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space. We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence during test time. Our experiments show that the per-sentence codebook embeddings significantly improve the performances in unsupervised sentence similarity and extractive summarization benchmarks. In phrase similarity experiments, we discover that the multi-facet embeddings provide an interpretable semantic representation but do not outperform the single-facet baseline.
Introduction
Collecting manually labeled data is an expensive and tedious process for new or low-resource NLP applications. Many of these applications require the text similarity measurement based on the text representation learned from the raw text without any supervision. Examples of the representation include word embedding like Word2Vec (Mikolov et al. 2013) or GloVe (Pennington, Socher, and Manning 2014), sentence embeddings like skip-thoughts (Kiros et al. 2015), contextualized word embedding like ELMo (Peters et al. 2018) and BERT (Devlin et al. 2019) without fine-tuning.
The existing work often represents a word sequence (e.g., a sentence or a phrase) as a single embedding. However, when squeezing all the information into a single embedding (e.g., by averaging the word embeddings or using the CLS embedding in BERT), the representation might lose important information about the different facets of the sequence.
Inspired by the multi-sense word embeddings (Lau et al. 2012; Neelakantan et al. 2014; Athiwaratkun and Wilson 2017; Singh et al. 2020), we propose a multi-facet representation that characterizes a phrase or a sentence as a fixed number of embeddings, where each embedding is a clustering center of the words co-occurring with the input word sequence.

Figure 1: The input phrase real property is represented by K = 5 cluster centers. The previous work discovers the multiple senses by clustering the embeddings of observed co-occurring words. Instead, our compositional model learns to predict the embeddings of cluster centers from the sequence of words in the input phrase so as to reconstruct the (unseen) co-occurring distribution well.
In this work, a facet refers to a mode of the co-occurring word distribution, which might be multimodal. For example, the multi-facet representation of real property is illustrated in Figure 1. Real property can be observed in legal documents where it usually means real estate, while real property can also mean a true characteristic in philosophic discussions. The previous unsupervised multi-sense embeddings discover those senses by clustering the observed neighboring words (e.g., acquired, save, and tax) and an important facet, a mode with high probability, could be represented by several close cluster centers. Notice that the approaches need to solve a distinct local clustering problem for each phrase in contrast with the topic modeling like LDA (Blei, Ng, and Jordan 2003), which clusters all the words in the corpus into a global set of topics.
In addition to a phrase, we can also cluster the nearby words of a sentence that appears frequently in the corpus. The cluster centers usually correspond to important aspects rather than senses (see an example in Figure 2) because a sentence usually has multiple aspects but only one sense. However, extending the clustering-based multi-sense word embeddings to long sequences such as sentences is difficult in practice due to two efficiency challenges. First, there are usually many more unique phrases and sentences in a corpus than there are words, while the number of parameters for clustering-based approaches is $O(|V| \times |K| \times |E|)$, where $|V|$ is the number of unique sequences, $|K|$ is the number of clusters, and $|E|$ is the embedding dimension. Estimating and storing such a large number of parameters takes time and space. More importantly, many more unique sequences imply far fewer co-occurring words to be clustered for each sequence, especially for sentences. An effective model needs to overcome this sample efficiency challenge (i.e., sparseness in the co-occurring statistics), but clustering approaches often have too many parameters to learn the compositional meaning of each sequence without overfitting.
Nevertheless, the sentences (or phrases) sharing multiple words often lead to similar cluster centers, so we should be able to solve these local clustering problems using much fewer parameters to circumvent the challenges. To achieve the goal, we develop a novel Transformer-based neural encoder and decoder. As shown in Figure 1, instead of clustering co-occurring words beside an input sequence at test time as in previous approaches, we learn a mapping between the input sequence (i.e., phrases or sentences) and the corresponding cluster centers during training so that we can directly predict those cluster centers using a single forward pass of the neural network for an arbitrary unseen input sequence during testing.
To train the neural model that predicts the clustering centers, we match the sequence of predicted cluster centers and the observed set of co-occurring word embeddings using a non-negative and sparse permutation matrix. After the permutation matrix is estimated for each input sequence, the gradients are back-propagated to cluster centers (i.e., codebook embeddings) and to the weights of our neural model, which allows us to train the whole model end-to-end.
In the experiments, we evaluate whether the proposed multi-facet embeddings can improve the similarity measurement between two sentences, between a sentence and a document (i.e., extractive summarization), and between phrases. The results demonstrate that multi-facet embeddings significantly outperform the classic single-embedding baseline when the input sequence is a sentence.
We also demonstrate several advantages of the proposed multi-facet embeddings over the (contextualized) embeddings of all the words in a sequence. First, we discover that our model tends to use more embeddings to represent an important facet or important words. This tendency provides an unsupervised estimation of word importance, which improves various similarity measurements between a sentence pair. Second, our model outputs a fixed number of facets by compressing long sentences and extending short sentences. In unsupervised extractive summarization, this capability prevents the scoring function from biasing toward longer or shorter sentences. Finally, in the phrase similarity experiments, our method captures the compositional meaning (e.g., a hot dog is a food) of a word sequence well, and the quality of our similarity estimation is not sensitive to the choice of K, the number of our codebook embeddings.
Main Contributions
1. As shown in Figure 1, we propose a novel framework that predicts the cluster centers of co-occurring word embeddings to overcome the sparsity challenge in our self-supervised training signals. This allows us to extend the idea of clustering-based multi-sense embeddings to phrases or sentences.
2. We propose a deep architecture that can effectively encode a sequence and decode a set of embeddings. We also propose a non-negative sparse coding (NNSC) loss to train our neural encoder and decoder end-to-end.
3. We demonstrate how the multi-facet embeddings can be used in unsupervised ways to improve the similarity between sentences/phrases, infer word importance in a sentence, and extract important sentences from a document. In Appendix B.1, we show that our model can provide asymmetric similarity measurement for hypernym detection.
4. We conduct comprehensive experiments in the main paper and appendix to show that multi-facet embedding is consistently better than classic single-facet embedding for modeling the co-occurring word distribution of sentences, while multi-facet phrase embeddings do not yield a clear advantage over the single-embedding baseline, which supports the finding of Dubossarsky, Grossman, and Weinshall (2018).
Method
In this section, we first formalize our training setup and next describe our objective function and neural architecture. Our approach is visually summarized in Figure 2.
Self-supervision Signal
We express the $t$th sequence of words in the corpus as $I_t = w_{x_t} \ldots w_{y_t}\,\texttt{<eos>}$, where $x_t$ and $y_t$ are the start and end positions of the input sequence, respectively, and <eos> is the end-of-sequence symbol. We assume the neighboring words beside each input phrase or sentence are related to some facets of the sequence, so given $I_t$ as input, our training signal is to reconstruct a set of co-occurring words, $N_t = \{w_{x_t - d^1_t}, \ldots, w_{x_t-1}, w_{y_t+1}, \ldots, w_{y_t + d^2_t}\}$.
In our experiments, we train our multi-facet sentence embeddings by setting $N_t$ to the set of all words in the previous and the next sentence, and train our multi-facet phrase embeddings with a fixed window size $d^1_t = d^2_t = 5$. Since there are not many co-occurring words for a long sequence (and none are observed for unseen testing sequences), the goal of our model is to predict the cluster centers of the words that could "possibly" occur beside the text sequence, rather than the cluster centers of the words actually occurring in $N_t$ (i.e., the hidden co-occurring distribution instead of the green and underlined words in Figure 2). The cluster centers of an unseen testing sequence are predictable because the model can learn from similar sequences and their co-occurring words in the training corpus.

Figure 2: Our model for sentence representation. We represent each sentence as multiple codebook embeddings (i.e., cluster centers) predicted by our sequence-to-embeddings model. Our loss encourages the model to generate codebook embeddings whose linear combination can well reconstruct the embeddings of co-occurring words (e.g., music), while not being able to reconstruct the negatively sampled words (i.e., the co-occurring words from other sentences).
To focus on semantics rather than syntax, we view the co-occurring words as a set rather than a sequence as in skip-thoughts (Kiros et al. 2015). Notice that our model considers the word order information in the input sequence $I_t$ but ignores the order of the co-occurring words $N_t$.
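For concreteness, a minimal sketch of how these $(I_t, N_t)$ training pairs could be constructed follows; the function names, the toy stop-word list, and the span convention are illustrative assumptions rather than the released implementation.

```python
from typing import List, Set, Tuple

# A tiny stand-in; the real pipeline removes a full stop-word list.
STOP_WORDS = {"the", "a", "an", "of", "to", "is", "and"}

def sentence_pairs(sentences: List[List[str]]) -> List[Tuple[List[str], Set[str]]]:
    """Build (I_t, N_t) pairs for sentences: N_t is the set of all words in
    the previous and the next sentence, with word order discarded."""
    pairs = []
    for i, sent in enumerate(sentences):
        context: List[str] = []
        if i > 0:
            context += sentences[i - 1]
        if i + 1 < len(sentences):
            context += sentences[i + 1]
        n_t = {w for w in context if w.lower() not in STOP_WORDS}
        if n_t:
            pairs.append((sent + ["<eos>"], n_t))
    return pairs

def phrase_pairs(tokens: List[str], spans: List[Tuple[int, int]], d: int = 5):
    """Build (I_t, N_t) pairs for phrases: N_t is a fixed window of
    d^1_t = d^2_t = d words on each side of the span [x_t, y_t]."""
    pairs = []
    for x, y in spans:
        window = tokens[max(0, x - d):x] + tokens[y + 1:y + 1 + d]
        n_t = {w for w in window if w.lower() not in STOP_WORDS}
        if n_t:
            pairs.append((tokens[x:y + 1] + ["<eos>"], n_t))
    return pairs
```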
Non-negative Sparse Coding Loss
In a pre-trained word embedding space, we predict the cluster centers of the co-occurring word embeddings. The embeddings of the co-occurring words $N_t$ are arranged into a matrix $W(N_t) = [w^t_j]_{j=1 \ldots |N_t|}$ of size $|E| \times |N_t|$, where $|E|$ is the dimension of the pre-trained word embedding and each column $w^t_j$ is a normalized word embedding whose 2-norm is 1. The normalization makes the cosine distance between two words half of their squared Euclidean distance.
Similarly, we denote the predicted cluster centers $c^t_k$ of the input sequence $I_t$ as an $|E| \times K$ matrix $F(I_t) = [c^t_k]_{k=1 \ldots K}$, where $F$ is our neural network model and $K$ is the number of clusters. We fix the number of clusters $K$ to simplify the design of our prediction model and of the unsupervised scoring functions used in the downstream tasks. When the number of modes in the (multimodal) co-occurring distribution is smaller than $K$, the model can output multiple cluster centers to represent one mode (e.g., the music facet in Figure 2 is represented by two close cluster centers). As a result, the performances in our downstream applications are not sensitive to the setting of $K$ when $K$ is larger than the number of facets in most input word sequences.
The reconstruction loss of k-means clustering in the word embedding space can be written as $||F(I_t)M - W(N_t)||^2 = \sum_j ||(\sum_k M_{k,j}\, c^t_k) - w^t_j||^2$, where $M_{k,j} = 1$ if the $j$th word belongs to the $k$th cluster and $0$ otherwise. That is, $M$ is a permutation matrix which matches the cluster centers and the co-occurring words and allows the cluster centers to be predicted in an arbitrary order.

Non-negative sparse coding (NNSC) (Hoyer 2002) relaxes the constraints by allowing the coefficient $M_{k,j}$ to be a positive value while encouraging it to be 0. We adopt NNSC in this work because we observe that the neural network trained with the NNSC loss generates more diverse topics than one trained with the k-means loss. We hypothesize that this is because the loss is smoother and easier to optimize for a neural network. Using NNSC, we define our reconstruction error as

$$Er(F(I_t), W(N_t)) = ||F(I_t)\, M_{O_t} - W(N_t)||^2, \quad M_{O_t} = \operatorname*{arg\,min}_{M,\ 0 \le M_{k,j} \le 1} ||F(I_t)\, M - W(N_t)||^2 + \lambda \sum_{k,j} M_{k,j}, \quad (1)$$

where $\lambda$ is a hyper-parameter controlling the sparsity of $M$. We force the coefficient values $M_{k,j} \le 1$ to prevent the neural network from learning to predict centers with small magnitudes, which would make the optimal values of $M_{k,j}$ large and unstable.
We adopt an alternating optimization strategy similar to the EM algorithm for k-means. At each iteration, our E-step estimates the permutation coefficients $M_{O_t}$ after fixing our neural model, while our M-step treats $M_{O_t}$ as constants and back-propagates the gradients of the NNSC loss to our neural network. Pseudo-code of our training procedure can be found in Algorithm 1 in the appendix. Estimating the permutation between the prediction and the ground truth words is often computationally expensive (Qin et al. 2019). Nevertheless, optimizing the proposed loss is efficient because, for each training sequence $I_t$, $M_{O_t}$ can be efficiently estimated using convex optimization (our implementation uses RMSprop (Tieleman and Hinton 2012)). Besides, we minimize the L2 distance $||F(I_t)\, M_{O_t} - W(N_t)||^2$ in a pre-trained embedding space, as in Kumar and Tsvetkov (2019); Li et al. (2019), rather than computing a softmax.
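As an illustration of the E-step, the following is a minimal PyTorch sketch; the iteration count and learning rate are assumptions, while RMSprop, the box constraint $0 \le M_{k,j} \le 1$, and $\lambda = 0.4$ (the value given in Appendix C.1) come from the text.

```python
import torch

def estimate_coefficients(C: torch.Tensor, W: torch.Tensor, lam: float = 0.4,
                          steps: int = 50, lr: float = 0.05) -> torch.Tensor:
    """E-step: with the model fixed, solve the convex NNSC problem
        min_M ||C M - W||^2 + lam * sum(M)   s.t.  0 <= M_{k,j} <= 1,
    where C = F(I_t) is |E| x K and W = W(N_t) is |E| x |N_t|."""
    C = C.detach()  # the model is treated as constant in the E-step
    M = torch.zeros(C.shape[1], W.shape[1], requires_grad=True)
    opt = torch.optim.RMSprop([M], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((C @ M - W) ** 2).sum() + lam * M.sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            M.clamp_(0.0, 1.0)  # project back onto the box constraint
    return M.detach()
```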
To prevent the neural network from predicting the same global topics regardless of the input, our loss function for the $t$th sequence is defined as

$$L_t(F) = Er(F(I_t), W(N_t)) - Er(F(I_t), W(N_{r_t})), \quad (2)$$

where $N_{r_t}$ is the set of co-occurring words of a randomly sampled sequence $I_{r_t}$. In our experiments, we use SGD to solve $F = \arg\min_F \sum_t L_t(F)$. Our method can be viewed as a generalization of Word2Vec (Mikolov et al. 2013) that can encode the compositional meaning of the words and decode multiple embeddings.
Sequence to Embeddings
Our neural network architecture is similar to a Transformer-based sequence-to-sequence (seq2seq) model (Vaswani et al. 2017). We use the same encoder $TE(I_t)$, which transforms the input sequence into contextualized embeddings

$$[e_{x_t} \ldots e_{y_t}\, e_{\texttt{<eos>}}] = TE(w_{x_t} \ldots w_{y_t}\, \texttt{<eos>}), \quad (3)$$

where the goal of the encoder is to map similar sentences, which are likely to have similar co-occurring word distributions, to similar contextualized embeddings. Unlike a typical seq2seq model (Sutskever, Vinyals, and Le 2014; Vaswani et al. 2017), our decoder does not need to make discrete decisions because our outputs are a sequence of embeddings instead of words. This allows us to predict all the codebook embeddings in a single forward pass as in Lee et al. (2019) without requiring an expensive softmax layer or auto-regressive decoding.

To make different codebook embeddings capture different facets, we pass the embedding of <eos>, $e_{\texttt{<eos>}}$, through different linear layers $L_k$ before it becomes the input of the decoder $TD$. The decoder allows the input embeddings to attend to each other to model the dependency among the facets, and to attend to the contextualized word embeddings from the encoder, $e_{x_t} \ldots e_{y_t}\, e_{\texttt{<eos>}}$, so that the embeddings of some keywords in the word sequence can more easily be copied into our facet embeddings. Specifically, the codebook embeddings are

$$F(I_t) = TD(L_1(e_{\texttt{<eos>}}) \ldots L_K(e_{\texttt{<eos>}}),\ e_{x_t} \ldots e_{y_t}\, e_{\texttt{<eos>}}). \quad (4)$$

We find that removing the attention on $e_{x_t} \ldots e_{y_t}\, e_{\texttt{<eos>}}$ significantly deteriorates our validation loss for sentence representation because there are often too many facets to compress into a single embedding. On the other hand, the encoder-decoder attention does not significantly change the performance of phrase representation, so we remove this connection (i.e., encoder and decoder have the same architecture) in models for phrase representation. Notice that the framework is flexible; for example, we could also encode the genre of the document containing the sentence if desired.
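A minimal PyTorch sketch of equations 3 and 4 is given below; the 300-dimensional hidden size and the 5-layer decoder follow Appendix C.1, while the encoder depth, the number of heads, and the omission of positional encodings are simplifying assumptions.

```python
import torch
import torch.nn as nn

class SeqToEmbeddings(nn.Module):
    """Sketch of equations 3-4: encode the input, seed K decoder queries
    from e_<eos> via K linear layers, and decode K facet embeddings."""
    def __init__(self, vocab_size: int, d: int = 300, K: int = 10,
                 enc_layers: int = 3, dec_layers: int = 5, heads: int = 6):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, heads, batch_first=True), enc_layers)
        self.seeds = nn.ModuleList([nn.Linear(d, d) for _ in range(K)])
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d, heads, batch_first=True), dec_layers)

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        """ids: (batch, len), with <eos> as the last token.
        Returns the (batch, K, d) codebook embeddings F(I_t)."""
        h = self.encoder(self.embed(ids))                # eq. 3: e_{x_t} ... e_<eos>
        queries = torch.stack([L(h[:, -1]) for L in self.seeds], dim=1)
        return self.decoder(queries, h)                  # eq. 4: attends to encoder states
```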
Experiments
Quantitatively evaluating the quality of our predicted cluster centers is difficult because the existing labeled data and metrics are built for global clustering. Previous multi-sense word embedding studies often show that multiple embeddings can improve on a single word embedding in unsupervised word similarity tasks to demonstrate their effectiveness. Thus, the goal of our experiments is to discover when and how multi-facet embeddings can improve the similarity measurement in various unsupervised semantic tasks over widely used general-purpose representations, such as a single embedding or (contextualized) word embeddings.
Experiment Setup
Our models only require the raw corpus and sentence/phrase boundaries, so we will only compare them with other unsupervised alternatives that do not require any manual labels or multi-lingual resources such as PPDB (Pavlick et al. 2015). To simplify the comparison, we also omit the comparison with the methods using character-level information such as fastText (Bojanowski et al. 2017) or bigram information such as Sent2Vec (Pagliardini, Gupta, and Jaggi 2018a).
It is hard to make a fair comparison with BERT (Devlin et al. 2019). Its masked language modeling loss is designed for downstream supervised tasks and preserves more syntactic information, which might be distracting in unsupervised semantic applications. Furthermore, BERT uses word piece tokenization while the other models use word tokenization. Nevertheless, we still present the performance of the BERT Base model as a reference, even though it is trained with more parameters, a larger embedding size, a larger corpus, and more computational resources than our models. Since we focus on the unsupervised setting, we directly use the final hidden states of the BERT models without supervised fine-tuning in most of the comparisons. One exception is that we also report the performance of sentence-BERT (Reimers and Gurevych 2019) in a low-resource setting.
Our model is trained on English Wikipedia 2016, and stop words are removed from the set of co-occurring words. In the phrase experiments, we only consider noun phrases, whose boundaries are extracted by applying simple regular expression rules to POS tags before training. We use the cased version (840B) of the GloVe embeddings (Pennington, Socher, and Manning 2014) as the pre-trained word embedding space for our sentence representation and the uncased version (42B) for phrase representation. To control the effect of the embedding size, we set the hidden state size in our transformers to the GloVe embedding size (300).

Table 1: Examples of the codebook embeddings predicted by our models with different K. The embedding in each row is visualized by the three words whose GloVe embeddings have the highest cosine similarities (also presented) with the codebook embedding.
Limited by computational resources, we train all the models using one GPU (e.g., NVIDIA 1080 Ti) within a week. Because of the relatively small model size, we find that our models underfit the data after a week (i.e., the training loss is very close to the validation loss).
Qualitative Evaluation
The cluster centers predicted by our model are visualized in Table 1 (e.g., as girl and lady visualize the red cluster center in Figure 2). Some randomly chosen examples are also visualized in Appendix D.

The centers summarize the input sequence well, and more codebook embeddings capture more fine-grained semantic facets of a phrase or a sentence. Furthermore, the embeddings capture the compositional meaning of words. For example, no single word in the phrase civil order means initiatives, army, or court, which are facets of the whole phrase. When the input is a sentence, we can see that the output embeddings are sometimes close to the embeddings of words that do not appear in the input sequence.
Unsupervised Sentence Similarity
We propose two ways to evaluate the multi-facet embeddings using sentence similarity tasks.
First way: Since similar sentences should have similar word distributions in nearby sentences and thus similar codebook embeddings, the codebook embeddings of a query sentence $\hat{F}_u(S^1_q)$ should be able to reconstruct the codebook embeddings of a similar sentence $\hat{F}_u(S^2_q)$ well. We compute the reconstruction errors in both directions and add them as a symmetric distance SC:

$$SC(S^1_q, S^2_q) = Er(\hat{F}_u(S^1_q), \hat{F}_u(S^2_q)) + Er(\hat{F}_u(S^2_q), \hat{F}_u(S^1_q)), \quad (5)$$

where $\hat{F}_u(S_q) = [\frac{c^q_k}{||c^q_k||}]_{k=1 \ldots K}$ is the matrix of normalized codebook embeddings and the $Er$ function is defined in equation 1. We use the negative distance to represent similarity.
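For illustration, the SC distance could be computed as in the following NumPy sketch, where the projected-gradient solver is a plain stand-in for the RMSprop-based convex solver described in the Method section and its step size and iteration count are arbitrary.

```python
import numpy as np

def er(C: np.ndarray, W: np.ndarray, lam: float = 0.4,
       steps: int = 500, lr: float = 0.01) -> float:
    """Approximate Er(C, W) of equation 1 with projected gradient descent:
    min_M ||C M - W||^2 + lam * sum(M)  subject to  0 <= M <= 1."""
    M = np.zeros((C.shape[1], W.shape[1]))
    for _ in range(steps):
        grad = 2.0 * C.T @ (C @ M - W) + lam
        M = np.clip(M - lr * grad, 0.0, 1.0)
    return float(((C @ M - W) ** 2).sum())

def sc_distance(F1: np.ndarray, F2: np.ndarray) -> float:
    """Equation 5: symmetric reconstruction distance between two |E| x K
    matrices of codebook embeddings (columns are normalized first)."""
    F1 = F1 / np.linalg.norm(F1, axis=0, keepdims=True)
    F2 = F2 / np.linalg.norm(F2, axis=0, keepdims=True)
    return er(F1, F2) + er(F2, F1)
```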
Second way: One of the main challenges in unsupervised sentence similarity tasks is that we do not know which words are more important in each sentence. Intuitively, if one word in a query sentence is more important, the chance of observing related/similar words in the nearby sentences should be higher. Thus, we should pay more attention to the words in a sentence that have a higher cosine similarity with its multi-facet embeddings, a summary of the co-occurring word distribution. Specifically, our importance/attention weighting for all the words in the query sentence $S_q$ is defined by

$$a_q = \max(0,\ W(S_q)^T \hat{F}_u(S_q))\, \mathbf{1}, \quad (6)$$

where $\mathbf{1}$ is an all-one vector.
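A minimal NumPy sketch of equation 6 follows; the argument `W_s` is assumed to hold the embeddings of the words of $S_q$ as columns.

```python
import numpy as np

def word_attention(W_s: np.ndarray, F_u: np.ndarray) -> np.ndarray:
    """Equation 6: each word's weight is its summed rectified similarity
    to the K normalized codebook embeddings of its own sentence."""
    W_s = W_s / np.linalg.norm(W_s, axis=0, keepdims=True)
    F_u = F_u / np.linalg.norm(F_u, axis=0, keepdims=True)
    return np.maximum(0.0, W_s.T @ F_u).sum(axis=1)
```

In the experiments below, this vector is multiplied elementwise with other per-word weights such as $\frac{\alpha}{\alpha + p(w)}$.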
We show that the attention vector (denoted as Our a in Table 2) can be combined with various scoring functions and boosts their performance. As a baseline, we also report the performance of the attention weights derived from the k-means loss rather than the NNSC loss, and call it Our a (k-means).

Setup: The STS benchmark (Cer et al. 2017) is a widely used sentence similarity task. We compare the correlations between the predicted semantic similarity and the manually labeled similarity. We report the Pearson correlation coefficient, which is strongly correlated with the Spearman correlation in all our experiments. Intuitively, when two sentences are less similar to each other, humans tend to judge the similarity based on how similar their facets are. Thus, we also compare the performances on the lower half of each dataset, where the ground truth similarities are less than the median similarity in the dataset; we call this benchmark STSB Low.
A simple but effective way to measure sentence similarity is to compute the cosine similarity between the average (contextualized) word embeddings (Milajevs et al. 2014).

Table 2: Pearson correlation (%) on the development and test sets of the STS benchmark. The performances on all sentence pairs are indicated as All. Low means the performance on the half of sentence pairs with lower similarity (i.e., STSB Low). Our c means our codebook embeddings and Our a means our attention vectors. * indicates a supervised method. † indicates methods which use the training distribution to approximate the testing distribution. The best scores with and without † are highlighted.
In order to deemphasize the syntactic parts of the sentences, Arora, Liang, and Ma (2017) propose to weight each word $w$ in a sentence by $\frac{\alpha}{\alpha + p(w)}$, where $\alpha$ is a constant and $p(w)$ is the probability of seeing the word $w$ in the corpus. Following their recommendation, we set $\alpha$ to $10^{-4}$ in this paper. After the weighting, we remove the first principal component of all the sentence embeddings in the training data, as suggested by Arora, Liang, and Ma (2017), and denote the method as SIF. The post-processing requires an estimation of the testing embedding distribution, which is not desirable in some applications, so we also report the performance before removing the principal component, which we call Prob_avg.
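A minimal sketch of the Prob_avg/SIF baseline follows; the uncentered principal-component removal reflects our reading of Arora, Liang, and Ma (2017), and `word_vec` and `p` are assumed lookup tables for embeddings and word probabilities.

```python
import numpy as np

def sif_embeddings(sentences, word_vec, p, alpha=1e-4, remove_pc=True):
    """Weight each word by alpha / (alpha + p(w)) and average (Prob_avg);
    optionally remove the first principal component of all embeddings (SIF).
    Assumes every sentence contains at least one in-vocabulary word."""
    X = np.array([
        np.mean([word_vec[w] * alpha / (alpha + p[w]) for w in sent if w in word_vec],
                axis=0)
        for sent in sentences])
    if remove_pc:
        _, _, vt = np.linalg.svd(X, full_matrices=False)  # vt[0]: first principal direction
        X = X - np.outer(X @ vt[0], vt[0])
    return X
```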
We also test the word mover's distance (WMD) (Kusner et al. 2015), which explicitly matches every word in a pair of sentences. As we do in Prob_avg, we apply the $\frac{\alpha}{\alpha + p(w)}$ weighting to WMD to down-weight the importance of functional words, and call this scoring function Prob_WMD. When using Our a, we multiply our attention vector with the weight of every word (e.g., $\frac{\alpha}{\alpha + p(w)}$ for Prob_avg and Prob_WMD). To motivate the unsupervised setting, we present the performance of sentence-BERT (Reimers and Gurevych 2019) trained on only 100 sentence pairs. We randomly sample the sentence pairs from a data source that is not included in STSB (e.g., headlines in STS 2014) and report the testing performance averaged across all the sources from STS 2012 to 2016. More details are included in Appendix B.2.
Results: In Figure 3, we first visualize our attention weights from equation 6 and our output codebook embeddings for a pair of similar sentences from STSB, to intuitively explain why modeling the co-occurring distribution can improve the similarity measurement.
Many similar sentences use different word choices or extra words to describe details, but their likely nearby words are often similar. For example, appending "in the garage" to "A man is lifting weights" does not significantly change the facets of the sentence, and thus the word garage receives a relatively low attention weight. This makes the similarity measurements of our methods, Our c and Our a, closer to the human judgment than those of the other baselines.
In Table 2, Our c SC, which matches between two sets of facets, outperforms WMD, which matches between two sets of words in the sentence, and also outperforms BERT Avg, especially in STSB Low. The significantly worse performances from Skip-thought Cosine justify our choice of ignoring the order in the co-occurring words.
All the scores of Our * K10 are significantly better than those of Our * K1, which demonstrates that the co-occurring word distribution is hard to model well using a single embedding. Multiplying in the proposed attention weighting consistently boosts the performance of all the scoring functions, especially in STSB Low, without relying on the generalization assumption about the training distribution. Finally, using the k-means loss, Our a (k-means) K10, significantly degrades the performance compared to Our a K10, which justifies the proposed NNSC loss. In Appendix B.2, our methods are compared with more baselines on more datasets to test the effectiveness of multi-facet embeddings and our design choices.
Unsupervised Extractive Summarization
The classic representation of a sentence uses either a single embedding or the (contextualized) embeddings of all the words in the sentence. In this section, we show that both options are not ideal for extracting a set of sentences as a document summary. Table 1 indicates that the multiple codebook embeddings of a sentence capture its different facets well, so we represent a document summary $S$ as the union of the multi-facet embeddings of the sentences in the summary, $R(S) = \cup_{t=1}^{T} \{\hat{F}_u(S_t)\}$, where $\{\hat{F}_u(S_t)\}$ is the set of column vectors in the matrix $\hat{F}_u(S_t)$ of sentence $S_t$.
A good summary should cover multiple facets that represent all topics/concepts in the document well (Kobayashi, Noguchi, and Yatsuka 2015). This objective can be quantified as discovering a summary $S$ whose multiple embeddings $R(S)$ best reconstruct the distribution of the normalized word embeddings $w$ in the document $D$ (Kobayashi, Noguchi, and Yatsuka 2015). That is,

$$\arg\max_S \sum_{w \in D} \frac{\alpha}{\alpha + p(w)} \max_{s \in R(S)} w^T s, \quad (7)$$

where $\frac{\alpha}{\alpha + p(w)}$ is the word weighting we used in the sentence similarity experiments (Arora, Liang, and Ma 2017). We greedily select sentences to optimize equation 7, as in Kobayashi, Noguchi, and Yatsuka (2015).
Setup: We compare our multi-facet embeddings with alternative ways of modeling the facets of sentences. A simple way is to compute the average word embedding as a single-facet sentence embedding; this baseline is labeled Sent Emb. Another way is to use the (contextualized) embeddings of all the words in the sentence as different facets of the sentence. Since longer sentences have more words, we normalize the gain of the reconstruction similarity by the sentence length; this method is denoted W Emb. We also test the baselines of selecting random sentences (Rnd) and the first 3 sentences (Lead-3) of the document.
The results on the testing set of CNN/Daily Mail (Hermann et al. 2015; See, Liu, and Manning 2017) are compared using the F1 of ROUGE (Lin and Hovy 2003) in Table 3, where R-1, R-2, and Len mean ROUGE-1, ROUGE-2, and average summary length, respectively. All methods choose 3 sentences, following the setting in Zheng and Lapata (2019). "Unsup, No Sent Order" marks the methods that do not use the sentence order information in CNN/Daily Mail.
In CNN/Daily Mail, the unsupervised methods which access sentence order information such as Lead-3 have performances similar to supervised methods such as RL (Celikyilmaz et al. 2018). To evaluate the quality of unsupervised sentence embeddings, we focus on comparing the unsupervised methods which do not assume the first few sentences form a good summary.
Results: In Table 3, predicting 100 clusters yields the best results. Notice that our method greatly alleviates the computational and sample efficiency challenges, which allows us to set the cluster number K to a relatively large value.
The results highlight the limitations of the classic representations. A single sentence embedding cannot capture the multiple facets of a sentence. On the other hand, if a sentence is represented by the embeddings of its words, it is difficult to eliminate the bias of selecting longer or shorter sentences, and a facet might be composed of multiple words (e.g., the input sentence in Table 1 describes a service, but no single word in the sentence means service).
Unsupervised Phrase Similarity
Recently, Dubossarsky, Grossman, and Weinshall (2018) discovered that the multiple embeddings of each word may not improve the performance in word similarity benchmarks even if they capture more senses or facets of polysemies. Since our method can improve the sentence similarity estimation, we want to see whether multi-facet embeddings could also help the phrase similarity estimation.
In addition to SC in equation 5, we also compute the average of the contextualized word embeddings from our transformer encoder as the phrase embedding. We find that the cosine similarity between the two phrase embeddings is a good similarity estimation, and the method is labeled as Ours Emb.
Setup: We evaluate our phrase similarity using SemEval 2013 task 5(a) English (Korkontzelos et al. 2013) and Turney 2012 (Turney 2012). The task of SemEval 2013 is to distinguish similar phrase pairs from dissimilar phrase pairs. In Turney (5), given each query bigram, each model predicts the most similar unigram among 5 candidates, and Turney (10) adds 5 more negative phrase pairs by pairing the reverse of the query bigram with the 5 unigrams.
Results: The performances are presented in Table 4. Ours (K=1) is usually slightly better than Ours (K=10), and this result supports the finding of Dubossarsky, Grossman, and Weinshall (2018). We hypothesize that, unlike sentences, most phrases have only one facet/sense and can thus be modeled well by a single embedding. In Appendix B.1, the hypernym detection results also support this hypothesis. Even though slightly worse, the performances of Ours (K=10) remain strong compared with the baselines. This implies that the similarity performance is not sensitive to the number of clusters as long as a sufficiently large K is used, because the model is able to output multiple nearly duplicated codebook embeddings to represent one facet (e.g., using two centers to represent the facet related to company in Figure 1). This flexibility alleviates the issue of selecting K in practice. Finally, the strong performances in Turney (10) verify that our encoder respects the word order when composing the input sequence.

Table 4: Performance on phrase similarity tasks. Every model is trained on a lowercased corpus. In SemEval 2013, AUC (%) is the area under the precision-recall curve for classifying similar phrase pairs. In Turney, we report the accuracy (%) of predicting the correct similar phrase pair among 5 or 10 candidate pairs. The results with † are taken from Yu and Dredze (2015).
Related Work
Topic modeling (Blei, Ng, and Jordan 2003) has been extensively studied and widely applied due to its interpretability and flexibility of incorporating different forms of input features (Mimno and McCallum 2008). Cao et al. (2015); Srivastava and Sutton (2017) demonstrate that neural networks could be applied to discover semantically coherent topics. Instead of optimizing a global topic model, our goal is to efficiently discover different sets of topics/clusters on the words beside each (unseen) phrase or sentence.
Recently, Gupta et al. (2019) and Gupta et al. (2020) discover that global clustering could improve the representation of sentences and documents. In our work, we show that a local clustering could be used in several downstream applications, including word importance estimation for measuring sentence similarity. Whether combining global clustering and local clustering could lead to a further improvement is an interesting future research direction.
Sparse coding on the word embedding space has been used to model the multiple facets of a word (Faruqui et al. 2015; Arora et al. 2018), and parameterizing word embeddings using neural networks has been used to test hypotheses (Han et al. 2018) and to save storage space (Shu and Nakayama 2018). Besides, to capture asymmetric relations such as hypernymy, words have been represented as single or multiple regions in Gaussian embeddings (Vilnis and McCallum 2015; Athiwaratkun and Wilson 2017) rather than as single points. However, the challenges of extending these methods to longer sequences are not addressed in these studies.
One of our main challenges is to design a loss for learning to predict cluster centers while modeling the dependency among the clusters. This requires a matching step between two sets and computing the distance loss after the matching (Eiter and Mannila 1997). One popular loss is called Chamfer distance, which is widely adopted in the autoencoder models for point clouds (Yang et al. 2018a;Liu et al. 2019), while more sophisticated matching loss options are also proposed (Stewart, Andriluka, and Ng 2016; Balles and Fischbacher 2019). The goal of the previous studies focuses on measuring symmetric distances between the ground truth set and predicted set (usually with an equal size), while our loss tries to reconstruct the ground truth set using much fewer codebook embeddings.
Other ways to achieve a permutation-invariant loss for neural networks include sequential decision making. In contrast, our goal is to efficiently predict a set of cluster centers that can well reconstruct the set of observed instances rather than directly predicting the observed instances.
Conclusions
In this work, we propose a framework for learning the co-occurring distribution of the words surrounding a sentence or a phrase. Even though only a few words usually co-occur with each sentence, we demonstrate that the proposed models can learn to predict interpretable cluster centers conditioned on an (unseen) sentence.

In the sentence similarity tasks, the results indicate that the similarity between two sets of multi-facet embeddings correlates well with human judgments, and we can use the multi-facet embeddings to estimate word importance and improve various widely used similarity measurements in a pre-trained word embedding space such as GloVe. In a single-document extractive summarization task, we demonstrate that multi-facet embeddings significantly outperform a classic unsupervised sentence embedding or individual word embeddings. Finally, the results of the phrase similarity tasks suggest that a single embedding might be sufficient to represent the co-occurring word distribution of a phrase.

Acknowledgments

This work was supported in part using high-performance computing equipment obtained under a grant from the Collaborative R&D Fund managed by the Massachusetts Technology Collaborative, and in part by the National Science Foundation (NSF) grant numbers DMR-1534431 and IIS-1514053.
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
Ethics Statement
We propose a novel framework, neural architecture, and loss to learn multi-facet embedding from the co-occurring statistics in NLP. In this study, we exploit the co-occurring relation between a sentence and its nearby words to improve the sentence representation. In our follow-up studies, we discover that the multi-facet embeddings could also be used to learn other types of co-occurring statistics. For example, we can use the co-occurring relation between a scientific paper and its citing paper to improve paper recommendation methods in Bansal, Belanger, and McCallum (2016). Paul, Chang, and McCallum (2021) use the co-occurring relation between a sentence pattern and its entity pair to improve relation extraction in Verga et al. (2016). Chang et al. (2021) use the co-occurring relation between a context paragraph and its subsequent words to control the topics of language generation. In the future, the approach might also be used to improve the efficiency of document similarity estimation (Luan et al. 2020).
On the other hand, we improve the sentence similarity and summarization tasks in this work using the assumption that important words are more likely to appear in the nearby sentences. The assumption might be violated in some domains and our method might degrade the performances in such domains if the practitioner applies our methods without considering the validity of the assumption.
A Structure of Appendix
We conduct more comprehensive experiments and analyses in Section B. The details of our method and experiments (e.g., training algorithm, preprocessing, and hyperparameter settings) are presented in Section C, and we visualize more codebook embeddings and the derived attention weights of the sentences in Section D.
B More Experiments
In the main paper, we show that multi-facet embeddings can improve the estimation of symmetric relations like similarity. To know whether they are also useful in asymmetric relations like entailment, we test our method on a hypernym detection dataset in Section B.1.
Due to the page limits, we cannot present all of our results in the main paper, so we put more comprehensive analyses for sentence similarity tasks in Section B.2, for extractive summarization in Section B.3, and for phrase similarity tasks in Section B.4. We also present the results of BERT Large model in Section B.5 as a reference. Section B.6 and B.7 provide some motivating examples for a sentence similarity task and for the extractive summarization, respectively.
B.1 Unsupervised Hypernymy Detection
We apply our model to HypeNet (Shwartz, Goldberg, and Dagan 2016), an unsupervised hypernymy detection dataset, based on the assumption that the co-occurring words of a phrase are often less related to some of its hyponyms. For instance, animal is a hypernym of brown dog, and flies is a co-occurring word of animal that is less related to brown dog.
Accordingly, the predicted codebook embeddings of a hyponym $S_q^{hypo}$ (e.g., brown dog), which cluster the embeddings of its co-occurring words (e.g., eats), often reconstruct the embeddings of its hypernym $S_q^{hyper}$ (e.g., animal) better than the other way around (e.g., the embedding of flies cannot reconstruct the embeddings of brown dog well). That is, $Er(\hat{F}_u(S_q^{hypo}), W(S_q^{hyper}))$ is smaller than $Er(\hat{F}_u(S_q^{hyper}), W(S_q^{hypo}))$. Based on this assumption, our asymmetric scoring function is defined as

$$\text{Diff}(S_q^{hyper}, S_q^{hypo}) = Er(\hat{F}_u(S_q^{hyper}), W(S_q^{hypo})) - Er(\hat{F}_u(S_q^{hypo}), W(S_q^{hyper})), \quad (8)$$

where the $Er$ function is defined in equation 1. The AUC of detecting hypernymy among other relations and the accuracy of detecting the hypernym direction are compared in Table 5. Our methods outperform the baselines, which only provide symmetric similarity measurements, and Ours (K=1) performs similarly to Ours (K=10).
B.2 More Analysis on Sentence Similarity
We design more experiments and present the results in Table 6 and Table 7 in order to answer the following research questions.
1. Is ignoring the order of co-occurring words effective in emphasizing the semantic side of the sentences?

Table 6: Pearson correlation (%) on the STS benchmarks. w2v means Word2Vec. Our * (k-means) means using the k-means loss rather than the NNSC loss. Our * (LSTM) means replacing the transformers in our encoder with a bi-LSTM and replacing our transformer decoder with an LSTM. Other abbreviations and symbols share the same meaning as in Table 2.
To answer this question, we replace our transformer encoder with a bi-LSTM and our transformer decoder with an LSTM. This architecture then becomes very similar to skip-thoughts (Kiros et al. 2015), except that skip-thoughts decodes a sequence instead of a set and we ignore the word order in the nearby sentences. As we can see in Table 6, Our c (LSTM) K10 SC performs much better than Skip-thought Cosine, which computes the cosine similarity between the skip-thoughts sentence embeddings. This result further justifies our approach of ignoring the order of co-occurring words in our NNSC loss.

2. Is our word importance estimation generally useful for composing (contextualized) word embedding models?
We cannot apply our attention weights (i.e., Our a) to BERT because BERT uses word piece tokenization. Instead, we use the top layer of ELMo (Peters et al. 2018) as the contextualized word embedding and apply the $\frac{\alpha}{\alpha + p(w)}$ weighting multiplied with our attention weights in equation 6. The results in Table 6 show that the performance of ELMo Prob_avg can also be boosted by our attention weighting even though our model is trained on the GloVe semantic space. The importance weights from multiple embeddings can also help boost the performance of a version of Sent2Vec (Pagliardini, Gupta, and Jaggi 2018b) that uses only unigram information.
3. Could our model be trained on a word embedding space other than GloVe?
First, we train Word2Vec (Mikolov et al. 2013) (denoted as w2v) on the Wikipedia 2016 corpus. We then train our multi-facet embeddings to fit the Word2Vec embedding of co-occurring words in the Wikipedia 2016 corpus. The results in Table 6 show that Our a (w2v) K10 improves the performance using different scoring functions as we did in GloVe space.
4. How well could clustering-based multi-facet embeddings perform on long text sequences such as sentences?
Many of the testing sentences in the STS benchmark are not observed in our training corpus. To test clustering-based multi-facet embeddings, we first average the word embeddings in every sentence into a sentence embedding, and for each testing query sentence, we perform an approximate nearest neighbor search using a KDTree (Bentley 1975) to retrieve the 1000 most similar sentences. Then, we remove the stop words from these 1000 sentences and perform NNSC clustering on the remaining words. Finally, we compute the SC distance between the two sets of cluster centers derived from each testing sentence pair, and denote the baseline as NNSC clustering K10 SC in Table 6.
The testing time of this baseline is much longer than that of the proposed method due to the need for a nearest neighbor search, and its performance is also much worse. This result justifies our approach of directly predicting the cluster centers to generate multi-facet embeddings.
5. How much better is the NNSC loss compared with the k-means loss?
In the Method section, we mention that we adopt NNSC rather than k-means in our loss because the k-means loss cannot generate diverse cluster centers in any of the neural architectures (including transformers and bi-LSTMs) we tried. We hypothesize that the k-means loss does not stably encourage the predicted clusters to play different roles in reconstructing the embeddings of the observed co-occurring words. We present the much worse results of the model using the k-means loss in Table 6 to justify the use of NNSC in our loss.
6. Could our method improve the similarity estimation on all kinds of datasets?
In Table 7, we compare the performance before and after applying our attention weights on the English part of STS 2012 (Agirre et al. 2012), 2013 (Agirre et al. 2013), 2014 (Agirre et al. 2014), 2015 (Agirre et al. 2015), and 2016 (Agirre et al. 2016). We categorize the datasets from different years based on either their source (forum, news, definition, caption, and education) or their characteristic (out of domain or similar).
Out of domain means the testing sentences are very different from our training corpus, Wikipedia 2016. deft-news from STS 2014 is included in this category because all the sentences in the dataset are lowercased. Similar means there are many pairs in the dataset whose two sentences have almost identical meanings.
From Table 7, we can see that GloVe Prob_avg and GloVe Prob_WMD perform well compared with the other baselines, and the attention weights from our multi-facet embeddings stably boost GloVe Prob_avg and GloVe Prob_WMD except in the categories education, out of domain, and similar. Thus, we recommend adopting our method when the sources of the training and testing sentences are not too different from each other and the task is not to identify duplicated sentences.
7. Are supervised methods such as sentence-BERT sensitive to the training data?

Table 8 compares the performance of sentence-BERT (Reimers and Gurevych 2019) trained on different data sources. We observe that the performance of sentence-BERT can degrade when the distribution of the training data is very different from that of the testing data. For example, sentence-BERT does not perform well when the training sentence pairs tend to be similar to each other (e.g., in postediting and SMTeuroparl) or come from a writing style that differs from the style of the testing sentence pairs (e.g., tweet-news and answers-students).
Furthermore, a supervised model trained with a limited amount of labels can perform worse than the unsupervised alternatives. For example, on STSB Dev, the weighted average of the word embeddings (Prob_avg) output by the BERT base model outperforms sentence-BERT trained on 100 labels on average. The sentence-BERT model trained on SMTeuroparl is even worse than simply averaging all the contextualized word embeddings of BERT on STSB Test.
B.3 Summarization Comparison Given the Same Summary Length
In Section 3.4, we compare our methods with other baselines when all the methods choose the same number of sentences. We suspect that the bad performance of the W Emb (*) methods (i.e., representing each sentence using the embeddings of the words in the sentence) might come from a tendency to select shorter sentences. To verify this hypothesis, we plot the R-1 performance of the different unsupervised summarization methods that do not use sentence order information versus the summary length in Figure 4. In the figure, we first observe that Ours (K=100) significantly outperforms W Emb (GloVe) and Sent Emb (GloVe) when the summaries have similar lengths. In addition, we find that W Emb (*) usually outperforms Sent Emb (*) when comparing summaries of similar length. Notice that this comparison might not be fair because W Emb (*) is allowed to select more sentences given the same summary length, and it might be easier to cover more topics in the document using more sentences. In practice, avoiding the selection of many short sentences might be preferable in extractive summarization if fluency is an important factor.
Nevertheless, suppose our goal is simply to maximize the ROUGE F1 score given a fixed summary length without accessing the ground truth summary or the sentence order information. In that case, the figure indicates that Ours (K=100) significantly outperforms W Emb (GloVe) and is the best choice when the summary length is less than around 50 words, while W Emb (BERT) becomes the best method for longer summaries. The BERT in this figure is the BERT base model. The mixed results suggest that combining our method with BERT might be a promising direction for getting the best performance in this task (e.g., using the contextualized word embeddings from BERT as our pre-trained word embedding space).
B.4 Experiments on More Phrase Similarity Datasets
We conduct the phrase similarity experiments on two recently proposed datasets, BiRD (Asaadi, Mohammad, and Kiritchenko 2019), and WikiSRS (Newman-Griffis, Lai, and Fosler-Lussier 2018), which contain ground truth phrase similarities derived from human annotations. BiRD and WikiSRS-Rel measure the relatedness of phrases and WikiSRS-Sim measures the similarity of phrases. The phrases are proper nouns in WikiSRS and are mostly common nouns in BiRD. Since the main goal of WikiSRS is to test the entity representation, we also test the different models trained on the corpus without lowercasing all the words. The results are presented in Table 9. The multi-facet embedding performs similarly compared with single-facet embedding and is better than other baselines. This result confirms our findings in the main paper that the phrase similarity performance is not sensitive to the number of clusters K.
B.5 Comparison with BERT Large
In Table 12, we compare the size and running time of different models for sentence representation. As mentioned in Section 3.1, our model has fewer parameters than the BERT base model and uses far fewer computational resources for training, so we only present the BERT Base performance in the experiment sections. Nevertheless, we still wonder how well BERT Large can perform in these unsupervised semantic tasks, so we compare our method with BERT Large in Tables 13-16. As we can see, BERT Large is usually better than BERT Base in the similarity tasks but performs worse in the hypernym detection task. BERT's performance gains in the similarity tasks might imply that training a larger version of our model is a promising future direction.
B.6 Motivating Examples in Sentence Similarity
In order to further understand when and why our methods perform well, we present some sentence pairs from the MSRvid dataset in STS 2012, on which our methods do well, in Tables 17 and 18.
In Table 17, the first two sentence pairs have relatively high similarities but a low ratio of overlapping words, so the baseline based on average word embeddings (i.e., Avg) underestimates the similarities. Softly removing the stop words (i.e., Prob_avg) alleviates the problem, but the inverse frequency of a word does not completely align with the importance of the word in the sentence.
We visualize our predicted word importance and codebook embeddings in Table 18. Combining the estimated word importance with the inverse word frequency (i.e., Prob_avg + Our a) improves the performance. Finally, computing the similarity between the codebook embeddings (i.e., Our c) leads to the best results. The reason for the improvement might be that the unimportant words in a sentence often do not significantly affect the co-occurring word distribution. Take the second sentence pair as an example: mentioning "with the big eyes" does not change the sentence's meaning and facets much.
On the contrary, the last sentence pair in Table 17 has a low similarity but relatively high word overlap. Our model can infer that riding a horse is very different from riding an elephant because their co-occurring word distributions are different. The appearance of riding a horse implies that we are more likely to observe a race topic in nearby sentences, while riding an elephant increases the chance of seeing a movie topic instead.
B.7 Motivating Examples in Extractive Summarization
In Table 19, we show the top three sentences that different methods choose to summarize a story about a photographer, Erik Johansson, and his artwork. In this document, Lead-3 does not cover its main points because this article starts with a preamble. Our method selects the first sentence as a good summary because it highlights the main character of the story, Erik Johansson, and his art style. The selected sentences contain the aspects that cover several topics in the whole document.
Average word embedding baselines, Sent_Emb (GloVe) and Sent_Emb (BERT), select the sentences that focus on describing how his artwork is created. Nevertheless, the sentences are hard to understand without the context in the article. We hypothesize that the methods tend to avoid selecting the sentences with diverse aspects because after averaging the word embeddings, the resulting single embedding is not close to the embedding of words in the documents.
Finally, W_Emb (GloVe) and W_Emb (BERT) tend to select shorter sentences because we normalize the objective function by the sentence lengths. It is hard to remove the bias of selecting shorter or longer sentences because each sentence is represented by a different number of embeddings.

Table 9: Performances of phrase similarity tasks. In BiRD and WikiSRS, the correlation coefficient (%) between the predicted similarity and the ground truth similarity is presented.

C Experimental Details
C.1 Training
The training algorithm of the non-negative sparse coding (NNSC) loss can be seen in Algorithm 1. Given the computational resource constraints, we keep our model simple enough to have the training loss nearly converged after 1 or 2 epochs. Since training takes a long time, we do not finetune the hyper-parameters in our models. We use a much smaller model than BERT, but the architecture details in our transformer and most of its hyper-parameters are the same as those used in BERT. The sparsity penalty weight λ on the coefficient matrix in equation 1 is set to 0.4. The maximal sentence length is set to 50, and we ignore sentences longer than that. The maximal number of co-occurring words is set to 30 (after removing the stop words), and we sub-sample the words if there are more in the previous and next sentences. All words occurring less than 100 times in the training corpus are mapped to <unk>.
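To make the inner step of Algorithm 1 concrete, the following is a minimal PyTorch sketch of the constrained coefficient fit; the projected-gradient solver, its step size, and the variable names are our own illustrative assumptions, since we do not prescribe a particular solver here.

```python
import torch

def fit_coefficients(F_It, W, lam=0.4, steps=50, lr=0.1):
    """Sketch of  M = argmin_M ||F(I_t) M - W||^2 + lam * ||M||_1
    subject to 0 <= M[k, j] <= 1 (the coefficient fit in Algorithm 1).
    F_It: d x K matrix whose columns are the K codebook embeddings F(I_t).
    W:    d x V matrix whose columns are target word embeddings
          (co-occurring words W(N_t) or random words W(N_rt))."""
    K, V = F_It.shape[1], W.shape[1]
    M = torch.full((K, V), 0.5, requires_grad=True)
    opt = torch.optim.SGD([M], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((F_It @ M - W) ** 2).sum() + lam * M.abs().sum()
        loss.backward()
        opt.step()
        with torch.no_grad():
            M.clamp_(0.0, 1.0)  # project back onto the box constraint
    return M.detach()           # M_Ot / M_Rt are treated as constants afterwards
```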
The number of dimensions in the transformers is set to 300. For sentence representation, the dropout on attention is 0.1. The number of transformer layers on the decoder side is 5 for K = 10, and it is set to 1 for K = 1 because we do not need to model the dependency of output codebook embeddings. For phrase representation, the number of transformer layers on the decoder side is 2, and the dropout on attention is 0.5.
All the architecture and hyperparameters (except the number of codebook embeddings) in our models are determined by the validation loss of the self-supervised co-occurring word reconstruction task in equation 2. The number of codebook embeddings K is chosen by the performance on the training data of each task, but we observe that the performances are usually not sensitive to this number as long as K is large enough, as shown in our phrase experiments. Furthermore, we suspect that the slight performance drops of models with too large a K might simply be caused by the fact that a larger K needs a longer training time, and 1 week of training is insufficient to make the model converge.
We use RegexpParser in NLTK (Bird, Klein, and Loper 2009) to detect the phrase boundary. We use the grammar NP: <JJ.*>*<VBG>*<NN.*>+. The sentence boundaries are detected using the rule-based pipeline in spaCy,6 and POS tags are also detected using spaCy.
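As a concrete illustration, the grammar above can be applied with NLTK roughly as follows; note that RegexpParser expects the pattern wrapped in curly braces, and the POS-tagged input here is a made-up example rather than output from our pipeline.

```python
from nltk import RegexpParser

# The grammar from the text, with the braces RegexpParser expects:
# optional adjectives, optional gerunds, then one or more nouns.
grammar = "NP: {<JJ.*>*<VBG>*<NN.*>+}"
chunker = RegexpParser(grammar)

# A POS-tagged sentence (tags as produced by spaCy or nltk.pos_tag).
tagged = [("we", "PRP"), ("train", "VBP"),
          ("deep", "JJ"), ("learning", "VBG"), ("models", "NNS")]

tree = chunker.parse(tagged)
phrases = [" ".join(tok for tok, _ in subtree.leaves())
           for subtree in tree.subtrees() if subtree.label() == "NP"]
print(phrases)  # ['deep learning models']
```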
The lowercased list we use for removing stop words includes @-@, =, <eos>, <unk>, disambiguation, etc, etc., -, @card@, ∼, -, _, @, ^, &, *, <, >, (, ), \, |, {, }, ], [, :, ;, ', ", /, ?, !, ., 't, 'd, 'll, 's, 'm, 've, a, about, above, after, again, against, all, am, an, and, any, are, aren, as, at, be, because, been, before, being, below, between, both, but, by, can, cannot, could, couldn, did, didn, do, does, doesn, doing, don, down, during, each, few, for, from, further, had, hadn, has, hasn, have, haven, having, he, her, here, hers, herself, him, himself, his, how, i, if, in, into, is, isn, it, its, itself, let, me, more, most, mustn, my, myself, no, nor, not, of, off, on, once, only, or, other, ought, our, ours, ourselves, out, over, own, same, she, should, shouldn, so, some, such, than, that, the, their, theirs, them, themselves, then, there, these, they, this, those, through, to, too, under, until, up, very, was, wasn, we, were, weren, what, when, where, which, while, who, whom, why, with, won, would, wouldn, you, your, yours, yourself, yourselves.
C.2 Testing
The dataset sizes for sentence representation and phrase representation are summarized in Table 10 and Table 11, respectively. In our phrase experiments, we report the test sets of SemEval 2013 and Turney. For the Turney dataset, we follow the evaluation setup of Yu and Dredze (2015) and Huang, Ji et al. (2017), which ignores the two unigram candidates contained in the target phrase, because the original setup (Turney 2012) is too difficult for unsupervised methods to get a meaningful score (e.g., the accuracy of GloVe Avg is 0 in the original setting). For skip-thoughts, the hidden embedding size is set to 600. To make the comparison fair, we retrain skip-thoughts on Wikipedia 2016 for 2 weeks.
D Randomly Sampled Examples
We visualize the predicted codebook embeddings and the attention weights computed using equation 6 from 10 randomly selected sentences in our validation set (so most of them are unseen in our training corpus).
The first line of each example is always the preprocessed input sentence, where <unk> means an out-of-vocabulary placeholder. The attention weights are visualized using a red background. If one word is more likely to be similar to the words in nearby sentences, it gets more attention and is thus highlighted using a darker red color.
The format of visualized embeddings is similar to Table 1. Each row's embedding is visualized by the nearest five neighbors in a GloVe embedding space and their cosine similarities to the codebook embedding.
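A minimal sketch of how such a visualization can be produced from a pre-trained GloVe matrix; the function and variable names, and the plain cosine-similarity search, are our own illustrative choices.

```python
import numpy as np

def nearest_neighbors(codebook_vec, glove_matrix, vocab, top_k=5):
    """Return the top_k GloVe words closest to one codebook embedding,
    together with their cosine similarities (the format used in Table 1)."""
    v = codebook_vec / (np.linalg.norm(codebook_vec) + 1e-8)
    G = glove_matrix / (np.linalg.norm(glove_matrix, axis=1, keepdims=True) + 1e-8)
    sims = G @ v
    idx = np.argsort(-sims)[:top_k]
    return [(vocab[i], float(sims[i])) for i in idx]

# Example usage with random stand-in data:
vocab = ["music", "song", "girl", "stage", "star"]
glove_matrix = np.random.randn(len(vocab), 300)
codebook_vec = np.random.randn(300)
print(nearest_neighbors(codebook_vec, glove_matrix, vocab))
```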
Table 19 content (Method | Index | Selected Sentence):
Ground Truth | NA | Swedish photographer, Erik Johansson, spends months photographing images to build up to the finished picture.
Ground Truth | NA | Each image is made up of hundreds of separate shots and painstakingly detailed work by the expert retoucher.
Ground Truth | NA | Erik, 30, said: 'Can I put this very weird idea in a photograph and make it look like it was just captured?'
Lead-3 | 1 | Thought the black and blue dress was an optical illusion?
Lead-3 | 2 | It's nothing compared to these mind-boggling pictures by a Swedish photographer, artist, and Photoshop genius.
Lead-3 | 3 | Erik Johansson, 30, who is based in Berlin, Germany, says he doesn't capture moments, but instead captures ideas.
Our c (K=10) | 6 | Swedish photographer, artist, and Photoshop genius, Erik Johansson, has created mind-boggling photos like this inside-out house that look different on each glance.
Our c (K=10) | 42 | Reverse Opposite is mind-bending as, with an MC Escher drawing, the car seems both on and under the bridge at the same time.
Our c (K=10) | 1 | Thought the black and blue dress was an optical illusion?
Sent_Emb (GloVe) | 46 | Although one photo can consist of lots of different images merged into one, he always wants it to look like it could have been captured as a whole picture.
Sent_Emb (GloVe) | 25 | He cites Rene Magritte, Salvador Dali and MC Escher as artistic influences.
Sent_Emb (GloVe) | 18 | Using Photoshop, he turned the running paint into rolling fields and superimposed a photograph of a house on to the cardboard model, adding a photo of a water wheel to complete the fantastical and dramatic shot of a dreamy, bucolic landscape that seems to be falling over a cliff.
W_Emb (GloVe) | 41 | he said.
W_Emb (GloVe) | 23 | But there are tons of inspiration online.
W_Emb (GloVe) | 22 | 'I think I get more inspiration from paintings rather than photos.'
Sent_Emb (BERT) | 18 | Using Photoshop, he turned the running paint into rolling fields and superimposed a photograph of a house on to the cardboard model, adding a photo of a water wheel to complete the fantastical and dramatic shot of a dreamy, bucolic landscape that seems to be falling over a cliff.
Sent_Emb (BERT) | 41 | he said.
Sent_Emb (BERT) | 12 | He said: 'It's the challenge: can I put this very weird idea in a photograph and make it look like it was just captured?'
W_Emb (BERT) | 5 | Scroll down for video.
W_Emb (BERT) | 30 | In Closing Out, interiors and exterior meld as one in this seemingly simple tableau.
W_Emb (BERT) | 12 | He said: 'It's the challenge: can I put this very weird idea in a photograph and make it look like it was just captured?'
Welleck et al. 2018), mixture of experts (Yang et al. 2018b; Wang, Cho, and Wen 2019), beam search (Qin et al. 2019), predicting the permutation using a CNN (Rezatofighi et al. 2018), Transformers (Stern et al. 2019; Gu, Liu, and Cho 2019; Carion et al. 2020) or reinforcement learning (Welleck et al. 2019).
References

Agirre, E.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; and Guo, W. 2013. *SEM 2013 shared task: Semantic textual similarity. In *SEM.
Agirre, E.; Diab, M.; Cer, D.; and Gonzalez-Agirre, A. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. In SemEval.
Arora, S.; Liang, Y.; and Ma, T. 2017. A Simple but Tough-to-beat Baseline for Sentence Embeddings. In ICLR.
Asaadi, S.; Mohammad, S. M.; and Kiritchenko, S. 2019. Big BiRD: A Large, Fine-Grained, Bigram Relatedness Dataset for Examining Semantic Composition. In NAACL-HLT.
Athiwaratkun, B.; and Wilson, A. 2017. Multimodal Word Distributions. In ACL.
Balles, L.; and Fischbacher, T. 2019. Holographic and other Point Set Distances for Machine Learning. URL https://openreview.net/forum?id=rJlpUiAcYX.
Bansal, T.; Belanger, D.; and McCallum, A. 2016. Ask the GRU: Multi-task Learning for Deep Text Recommendations. In RecSys.
Bentley, J. L. 1975. Multidimensional binary search trees used for associative searching. Communications of the ACM 18(9): 509-517.
Bird, S.; Klein, E.; and Loper, E. 2009. Natural language processing with Python: analyzing text with the natural language toolkit. O'Reilly Media, Inc.
Blei, D. M.; Ng, A. Y.; and Jordan, M. I. 2003. Latent dirichlet allocation. Journal of Machine Learning Research 3(Jan): 993-1022.
Bojanowski, P.; Grave, E.; Joulin, A.; and Mikolov, T. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics 5: 135-146.
Cao, Z.; Li, S.; Liu, Y.; Li, W.; and Ji, H. 2015. A novel neural topic model and its supervised extension. In AAAI.
Carion, N.; Massa, F.; Synnaeve, G.; Usunier, N.; Kirillov, A.; and Zagoruyko, S. 2020. End-to-End Object Detection with Transformers. arXiv preprint arXiv:2005.12872.
Celikyilmaz, A.; Bosselut, A.; He, X.; and Choi, Y. 2018. Deep Communicating Agents for Abstractive Summarization. In NAACL-HLT.
Cer, D.; Diab, M.; Agirre, E.; Lopez-Gazpio, I.; and Specia, L. 2017. SemEval-2017 Task 1: Semantic Textual Similarity Multilingual and Crosslingual Focused Evaluation. In SemEval-2017.
Chang, H.-S.; Yuan, J.; Iyyer, M.; and McCallum, A. 2021. Changing the Mind of Transformers for Topically-Controllable Language Generation. In EACL.
Devlin, J.; Chang, M.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT.
Dubossarsky, H.; Grossman, E.; and Weinshall, D. 2018. Coming to your senses: on controls and evaluation sets in polysemy research. In EMNLP.
Gu, J.; Liu, Q.; and Cho, K. 2019. Insertion-based decoding with automatically inferred generation order. Transactions of the Association for Computational Linguistics 7: 661-676.
Gupta, V.; Saw, A.; Nokhiz, P.; Gupta, H.; and Talukdar, P. 2019. Improving document classification with multi-sense embeddings. In ECAI.
Gupta, V.; Saw, A.; Nokhiz, P.; Netrapalli, P.; Rai, P.; and Talukdar, P. 2020. P-SIF: Document embeddings using partition averaging. In AAAI.
Han, R.; Gill, M.; Spirling, A.; and Cho, K. 2018. Conditional Word Embedding and Hypothesis Testing via Bayes-by-Backprop. In EMNLP.
Hermann, K. M.; Kocisky, T.; Grefenstette, E.; Espeholt, L.; Kay, W.; Suleyman, M.; and Blunsom, P. 2015. Teaching machines to read and comprehend. In NeurIPS.
Hoyer, P. O. 2002. Non-negative Sparse Coding. In Proceedings of the 12th IEEE Workshop on Neural Networks for Signal Processing.
Huang, L.; Ji, H.; et al. 2017. Learning Phrase Embeddings from Paraphrases with GRUs. In Proceedings of the First Workshop on Curation and Applications of Parallel and Comparable Corpora.
Kiros, R.; Zhu, Y.; Salakhutdinov, R. R.; Zemel, R.; Urtasun, R.; Torralba, A.; and Fidler, S. 2015. Skip-thought vectors. In NeurIPS.
Kobayashi, H.; Noguchi, M.; and Yatsuka, T. 2015. Summarization based on embedding distributions. In EMNLP.
Korkontzelos, I.; Zesch, T.; Zanzotto, F. M.; and Biemann, C. 2013. Semeval-2013 task 5: Evaluating phrasal semantics. In SemEval 2013.
Kumar, S.; and Tsvetkov, Y. 2019. Von Mises-Fisher Loss for Training Sequence to Sequence Models with Continuous Outputs. In ICLR.
Kusner, M.; Sun, Y.; Kolkin, N.; and Weinberger, K. 2015. From word embeddings to document distances. In ICML.
Lau, J. H.; Cook, P.; McCarthy, D.; Newman, D.; and Baldwin, T. 2012. Word sense induction for novel sense detection. In EACL.
Lee, J.; Lee, Y.; Kim, J.; Kosiorek, A. R.; Choi, S.; and Teh, Y. W. 2019. Set transformer: A framework for attention-based permutation-invariant neural networks. In ICML.
Li, L. H.; Chen, P. H.; Hsieh, C.-J.; and Chang, K.-W. 2019. Efficient Contextual Representation Learning With Continuous Outputs. Transactions of the Association for Computational Linguistics 7: 611-624.
Luan, Y.; Eisenstein, J.; Toutanova, K.; and Collins, M. 2020. Sparse, Dense, and Attentional Representations for Text Retrieval. arXiv preprint arXiv:2005.00181.
Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; and Dean, J. 2013. Distributed representations of words and phrases and their compositionality. In NeurIPS.
Milajevs, D.; Kartsaklis, D.; Sadrzadeh, M.; and Purver, M. 2014. Evaluating Neural Word Representations in Tensor-Based Compositional Settings. In EMNLP.
Mimno, D. M.; and McCallum, A. 2008. Topic Models Conditioned on Arbitrary Features with Dirichlet-multinomial Regression. In UAI.
Neelakantan, A.; Shankar, J.; Passos, A.; and McCallum, A. 2014. Efficient Non-parametric Estimation of Multiple Embeddings per Word in Vector Space. In EMNLP.
Newman-Griffis, D.; Lai, A. M.; and Fosler-Lussier, E. 2018. Jointly Embedding Entities and Text with Distant Supervision. In Proceedings of the 3rd Workshop on Representation Learning for NLP (Repl4NLP).
Pagliardini, M.; Gupta, P.; and Jaggi, M. 2018a. Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. In NAACL-HLT.
Pagliardini, M.; Gupta, P.; and Jaggi, M. 2018b. Unsupervised Learning of Sentence Embeddings using Compositional n-Gram Features. In NAACL.
Paul, R.; Chang, H.-S.; and McCallum, A. 2021. Multi-facet Universal Schema. In EACL.
Rezatofighi, S. H.; et al. 2018. Deep Perm-Set Net: learn to predict sets with unknown permutation and cardinality using deep neural networks. arXiv preprint arXiv:1805.00613.
See, A.; Liu, P. J.; and Manning, C. D. 2017. Get To The Point: Summarization with Pointer-Generator Networks. In ACL.
Shu, R.; and Nakayama, H. 2018. Compressing Word Embeddings via Deep Compositional Code Learning. In ICLR.
Shwartz, V.; Goldberg, Y.; and Dagan, I. 2016. Improving Hypernymy Detection with an Integrated Path-based and Distributional Method. In ACL.
Singh, S. P.; Hug, A.; Dieuleveut, A.; and Jaggi, M. 2020. Context mover's distance & barycenters: Optimal transport of contexts for building representations. In International Conference on Artificial Intelligence and Statistics.
Srivastava, A.; and Sutton, C. A. 2017. Autoencoding Variational Inference For Topic Models. In ICLR.
Stern, M.; Chan, W.; Kiros, J.; and Uszkoreit, J. 2019. Insertion Transformer: Flexible Sequence Generation via Insertion Operations. In ICML.
Stewart, R.; Andriluka, M.; and Ng, A. Y. 2016. End-to-end people detection in crowded scenes. In CVPR.
Sutskever, I.; Vinyals, O.; and Le, Q. V. 2014. Sequence to sequence learning with neural networks. In NeurIPS.
Tieleman, T.; and Hinton, G. 2012. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4(2): 26-31.
Turney, P. D. 2012. Domain and function: A dual-space model of semantic relations and compositions. Journal of Artificial Intelligence Research.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In NeurIPS.
Verga, P.; Belanger, D.; Strubell, E.; Roth, B.; and McCallum, A. 2016. Multilingual Relation Extraction using Compositional Universal Schema. In NAACL-HLT.
Vilnis, L.; and McCallum, A. 2015. Word Representations via Gaussian Embedding. In ICLR.
Wang, T.; Cho, K.; and Wen, M. 2019. Attention-based mixture density recurrent networks for history-based recommendation. In Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data.
Welleck, S.; Brantley, K.; Daumé III, H.; and Cho, K. 2019. Non-Monotonic Sequential Text Generation. In ICML.
Welleck, S.; Yao, Z.; Gai, Y.; Mao, J.; Zhang, Z.; and Cho, K. 2018. Loss Functions for Multiset Prediction. In NeurIPS.
Yang, Y.; Feng, C.; Shen, Y.; and Tian, D. 2018a. Foldingnet: Point cloud auto-encoder via deep grid deformation. In CVPR.
Yang, Z.; Dai, Z.; Salakhutdinov, R.; and Cohen, W. W. 2018b. Breaking the softmax bottleneck: A high-rank RNN language model. In ICLR.
Figure 4: Comparing the F1 of the ROUGE-1 score on unsupervised methods that do not access the sentence order information in CNN/Daily Mail.
[Section D example visualizations (K=10). Example: 'Other immobilizing devices such as a Kendrick <unk> Device or a backboard can be used to stabilize the remainder of the spinal column.' Nearest-neighbor words for its codebook embeddings include chronic, disease, polymeric, hydrophilic, hydrophobic, backboard, hoop, dunks, column, columns. Example: '... came in, she was always bound to be loud, and boisterous. Carrie got along well with most of the waitresses, most especially Vera Louise Gorman-Novak, and Alice Hyatt.' Nearest-neighbor words include Carolyn, Joanne, endearing, downright, demeanor, Vera, Aloe, vera.]
[Model diagram. An input sentence I_t (e.g., 'Beautiful music starts. The girl sings into a microphone. <eos>' with the nearby sentence 'A star is born on the stage.') is passed through a Transformer Encoder (TE), distinct linear layers L_1 ... L_K for each input position, and a Transformer Decoder (TD), which together form the sequence-to-embeddings function F(.) producing the codebook embeddings F(I_t). A non-negative sparse coding loss (not required for testing) uses coefficient matrices M^{O_t} and M^{R_t} so that, in a pre-trained word embedding space (with words such as song, music, albums, television, girl, lady, microphone, star, actor, stage, born, begin), F(I_t) M^{O_t} reconstructs the co-occurring words W(N_t) while F(I_t) M^{R_t} fails to reconstruct the random words W(N_{r_t}).]
The scoring function is labeled as Avg. Besides, we test the sentence embedding from BERT and from skip-thought (Kiros et al. 2015) (denoted as CLS and Skip-thought Cosine, respectively).

Figure 3: Comparison of our attention weights and the output embeddings between two similar sentences from STSB. A darker red indicates a larger attention value in equation 6, and the output embeddings are visualized in the same way as in Table 1.

Sentence 1: A man is lifting weights in a garage. | Sentence 2: A man is lifting weights.
Output embeddings (nearest neighbors, Sentence 1 || Sentence 2):
e1 | can 0.872, even 0.851, should 0.850 || can 0.865, either 0.843, should 0.841
e2 | front 0.762, bottom 0.742, down 0.714 || front 0.758, bottom 0.758, sides 0.691
e3 | lifting 0.866, lift 0.663, Lifting 0.621 || lifting 0.847, lift 0.635, Lifting 0.610
e4 | garage 0.876, garages 0.715, basement 0.707 || lifting 0.837, lift 0.652, weights 0.629
e5 | decreasing 0.677, decreases 0.655, negligible 0.649 || decreasing 0.709, decreases 0.685, increases 0.682
e6 | weights 0.883, Weights 0.678, weight 0.665 || weights 0.864, weight 0.700, Weights 0.646
e7 | cylindrical 0.700, plurality 0.675, axial 0.674 || annular 0.738, cylindrical 0.725, circumferential 0.701
e8 | configurations 0.620, incorporating 0.610, utilizing 0.605 || methods 0.612, configurations 0.610, graphical 0.598
e9 | man 0.872, woman 0.682, men 0.672 || sweating 0.498, cardiovascular 0.494, dehydration 0.485
e10 | man 0.825, men 0.671, woman 0.653 || man 0.888, woman 0.690, men 0.676
Score | Model | Dev All | Dev Low | Test All | Test Low
Cosine | Skip-thought | 43.2 | 28.1 | 30.4 | 21.2
CLS | BERT | 9.6 | -0.4 | 4.1 | 0.2
Avg | BERT | 62.3 | 42.1 | 51.2 | 39.1
SC | Our c K1 | 55.7 | 43.7 | 47.6 | 45.4
SC | Our c K10 | 63.0 | 51.8 | 52.6 | 47.8
WMD | GloVe | 58.8 | 35.3 | 40.9 | 25.4
WMD | Our a K1 | 63.1 | 43.3 | 47.5 | 34.8
WMD | Our a K10 | 66.7 | 47.4 | 52.6 | 39.8
Prob_WMD | GloVe | 75.1 | 59.6 | 63.1 | 52.5
Prob_WMD | Our a K1 | 74.4 | 60.8 | 62.9 | 54.4
Prob_WMD | Our a K10 | 76.2 | 62.6 | 66.1 | 58.1
Avg | GloVe | 51.7 | 32.8 | 36.6 | 30.9
Avg | Our a K1 | 54.5 | 40.2 | 44.1 | 40.6
Avg | Our a K10 | 61.7 | 47.1 | 50.0 | 46.5
Prob_avg | GloVe | 70.7 | 56.6 | 59.2 | 54.8
Prob_avg | Our a K1 | 68.5 | 56.0 | 58.1 | 55.2
Prob_avg | Our a K10 | 72.0 | 60.5 | 61.4 | 59.3
SIF † | GloVe | 75.1 | 65.7 | 63.2 | 58.1
SIF † | Our a K1 | 72.5 | 64.0 | 61.7 | 58.5
SIF † | Our a K10 | 75.2 | 67.6 | 64.6 | 62.2
SIF † | Our a (k-means) K10 | 71.5 | 62.3 | 61.5 | 57.2
sentence-BERT (100 pairs)* | 71.2 | 55.5 | 64.5 | 58.2
Table 3: The ROUGE F1 scores of different methods on the CNN/Daily Mail dataset. The results with † are taken from Zheng and Lapata (2019). The results with * are taken from Celikyilmaz et al. (2018).
Table 5: Hypernym detection performances in the development and test set of HypeNet. AUC (%) refers to the area under the precision and recall curve, which measures the quality of retrieving hypernym phrases. Acc (%) means the accuracy of predicting specificity given a pair of hypernym phrases.
Score | Model | Dev All | Dev Low | Test All | Test Low
Cosine | Skip-thought | 43.2 | 28.1 | 30.4 | 21.2
Avg | ELMo | 65.6 | 47.4 | 54.2 | 44.1
Prob_avg | ELMo | 70.3 | 54.6 | 60.4 | 54.2
Prob_avg | Our a (GloVe) K1 | 69.3 | 54.1 | 60.8 | 55.8
Prob_avg | Our a (GloVe) K10 | 70.5 | 55.9 | 61.1 | 56.6
Avg | BERT | 62.3 | 42.1 | 51.2 | 39.1
Prob_avg | BERT | 72.1 | 57.0 | 57.8 | 55.1
Avg | Sent2Vec | 71.9 | 51.2 | 63.6 | 46.0
Avg | Our a (GloVe) K10 | 76.1 | 62.9 | 71.5 | 62.7
Avg | Our a (GloVe) K1 | 72.0 | 56.1 | 66.8 | 55.7
SC | NNSC clustering K10 | 38.6 | 37.8 | 25.4 | 38.9
SC | Our c (w2v) K10 | 54.7 | 38.8 | 43.9 | 36.0
SC | Our c (k-means) K10 | 37.8 | 25.9 | 29.5 | 19.7
SC | Our c (LSTM) K10 | 58.9 | 49.2 | 49.8 | 46.4
SC | Our c (GloVe) K10 | 63.0 | 51.8 | 52.6 | 47.8
Prob_WMD | w2v | 72.9 | 56.6 | 62.1 | 54.0
Prob_WMD | Our a (w2v) K10 | 73.6 | 60.1 | 63.5 | 57.8
Prob_avg | w2v | 68.3 | 53.7 | 54.3 | 50.9
Prob_avg | Our a (w2v) K10 | 68.3 | 56.8 | 55.1 | 53.1
SIF † | w2v | 70.5 | 56.9 | 59.4 | 54.7
SIF † | Our a (w2v) K10 | 71.6 | 60.9 | 61.3 | 57.6
Prob_WMD | GloVe | 75.1 | 59.6 | 63.1 | 52.5
Prob_WMD | Our a (k-means) K10 | 72.5 | 57.9 | 60.3 | 49.9
Prob_WMD | Our a (LSTM) K10 | 76.3 | 63.2 | 65.8 | 57.4
Prob_WMD | Our a (GloVe) K10 | 76.2 | 62.6 | 66.1 | 58.1
Prob_avg | GloVe | 70.7 | 56.6 | 59.2 | 54.8
Prob_avg | Our a (k-means) K10 | 66.6 | 53.4 | 55.8 | 51.8
Prob_avg | Our a (LSTM) K10 | 71.7 | 60.1 | 61.3 | 58.3
Prob_avg | Our a (GloVe) K10 | 72.0 | 60.5 | 61.4 | 59.3
SIF † | GloVe | 75.1 | 65.7 | 63.2 | 58.1
SIF † | Our a (k-means) K10 | 71.5 | 62.3 | 61.5 | 57.2
SIF † | Our a (LSTM) K10 | 74.6 | 66.9 | 64.3 | 60.9
SIF † | Our a (GloVe) K10 | 75.2 | 67.6 | 64.6 | 62.2
Table 7: Comparing Pearson correlation (%) of different unsupervised methods from STS 2012 to STS 2016. We highlight the best performance in each of the three blocks.
Table 8: The Pearson correlation (%) of sentence-BERT on the STS benchmark. The sentence-BERT is initialized by the BERT Base model and trained on 100 samples from each data source. All results are the average of three runs. The order of rows is determined by their performance on the test set of STSB.
Table 10: Dataset sizes for sentence representations.

Table 11: Dataset sizes for phrase representations.
Similarity: SemEval 2013 (Test): 7,814 | Turney2012 (Test): 1,500 | BiRD (Test): 3,345 | WikiSRS Sim: 688 | WikiSRS Rel: 688
Hypernym: HypeNet (Val): 3,534 | HypeNet (Test): 17,670
Method | Hidden size | #Parameters | Testing Time
K=1 | 300 | 6.7M | 9 ms
K=10 | 300 | 13.7M | 18 ms
BERT Base | 768 | 86.0M | 18 ms
BERT Large | 1024 | 303.9M | 65 ms

Table 12: Comparison of model sizes. The number of parameters does not include the word embedding layer. We show the test time required for a batch with 50 sentences using one 1080Ti GPU.
Model | Score | Dev All | Dev Low | Test All | Test Low
BERT Base | Prob_avg | 72.1 | 57.0 | 57.8 | 55.1
BERT Large | Prob_avg | 74.3 | 61.0 | 65.0 | 60.0
Our a (GloVe) K10 | Prob_avg | 72.0 | 60.5 | 61.4 | 59.3
Our a (GloVe) K10 | Prob_WMD | 76.2 | 62.6 | 66.1 | 58.1
Table 13: Compare BERT Large with Ours in Table 2.

Method | R-1 | R-2 | Len
BERT Base, W Emb | 31.2 | 11.2 | 44.9
BERT Base, Sent Emb | 32.3 | 10.6 | 91.2
BERT Large, W Emb | 31.1 | 11.0 | 46.8
BERT Large, Sent Emb | 32.7 | 10.9 | 86.5
Our c (K=100), Bases | 35.0 | 12.8 | 92.9
Table 14: Compare BERT Large with Ours in Table 3.

[Table 15 column headers: Lowercased, Uppercased.]
Table 15: Compare BERT Large with Ours in Table 4.

Algorithm 1: Training using NNSC loss.
Input: Training corpus, sequence boundaries, and pre-trained word embedding.
Output: F
Initialize F
foreach I_t, W(N_t), W(N_rt) in training corpus do
    Run forward pass on encoder and decoder to compute F(I_t)
    Compute M_Ot = argmin_M ||F(I_t)M - W(N_t)||^2 + λ||M||_1, subject to 0 ≤ M_{k,j} ≤ 1 for all k, j
    Compute M_Rt = argmin_M ||F(I_t)M - W(N_rt)||^2 + λ||M||_1, subject to 0 ≤ M_{k,j} ≤ 1 for all k, j
    Run forward pass to compute L_t in equation 2
    Treat M_Ot and M_Rt as constants, update F by backpropagation
end
Method | Dev AUC | Dev Acc | Test AUC | Test Acc
BERT Base (Avg) | 25.6 | 50 | 25.6 | 50
BERT Large (Avg) | 20.2 | 50 | 20.1 | 50
Ours (K=1) | 29.3 | 82.7 | 29.6 | 81.0
Table 16: Compare BERT Large with Ours in Table 5.
Sentence 1 | Sentence 2 | GT Score | Score rank among 1500 pairs (GT | Our c | Prob_avg + Our a | Prob_avg | Avg)
A turtle walks over the ground. | A large turtle crawls in the grass. | 3.75 | 326 | 400 | 638 | 717 | 761
The animal with the big eyes is eating. | A slow loris is eating. | 2.60 | 690 | 611 | 1001 | 1223 | 1370
A man is riding on a horse. | A woman is riding an elephant. | 1.53 | 1021 | 869 | 722 | 549 | 540
Table 17: Motivating examples for a sentence similarity task. The sentences are image captions from the MSRvid dataset in STS 2012. GT means ground truth. All our methods here set K = 10.

[Table 18 visualization: the predicted word importance and K=10 codebook embeddings for Sentence 1 'A turtle walks over the ground.' and Sentence 2 'A large turtle crawls in the grass.']
Table 18: The predicted word importance and codebook embeddings on sentences from Table 17. The way of visualization is the same as that in Section D.
Table 19: Motivating examples for extractive summarization. The sentences come from a document in the validation set of CNN/Daily Mail. Index indicates the sentence order in the document. Ground truth means the summary from humans. The sentences in each method are ranked by their selection order. For example, our method selects the 6th sentence in the document first.
The self-supervised signal is a generalization of the loss for prediction-based word embeddings like Word2Vec (Mikolov et al. 2013). They are the same when the input sequence length $|I_t|$ is 1.
The decoder can also be viewed as another Transformer encoder which attends to the output of the first encoder and models the dependency between predicted cluster centers.
nlp.stanford.edu/projects/glove/
... in the input sentence, which explains why attending the contextualized word embeddings in our decoder could improve the quality of the output embeddings.
Although equation 7 weights each word in the document, we find that the weighting $\frac{\alpha}{\alpha + p(w)}$ does not improve the sentence representation when averaging the word embeddings.
The number is different from the one reported in Asaadi, Mohammad, and Kiritchenko (2019) because we use the uncased version (42B), which is the embedding space our model is trained on, while they use the cased version (840B).
spacy.io/
Acknowledgements
We thank Ao Liu and Mohit Iyyer for many helpful discussions and Nishant Yadav for suggesting several related works. We also thank the anonymous reviewers for their constructive feedback. This work was supported in part by the Center for Data Science and the Center for Intelligent Information Retrieval, in part by the Chan Zuckerberg Initiative under the project Scientific Knowledge Base Construction, in part using high
Agirre, E.; Banea, C.; Cardie, C.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; Guo, W.; Lopez-Gazpio, I.; Maritxalar, M.; Mihalcea, R.; Rigau, G.; Uria, L.; and Wiebe, J. 2015. Semeval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In SemEval.
Agirre, E.; Banea, C.; Cardie, C.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; Guo, W.; Mihalcea, R.; Rigau, G.; and Wiebe, J. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In SemEval.
Agirre, E.; Banea, C.; Cer, D.; Diab, M.; Gonzalez-Agirre, A.; Mihalcea, R.; Rigau, G.; and Wiebe, J. 2016. Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In SemEval.
Mitigating Data Sparsity for Short Text Topic Modeling by Topic-Semantic Contrastive Learning

Xiaobao Wu (xiaobao002@e.ntu.edu.sg) and Anh Tuan Luu, Nanyang Technological University; Xinshuai Dong, Carnegie Mellon University

Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2748-2760, December 7-11, 2022
Abstract

To overcome the data sparsity issue in short text topic modeling, existing methods commonly rely on data augmentation or the data characteristic of short texts to introduce more word co-occurrence information. However, most of them do not make full use of the augmented data or the data characteristic: they insufficiently learn the relations among samples in data, leading to dissimilar topic distributions of semantically similar text pairs. To better address data sparsity, in this paper we propose a novel short text topic modeling framework, Topic-Semantic Contrastive Topic Model (TSCTM). To sufficiently model the relations among samples, we employ a new contrastive learning method with efficient positive and negative sampling strategies based on topic semantics. This contrastive learning method refines the representations, enriches the learning signals, and thus mitigates the sparsity issue. Extensive experimental results show that our TSCTM outperforms state-of-the-art baselines regardless of the data augmentation availability, producing high-quality topics and topic distributions. 1
Introduction
Topic models aim to discover the latent topics of a document collection and infer the topic distribution of each document in an unsupervised fashion (Blei et al., 2003). Due to their effectiveness and interpretability, topic models have been popular for decades with various downstream applications (Ma et al., 2012; Mehrotra et al., 2013; Boyd-Graber et al., 2017). However, despite the success on long texts, current topic models generally cannot handle short texts well, such as tweets, headlines, and comments (Yan et al., 2013). The reason lies in that topic models rely on word co-occurrence information to infer latent topics, but such information is extremely scarce in short texts (Qiang et al., 2020). This issue, referred to as data sparsity, can hinder state-of-the-art topic models from discovering high-quality topics and thus has attracted much attention.

[Figure 1: (a) Examples of short texts: $x^{(i)}$ 'april fool's jokes range from hilarious to disastrous'; its augmented view $x^{(i)}_+$ 'april sucker's laugh range from hilarious to disastrous'; $x^{(\ell)}$ 'april fools' joke goes wrong for cleveland woman'; $x^{(j)}$ 'should airlines create separate sections for kids, larger fliers?'. (b) Similarities of topic distributions learned by NQTM between $x^{(i)}$ and $x^{(i)}_+$, $x^{(\ell)}$, $x^{(j)}$: 0.479, 0.377, and 0.922 respectively. (c) The corresponding similarities learned by our TSCTM.]
To overcome the data sparsity issue, traditional wisdom can be mainly categorized into two lines: (i) Augment datasets with more short texts containing similar semantics (Phan et al., 2008;Jin et al., 2011;Chen and Kao, 2015). This way can feed extra word co-occurrence information to topic models. (ii) Due to the limited context, many short texts in the same collection, such as tweets from Twitter, tend to be relevant, sharing similar topic semantics (Qiang et al., 2020); to leverage this data characteristic, models such as DMM (Yin and Wang, 2014;Li et al., 2016) and state-of-the-art NQTM (Wu et al., 2020b) learn similar topic distributions from relevant samples. These two lines of thought have been shown to achieve good performance and mitigate data sparsity to some extent.
However, existing short text topic models make full use of neither the augmented data nor this crucial data characteristic. To begin with, an augmented text is expected to have a similar topic distribution to the original text since they share similar topic semantics, but existing approaches tend to overlook this important relation between samples. As shown in Figure 1b, text $x^{(i)}$ and its augmented view $x^{(i)}_+$ have similar topic semantics, but their topic distributions inferred by NQTM are far from similar. Moreover, guided by the aforementioned data characteristic, state-of-the-art methods like NQTM attempt to learn similar topic distributions for relevant samples, yet they can do so inappropriately. Figure 1b shows that texts $x^{(i)}$ and $x^{(\ell)}$ are relevant, but their learned topic distributions are dissimilar; $x^{(i)}$ and $x^{(j)}$ are irrelevant, but theirs are similar. In a word, current approaches insufficiently model the relations among samples in data, which hinders fully addressing the data sparsity issue.
To better mitigate data sparsity, we in this paper propose Topic-Semantic Contrastive Topic Model (TSCTM), a novel short text topic modeling framework that unifies both cases with and without data augmentation. To be specific, TSCTM makes full use of relations among samples with a novel topic-semantic contrastive learning method. In the case without data augmentation, TSCTM effectively samples positive and negative text pairs based on topic semantics. In the case with data augmentation, TSCTM also smoothly incorporates the relations between augmented and original samples, enabling better utilization of data augmentation. Through the novel contrastive learning method, TSCTM sufficiently models the relations among samples, which enriches the learning signals, refines the learning of representations, and thus mitigates the data sparsity issue (see Figure 1c for an illustration). We summarize the main contributions of this paper as follows:
• We follow a contrastive learning perspective and propose a novel contrastive learning method with efficient positive and negative pairs sampling strategies to address the data sparsity issue in short text topic modeling.
• We propose a novel short text topic modeling framework, Topic-Semantic Contrastive Topic Model (TSCTM), which is the first such framework that concerns both cases with and without data augmentation.
• We validate our method with extensive experiments where TSCTM effectively mitigates data sparsity and consistently surpasses stateof-the-art baselines, producing high-quality topics and topic distributions.
Related Work
Topic Modeling Based on classic long text topic models (Hofmann, 1999; Blei et al., 2003; Lee et al., 2020), various probabilistic topic models for short texts have been proposed (Yan et al., 2013; Yin and Wang, 2014; Li et al., 2016; Wu and Li, 2019). They use Gibbs Sampling (Griffiths and Steyvers, 2004) for parameter inference. More recently, neural topic models have been proposed (Miao et al., 2016, 2017; Srivastava and Sutton, 2017; Card et al., 2018; Nan et al., 2019; Dieng et al., 2020; Wu et al., 2020a,b, 2021). Among those methods, the most related one to this paper is NQTM (Wu et al., 2020b). Although NQTM also uses vector quantization to aggregate the short texts with similar topics, we note that our method differs significantly in that: (i) our TSCTM framework uses the novel topic-semantic contrastive learning method that fully considers the relations among samples with effective positive and negative sampling strategies, while NQTM only considers the relations between samples with similar semantics; (ii) our TSCTM framework can adapt to the case with data augmentation by sufficiently modeling the relations brought by augmented samples, achieving higher performance gains, while NQTM cannot fully incorporate such relations.
Contrastive Learning The idea of contrastive learning is to measure the similarity relations of sample pairs in a representation space (Hadsell et al., 2006;Oh Song et al., 2016;Hjelm et al., 2018;Van den Oord et al., 2018;Frosst et al., 2019;Wang et al., 2019;He et al., 2020;Wang and Isola, 2020). It has been widely explored in the visual field, such as image classification (Chen et al., 2020;Khosla et al., 2020), objective detection (Xie et al., 2021), and image segmentation (Zhao et al., 2021). For text data, some studies use contrastive loss (Gao et al., 2021;Nguyen and Luu, 2021) by sampling salient words from texts to build positive samples, but they could be inappropriate for short text topic modeling due to the limited context of short texts (shown in Sec. 5.1).
In contrast, our new framework can discover effective samples for learning contrastively based on the topic semantics and can smoothly adapt to the case with augmentations, both of which better fit the short text modeling context.
Methodology
In this section, we first review the background of topic modeling. Then we introduce topic-semantic contrastive learning, a novel approach for short text topic modeling. Finally, we put this contrastive learning into the topic modeling context and propose our Topic-Semantic Contrastive Topic Model.
Notations and Problem Setting
Our notations and problem setting of topic modeling follow LDA (Blei et al., 2003). Consider a collection of $N$ documents $\{x^{(1)}, \ldots, x^{(N)}\}$ with $V$ unique words, i.e., vocabulary size. We require to discover $K$ topics from the collection. Each topic is interpreted as its relevant words and defined as a distribution over all words (topic-word distribution): $\beta_k \in \mathbb{R}^V$. Then, $\beta = (\beta_1, \ldots, \beta_K) \in \mathbb{R}^{V \times K}$ is the topic-word distribution matrix. A topic model also infers what topics a document contains, i.e., the topic distribution of a document, denoted as $\theta \in \Delta_K$.²
Topic-Semantic Contrastive Learning
The core difference between our TSCTM and a conventional topic model lies in that we employ the novel topic-semantic contrastive learning method to model the relations among samples. As such, the learning signals are enriched through sufficiently modeling the relations among texts to address the data sparsity issue. Figure 2 illustrates our topicsemantic contrastive learning method.
Encoding Short Texts
To employ our topic-semantic contrastive learning, the first step is to encode short text inputs into a semantic space and obtain the corresponding representations and topic distributions. Specifically, we employ an encoder neural network $f_\Theta$ with parameter $\Theta$ to encode short text $x^{(i)}$ and get its representation $h^{(i)} = f_\Theta(x^{(i)})$. The topic distribution of $x^{(i)}$ is denoted as $\theta^{(i)}$ and is computed by normalizing $h^{(i)}$ into a probability simplex with a softmax function as $\theta^{(i)} = \mathrm{softmax}(h^{(i)})$. Note that we train topic distribution $\theta^{(i)}$ with a topic modeling objective, which will be introduced later.

² Here $\Delta_K$ denotes a probability simplex defined as $\Delta_K = \{\theta \in \mathbb{R}_+^K \mid \sum_{k=1}^K \theta_k = 1\}$.
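For concreteness, a minimal PyTorch sketch of this encoding step is given below; the hidden layer size and activation are illustrative assumptions rather than our exact architecture.

```python
import torch
import torch.nn as nn

class ShortTextEncoder(nn.Module):
    """Encode a Bag-of-Words vector x into a representation h = f_Theta(x)
    and a topic distribution theta = softmax(h)."""
    def __init__(self, vocab_size, num_topics, hidden=200):
        super().__init__()
        self.f_theta = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.Softplus(),
            nn.Linear(hidden, num_topics),
        )

    def forward(self, x):
        h = self.f_theta(x)                  # representation h^(i)
        theta = torch.softmax(h, dim=-1)     # topic distribution theta^(i)
        return h, theta
```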
Positive Pairs for Contrastive Learning
To utilize the vital characteristic of short texts (many short texts in a collection like Twitter tend to share similar topics due to the limited context), we propose to find those semantically similar texts and model them as positive pairs to each other for contrastive learning. Therefore, we can employ a contrastive learning objective to align those semantically similar texts in terms of representations and thus topic distributions.
However, it is non-trivial to find those semantically similar texts as positive pairs. Some previous methods like CLNTM (Nguyen and Luu, 2021) samples salient words to build positive pairs for long texts, but this way does not fit short texts well due to the extremely limited context (shown in Sec. 5.1). Differently, DMM (Yin and Wang, 2014;Li et al., 2016) follows a clustering process to aggregate short texts with similar topics, but lacks the flexibility of model design as it requires model-specific derivations for parameter inference. As such, we propose to employ vector quantization (van den Oord and Vinyals, 2017) to find positive pairs for short texts.
Specifically, as shown in Figure 2, we first quantize topic distribution $\theta^{(i)}$ to the closest embedding vector, and its quantized topic distribution $\theta_q^{(i)}$ is computed as:

$$\theta_q^{(i)} = e_{q(\theta^{(i)})}, \quad (1)$$
$$q(\theta^{(i)}) = \operatorname{argmin}_k \|\theta^{(i)} - e_k\|_2. \quad (2)$$

Here, $(e_1, e_2, \ldots, e_K) \in \mathbb{R}^{K \times K}$ are $K$ predefined embedding vectors, and $q(\cdot) \in \{1, \ldots, K\}$ outputs the index of the quantized embedding vector. These embedding vectors are initialized as different one-hot vectors before training to ensure that they are far away from each other for distinguishable quantization (Wu et al., 2020b). We then model the short texts with the same quantization indices as positive pairs, as follows:

$$\{x^{(i)}, x^{(\ell)}\} \quad \text{where } q(\theta^{(\ell)}) = q(\theta^{(i)}). \quad (3)$$
This is because topic distributions of short texts with similar semantics are learned to be quantized to the same embedding vectors.
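A sketch of the quantization step in equations (1)-(2) and the induced pairing in equations (3)-(4) follows; the batched implementation details are our own assumptions.

```python
import torch

def quantize(theta, codebook):
    """theta: (B, K) topic distributions; codebook: (K, K) embedding vectors
    initialized as distinct one-hot vectors. Returns the quantization indices
    q(theta) and the quantized distributions theta_q (equations 1-2)."""
    dists = torch.cdist(theta, codebook)   # (B, K) Euclidean distances
    q = dists.argmin(dim=-1)               # q(theta^(i)), equation (2)
    theta_q = codebook[q]                  # theta_q^(i) = e_{q(theta^(i))}
    return q, theta_q

# Texts i and l form a positive pair iff q[i] == q[l] (equation 3),
# and a negative pair iff q[i] != q[l] (equation 4):
# pair_mask = q.unsqueeze(0) == q.unsqueeze(1)
```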
Negative Pairs for Contrastive Learning
We first explain why we need to push negative pairs away from each other. Then we propose a novel semantic-based negative sampling strategy to sample semantically effective negative pairs.
Why Negative Pairs? We also need negative pairs to sufficiently model the relations among samples. Pulling close semantically similar short texts provides additional learning signals to address data sparsity, however two texts with different semantics can sometimes be wrongly viewed as a positive pair, leading to less distinguishable representations (see Figure 1b). To mitigate this issue, we propose to find negative pairs in the data and explicitly push them away, so we can sufficiently model the relations among samples to better improve topic modeling for short texts. The use of negative pairs can also be supported from an information-theoretical perspective following Wang and Isola (2020): pushing away negative pairs facilitates uniformity, thus maximizing the mutual information of the representations of positive pairs. Otherwise, if we only pull close positive pairs, chances are high that all the representations will collapse towards each other and become less distinguishable. In a word, pulling close positive pairs and pushing away negative pairs are both vital for better representations and topic distributions, and they together justify the use of contrastive learning to regularize the learning of short text topic models (see empirical support in Sec. 5.1 and 5.2).
Semantic-based Negative Sampling Conventional contrastive learning methods such as He et al. (2020); Chen et al. (2020) simply take different samples as negative pairs. This is reasonable in the context of long text topic modeling as different samples in a long text dataset have sufficiently various contexts to contain different topics. However, for short text topic modeling, many samples actually share similar topics as the aforementioned data characteristic. Therefore, simply taking different samples as negative pairs can wrongly push away semantically similar pairs, which hampers topic modeling performance (shown in Sec. 5.2).
To overcome this issue, we here propose a neat and novel semantic-based negative sampling strategy. Similar to our positive pair sampling strategy, we sample negative pairs according to the quantization result as in Eq. (2). Specifically, two texts are expected to contain different topic semantics if their topic distributions are quantized to different embedding vectors; thus we take such a pair of texts as a negative pair $\{x^{(i)}, x^{(j)}\}$:

$$\{x^{(i)}, x^{(j)}\} \quad \text{where } q(\theta^{(j)}) \neq q(\theta^{(i)}). \quad (4)$$
Our negative sampling strategy better aligns with the characteristic of short texts, and does not introduce complicated preprocessing steps or additional modules, which simplifies the architecture and eases computational cost.
Topic-Semantic Contrastive Objective
We have positive and negative pairs through our sampling strategies defined in Eq. (3) and Eq. (4). Now as illustrated in Figure 2, we formulate our topic-semantic contrastive (TSC) objective following Van den Oord et al. (2018):
$$\mathcal{L}_{\mathrm{TSC}}(x^{(i)}) = -\log \frac{\exp(g(h^{(i)}, h^{(\ell)}))}{\sum_{j} \exp(g(h^{(i)}, h^{(j)}))}, \quad (5)$$

where $j \in \{j' \mid q(\theta^{(j')}) \neq q(\theta^{(i)})\}$ and $\ell \in \{\ell' \mid q(\theta^{(\ell')}) = q(\theta^{(i)})\}$.
In Eq. (5), $g(\cdot, \cdot)$ can be any score function to measure the similarity between two representations, and we follow Wu et al. (2018) to employ the cosine similarity as $g(a, b) = \cos(a, b)/\tau$, where $\tau$ is a hyper-parameter controlling the scale of the score. This objective pulls close the representations of positive pairs $(h^{(i)}, h^{(\ell)})$ and pushes away the representations of negative pairs $(h^{(i)}, h^{(j)})$. It thus provides more learning signals to topic modeling by correctly capturing the relations among samples, which alleviates the data sparsity issue.
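Under the quantization above, the objective in equation (5) can be sketched as follows; this simplified batched version and its masking conventions are our own illustrative choices.

```python
import torch

def tsc_loss(h, q, tau=0.5):
    """h: (B, D) representations; q: (B,) quantization indices from equation (2).
    Positives share an index with the anchor (equation 3); negatives do not
    (equation 4). Implements equation (5) for every anchor in the batch."""
    h = torch.nn.functional.normalize(h, dim=-1)
    sim = h @ h.t() / tau                        # g(a, b) = cos(a, b) / tau
    same = q.unsqueeze(0) == q.unsqueeze(1)
    eye = torch.eye(len(q), dtype=torch.bool, device=h.device)
    pos_mask, neg_mask = same & ~eye, ~same
    losses = []
    for i in range(len(q)):
        if pos_mask[i].any() and neg_mask[i].any():
            denom = torch.logsumexp(sim[i][neg_mask[i]], dim=0)
            # -log( exp(pos) / sum_j exp(neg_j) ), averaged over positives of i
            losses.append((denom - sim[i][pos_mask[i]]).mean())
    return torch.stack(losses).mean() if losses else sim.sum() * 0.0
```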
Topic-Semantic Contrastive Topic Model
Now we are able to combine the topic-semantic contrastive objective with the objective of short text topic modeling to formulate our Topic-Semantic Contrastive Topic Model (TSCTM).
Short Text Topic Modeling Objective We follow the framework of AutoEncoder to design our topic modeling objective. As the input short text $x^{(i)}$ is routinely transformed into Bag-of-Words, its reconstruction is modeled as sampling from a multinomial distribution: $\mathrm{Mult}(\mathrm{softmax}(\beta \theta_q^{(i)}))$ (Miao et al., 2016). Here, $\theta_q^{(i)}$ is the quantized topic distribution for reconstruction, and $\beta$ is a learnable parameter to model the topic-word distribution matrix. Then, the expected log-likelihood is proportional to $x^{(i)\top} \log(\mathrm{softmax}(\beta \theta_q^{(i)}))$ (Srivastava and Sutton, 2017). Therefore, we define the objective for short text topic modeling (TM) as:

$$\mathcal{L}_{\mathrm{TM}}(x^{(i)}) = -x^{(i)\top} \log(\mathrm{softmax}(\beta \theta_q^{(i)})) + \|\mathrm{sg}(\theta^{(i)}) - \theta_q^{(i)}\|_2 + \lambda \|\mathrm{sg}(\theta_q^{(i)}) - \theta^{(i)}\|_2, \quad (6)$$

where the first term measures the reconstruction error between the input and the reconstructed text. The last two terms minimize the distance between the topic distribution $\theta^{(i)}$ and the quantized topic distribution $\theta_q^{(i)}$, the second weighted by $\lambda$ (van den Oord and Vinyals, 2017). Here $\mathrm{sg}(\cdot)$ denotes a stop-gradient operation that prevents gradients from back-propagating to its inputs.
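A sketch of equation (6) with the stop-gradient implemented via detach is shown below; the batched shapes are our own assumptions.

```python
import torch

def tm_loss(x, theta, theta_q, beta, lam=0.25):
    """x: (B, V) Bag-of-Words input; theta, theta_q: (B, K); beta: (V, K).
    Implements equation (6): reconstruction error plus the two quantization
    terms; lam is the weight lambda in equation (6)."""
    recon = torch.log_softmax(beta @ theta_q.t(), dim=0).t()   # (B, V)
    rec_loss = -(x * recon).sum(dim=-1)
    commit = ((theta.detach() - theta_q) ** 2).sum(dim=-1)     # sg(theta) - theta_q
    code = ((theta_q.detach() - theta) ** 2).sum(dim=-1)       # sg(theta_q) - theta
    return (rec_loss + commit + lam * code).mean()
```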
Overall Learning Objective of TSCTM The overall learning objective of TSCTM is a combination of Eq. (6) and Eq. (5):

$$\mathcal{L}_{\mathrm{TM}}(x^{(i)}) + \lambda_{\mathrm{TSC}} \mathcal{L}_{\mathrm{TSC}}(x^{(i)}), \quad (7)$$

where $\lambda_{\mathrm{TSC}}$ is a hyper-parameter controlling the weight of the topic-semantic contrastive objective. This learning objective can learn meaningful representations from data and further refine the representations through modeling the relations among samples to enrich learning signals, which mitigates the data sparsity issue and improves the topic modeling performance on short texts.
Learning with Data Augmentation
In this section, we adapt our Topic-Semantic Contrastive Topic Model to the case where data augmentation is available to fully utilize the introduced augmentations.

Incorporating Data Augmentation Let $x^{(i)}_+$ denote one augmented view of $x^{(i)}$. As our augmentation techniques can ensure that $x^{(i)}$ and $x^{(i)}_+$ share similar topic semantics as much as possible (details about how we augment data will be introduced in Sec. 4.2), we explicitly consider $x^{(i)}$ and $x^{(i)}_+$ as a positive pair. Besides, we consider $x^{(i)}$ and $x^{(j)}_+$ as a negative pair if $x^{(i)}$ and $x^{(j)}$ are so. This is because if $x^{(i)}$ and $x^{(j)}$ possess dissimilar topic semantics, then $x^{(i)}$ and $x^{(j)}_+$ should as well. Taking these two points into consideration, as shown in Figure 2, we formulate our topic-semantic contrastive objective with data augmentation as
$$\mathcal{L}_{\mathrm{TSC}}(x^{(i)}, x^{(i)}_+) = -\log \frac{\exp(g(h^{(i)}, h^{(i)}_+))}{D} + \lambda_{\mathrm{original}} \left(-\log \frac{\exp(g(h^{(i)}, h^{(\ell)}))}{D}\right), \quad (8)$$
$$D = \sum_{j} \left[\exp(g(h^{(i)}, h^{(j)})) + \exp(g(h^{(i)}, h^{(j)}_+))\right],$$

where $j \in \{j' \mid q(\theta^{(j')}) \neq q(\theta^{(i)})\}$ and $\ell \in \{\ell' \mid q(\theta^{(\ell')}) = q(\theta^{(i)})\}$.
Here $\lambda_{\mathrm{original}}$ is a weight hyper-parameter of the contrastive objective for the positive pairs in the original dataset. Compared to Eq. (5), this formulation additionally incorporates the relation between the positive pair $x^{(i)}, x^{(i)}_+$ as well as the negative pairs brought by augmented samples.

Overall Learning Objective of TSCTM with Data Augmentation Combining Eq. (6) with augmented data and Eq. (8), we are able to formulate the final learning objective of TSCTM with data augmentation as follows:

$$\mathcal{L}_{\mathrm{TM}}(x^{(i)}) + \mathcal{L}_{\mathrm{TM}}(x^{(i)}_+) + \lambda_{\mathrm{TSC}} \mathcal{L}_{\mathrm{TSC}}(x^{(i)}, x^{(i)}_+), \quad (9)$$

where we jointly reconstruct the positive pair $x^{(i)}, x^{(i)}_+$ and employ the topic-semantic contrastive objective with augmented samples. Accordingly, our method smoothly adapts to the case with data augmentation.
Experimental Setting
In this section, we conduct comprehensive experiments to show the effectiveness of our method.
Datasets
We employ the following benchmark short text datasets in our experiments: (i) TagMyNews title contains news titles released by Vitale et al.
with 7 annotated labels like "sci-tech" and "entertainment". (ii) AG News includes news divided into 4 categories like "sports" and "business" (Zhang et al., 2015). We use the subset provided by Rakib et al. (2020). (iii) Google News is from Yin and Wang (2014) with 152 categories. We preprocess datasets with the following steps (Wu et al., 2020b): (i) tokenize texts with nltk; 3 (ii) convert characters to lower cases; (iii) filter out illegal characters; (iv) remove texts with length less than 2; (v) filter out low-frequency words. The dataset statistics are reported in Table 1. 3 https://www.nltk.org/
Data Augmentation Techniques
To generate augmented texts, we follow Zhang et al. (2021) and employ two simple and effective techniques: WordNet Augmenter and Contextual Augmenter.⁴ WordNet Augmenter substitutes words in an input text with synonyms selected from the WordNet database (Ma, 2019). Contextual Augmenter leverages pre-trained language models such as BERT (Devlin et al., 2018) to find the top-n suitable words of the input text for insertion or substitution (Kobayashi, 2018; Ma, 2019). To retain the original semantics as much as possible, we only change 30% of the words and also filter low-frequency words following Zhang et al. (2021). With these augmentation techniques, we can sufficiently retain the original semantics and meanwhile bring in more word co-occurrence information to alleviate the data sparsity of short texts.
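For instance, both augmenters are available in the nlpaug toolkit (Ma, 2019); the sketch below is a minimal usage example, with the 30% substitution rate from above and the BERT model name as illustrative parameter choices.

```python
import nlpaug.augmenter.word as naw

# WordNet Augmenter: substitute ~30% of the words with WordNet synonyms.
wordnet_aug = naw.SynonymAug(aug_src="wordnet", aug_p=0.3)

# Contextual Augmenter: let a pre-trained BERT propose substitutions.
bert_aug = naw.ContextualWordEmbsAug(
    model_path="bert-base-uncased", action="substitute", aug_p=0.3)

text = "april fools' joke goes wrong for cleveland woman"
print(wordnet_aug.augment(text))  # augment() returns the augmented text
print(bert_aug.augment(text))     # (a list in recent nlpaug versions)
```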
Baseline Models
We compare our method with the following state-of-the-art baseline models: (i) ProdLDA (Srivastava and Sutton, 2017),⁵ a neural topic model based on the standard VAE with a logistic normal distribution as an approximation of the Dirichlet prior. We report experimental results in the two cases, without and with data augmentation, as follows.
Without Data Augmentation In the case without data augmentation, only original datasets are used for all models in the experiments, and our TSCTM uses Eq. (7) as the objective function. The results are reported in the upper part of Table 2. We see that TSCTM surpasses all baseline models in terms of both coherence ($C_V$) and diversity ($T_U$) under 50 and 100 topics across all datasets. Besides, it is worth mentioning that our TSCTM significantly outperforms NQTM and CLNTM. NQTM insufficiently models the relations among samples since it only considers texts with similar semantics, and CLNTM samples salient words from texts for contrastive learning, which is ineffective for short texts with limited context. In contrast, our TSCTM can discover effective samples for learning contrastively based on the topic semantics, which sufficiently models the relations among samples, thus achieving higher performance. Note that examples of discovered topics are in Appendix B. These results show that TSCTM is capable of producing higher-quality topics with better coherence and diversity.
With Data Augmentation In the case with data augmentation, we produce augmented texts to enrich datasets for all models through the techniques mentioned in Sec. 4.2, so all models are under the same data condition for fair comparisons. Note that our TSCTM uses Eq. (9) as the objective function in this case. The results are summarized in the lower part of Table 2. We have the following observations: (i) Data augmentation can mitigate the data sparsity issue of short text topic modeling to some extent. (ii) Our TSCTM consistently achieves better topic quality performance. As shown in Table 2, we see that TSCTM reaches the best $C_V$ and $T_U$ scores compared to baseline models under 50 and 100 topics. This shows that our method can better leverage augmentations through the new topic-semantic contrastive learning to further alleviate data sparsity and improve short text topic modeling.
The above results demonstrate that TSCTM can adapt to both cases, with or without data augmentation, effectively overcoming the data sparsity challenge and producing higher-quality topics.
Ablation Study
We conduct an ablation study that demonstrates the effectiveness and necessity of our topic-semantic contrastive learning method. As shown in Table 3, our TSCTM significantly outperforms the traditional contrastive learning (Chen et al., 2020) (w/ traditional contrastive). This shows the effectiveness of our novel topic-semantic contrastive learning with the new positive and negative sampling strategies. Besides, without modeling negative pairs (w/o negative pairs), both the coherence (C_V) and diversity (T_U) performance greatly degrade, e.g., from 0.479 to 0.397 and from 0.969 to 0.503 on AG News. This is because only modeling positive pairs makes the representations collapse together and become less distinguishable, which hinders the learning of topics and leads to repetitive and less coherent topics (see also Sec. 3.2.3). Moreover, Table 3 shows that the coherence performance is hampered in the case without positive pairs (w/o positive pairs). The reason is that the method cannot capture the relations between positive pairs to further refine representations, and thus the inferred topics become less coherent. These results show the effectiveness and necessity of the positive and negative sampling strategies of our topic-semantic contrastive learning method.
Short Text Clustering
Apart from topic quality, we evaluate the quality of inferred topic distributions through short text clustering following Wang et al. (2022). Specifically, we use the most significant topic in the learned topic distribution of each short text as its cluster assignment. Then, we employ the commonly-used clustering metrics, Purity and NMI (Manning et al., 2008), to measure the clustering performance as in Wang et al. (2022). Note that our goal is not to achieve state-of-the-art clustering performance but to compare the quality of learned topic distributions. Table 4 shows that the clustering performance of our model is generally the best over baseline models concerning both Purity and NMI. This demonstrates that our model can infer more accurate topic distributions of short texts.
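The clustering protocol can be sketched as follows (scikit-learn is an assumption; the text does not name an implementation).

```python
import numpy as np
from sklearn.metrics import normalized_mutual_info_score

def cluster_metrics(theta, labels):
    """theta: (N, K) topic distributions; labels: (N,) gold classes.
    The most significant (argmax) topic of each text is its cluster."""
    theta, labels = np.asarray(theta), np.asarray(labels)
    clusters = theta.argmax(axis=1)
    nmi = normalized_mutual_info_score(labels, clusters)
    # Purity: each cluster is credited with its majority gold class
    purity = sum(np.bincount(labels[clusters == c]).max()
                 for c in np.unique(clusters)) / len(labels)
    return purity, nmi
```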
Short Text Classification
In order to compare extrinsic performance, we conduct text classification experiments as a downstream task of topic models (Nguyen and Luu, 2021). In detail, we use the topic distributions learned by different models as features and train SVM classifiers to predict the class of each short text, using the labels from the adopted datasets. Figure 3 shows that our TSCTM consistently achieves the best classification performance compared to baseline models. Note that the p-values of significance tests are all less than 0.05. This shows that the learned topic distributions of our model are more discriminative and accordingly can be better employed in the text classification downstream task.
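A minimal sketch of this classification protocol (LinearSVC and its default settings are our assumptions):

```python
from sklearn.svm import LinearSVC
from sklearn.metrics import accuracy_score

def classify_with_topics(theta_train, y_train, theta_test, y_test):
    # Topic distributions serve as the only document features
    clf = LinearSVC().fit(theta_train, y_train)
    return accuracy_score(y_test, clf.predict(theta_test))
```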
Analysis of Topic Distributions
In this section, we analyze the learned topic distributions of short texts to evaluate the modeling of relations among samples. Figure 4 illustrates the t-SNE (van der Maaten and Hinton, 2008) visualization of the learned topic distributions of original and augmented short texts by ProdLDA, NQTM, and our TSCTM. It shows that the topic distributions learned by our TSCTM are better aggregated and well separately scattered in the space, whether considering only original short texts or both original and augmented short texts. In addition, we report the cosine similarity between the topic distributions of original and augmented short texts in Table 5. Their similarity should be high since they have similar semantics. We see that TSCTM has the highest similarity among all models. This is because TSCTM can sufficiently model the relations among samples with the novel topic-semantic contrastive learning, which refines the representations and thus the topic distributions. These results further demonstrate the effectiveness of our proposed topic-semantic contrastive learning method.
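The similarity analysis in Table 5 corresponds to the following computation (a sketch; averaging over original-augmented pairs is our assumption):

```python
import numpy as np

def mean_pair_similarity(theta_orig, theta_aug):
    """Average cosine similarity between the topic distribution of each
    original short text and that of its augmented view."""
    a, b = np.asarray(theta_orig), np.asarray(theta_aug)
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return float((a * b).sum(axis=1).mean())
```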
Conclusion
In this paper, we propose TSCTM, a novel and unified method for topic modeling of short texts. With the novel topic-semantic contrastive learning, our method can refine the learning of representations through sufficiently modeling the relations among texts, regardless of the availability of data augmentation. Experiments show that our model effectively alleviates the data sparsity issue and consistently outperforms state-of-the-art baselines, generating high-quality topics and deriving useful topic distributions of short texts.
Limitations
Our method achieves promising performance in mitigating data sparsity for short text topic modeling, but we believe there are two limitations to be explored in future work: (i) More data augmentation techniques may be studied to further improve short text topic modeling performance. (ii) The possible metadata of short texts, like authors, hashtags, and sentiments, can be considered to further assist the modeling of relations.
Jordan L Boyd-Graber, Yuening Hu, David Mimno, et al. 2017. Applications of topic models, volume 11. now Publishers Incorporated.
Table 6: Top 10 related words of discovered topics from Google News. Repetitive words are underlined.

ProdLDA:
perrish apps chart giraffe cleared lash mary tyrese fill fundamentalist
blog mistake duel reduce sleet giraffe animation tradition stress freezing
major giraffe offence moment halo lifetime jim sharing draft congo

NQTM:
kanye west confirms yeezus adidas leaf album rant concert kravitz
kim west kanye invited brody jenner kardashian wedding beautiful invite
kanye james video west bound recreate kimye franco shot music

TSCTM:
giraffe congo poaching forgotten habitat ape okapi bonobo specie endangered
frozen disney animation idina menzel kristen animated melt fairy bell
adidas nike partnership summer lenny kravitz confirms cruel kanye album
A Model Implementation
We conduct experiments on an NVIDIA GPU, and it takes less than 0.5 GPU hours to train our model on each dataset. For our model, the encoder network f_Θ is a two-layer MLP with softplus as the activation function, the same as Wu et al. (2020b), and we use Adam (Kingma and Ba, 2014) to optimize the model parameters. We run our model for 200 epochs with a learning rate of 0.002 following Srivastava and Sutton (2017), and set λ to 0.1 following van den Oord and Vinyals (2017).
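A minimal PyTorch sketch of this encoder and optimizer configuration (the hidden size of 200 is an illustrative assumption):

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Two-layer MLP encoder f_Theta with softplus activations, mapping
    a bag-of-words vector to a K-dimensional topic representation."""
    def __init__(self, vocab_size, num_topics, hidden=200):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden), nn.Softplus(),
            nn.Linear(hidden, num_topics), nn.Softplus(),
        )

    def forward(self, bow):
        return self.net(bow)

model = Encoder(vocab_size=5000, num_topics=50)
optimizer = torch.optim.Adam(model.parameters(), lr=0.002)
```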
B Examples of Discovered Topics
Following Nan et al. (2019) and Wu et al. (2020b), we randomly select some examples of topics discovered by ProdLDA, NQTM, and our TSCTM from Google News for a qualitative study, since the former two perform relatively better among the baselines. As shown in Table 6, ProdLDA produces several redundant topics including "giraffe", and these topics are less informative as they are associated with irrelevant words like "fundamentalist" and "animation". NQTM also has repetitive topics about "kanye". In contrast, our TSCTM generates only one coherent topic for each of "animation", "kanye", and "giraffe", with relevant words. For example, the topic of TSCTM is more focused on "animation" with "disney", the movie name "frozen", and its theme song singer "idina menzel".
Figure 1: (a) Examples of short texts from the TagMyNews title dataset. Text x(i)+ is an augmented view of x(i); x(i) and x(l) are relevant, while x(i) and x(j) are irrelevant. (b, c) Heat maps of cosine similarity between learned topic distributions. The similarities of our TSCTM are more reasonable than those of NQTM.
Figure 2: Illustration of the proposed topic-semantic contrastive learning. It refines the learning of representations through modeling the relations of samples according to their topic semantics (only solid-line circles exist when without data augmentation).
Figure 3: Text classification results with topic distributions learned by topic models.
Figure 4: t-SNE visualization of learned topic distributions of original and augmented (•) short texts. Compared to ProdLDA and NQTM, the points of TSCTM are better aggregated and separately scattered in the space.
Table 1: Dataset statistics.
Table 2: Topic coherence (C_V) and diversity (T_U) results under 50 and 100 topics (K=50 and K=100). Without Data Augmentation means only the original datasets are used, and With Data Augmentation means the augmented texts are used to enrich the datasets for each model, so all models are evaluated under the same data conditions in both scenarios. The best scores are in bold.
Table 3: Ablation study of removing positive and negative pairs in TSCTM (w/o negative pairs and w/o positive pairs), and using the traditional contrastive loss (w/ traditional contrastive). The best scores are in bold.
Table 4: Text clustering results of Purity and NMI. The best scores of each dataset are highlighted in bold.

Model    | TagMyNews title | AG News        | Google News
         | Purity   NMI    | Purity   NMI   | Purity   NMI
ProdLDA  | 0.260    0.002  | 0.773    0.267 | 0.089    0.137
WLDA     | 0.363    0.058  | 0.583    0.148 | 0.411    0.608
CLNTM    | 0.266    0.008  | 0.408    0.097 | 0.099    0.136
NQTM     | 0.595    0.231  | 0.800    0.310 | 0.555    0.753
WeTe     | 0.487    0.180  | 0.713    0.307 | 0.301    0.560
TSCTM    | 0.610    0.239  | 0.811    0.317 | 0.563    0.766
Table 5: Cosine similarity between topic distributions of original and augmented short texts. The highest are in bold.
Dallas Card, Chenhao Tan, and Noah A Smith. 2018. Neural models for documents with metadata. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2031-2040.
Guan-Bin Chen and Hung-Yu Kao. 2015. Word co-occurrence augmented topic model in short text. In International Journal of Computational Linguistics & Chinese Language Processing, Volume 20, Number 2.
Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. 2020. A simple framework for contrastive learning of visual representations. In International Conference on Machine Learning, pages 1597-1607. PMLR.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Adji B Dieng, Francisco JR Ruiz, and David M Blei. 2020. Topic modeling in embedding spaces. Transactions of the Association for Computational Linguistics, 8:439-453.
Nicholas Frosst, Nicolas Papernot, and Geoffrey Hinton. 2019. Analyzing and improving representations with the soft nearest neighbor loss. In International Conference on Machine Learning, pages 2012-2020. PMLR.
Tianyu Gao, Xingcheng Yao, and Danqi Chen. 2021. SimCSE: Simple contrastive learning of sentence embeddings. arXiv preprint arXiv:2104.08821.
Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(suppl 1):5228-5235.
Raia Hadsell, Sumit Chopra, and Yann LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742. IEEE.
Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9729-9738.
R Devon Hjelm, Alex Fedorov, Samuel Lavoie-Marchildon, Karan Grewal, Phil Bachman, Adam Trischler, and Yoshua Bengio. 2018. Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670.
Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, pages 289-296. Morgan Kaufmann Publishers Inc.
Ou Jin, Nathan N Liu, Kai Zhao, Yong Yu, and Qiang Yang. 2011. Transferring topical knowledge from auxiliary long texts for short text clustering. In Proceedings of the 20th ACM International Conference on Information and Knowledge Management, pages 775-784. ACM.
Yishu Miao, Edward Grefenstette, and Phil Blunsom. 2017. Discovering discrete latent topics with neural variational inference. In Proceedings of the 34th International Conference on Machine Learning, pages 2410-2419. JMLR.org.
Yishu Miao, Lei Yu, and Phil Blunsom. 2016. Neural variational inference for text processing. In International Conference on Machine Learning, pages 1727-1736.
Feng Nan, Ran Ding, Ramesh Nallapati, and Bing Xiang. 2019. Topic modeling with Wasserstein autoencoders. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6345-6381, Florence, Italy. Association for Computational Linguistics.
Thong Nguyen and Anh Tuan Luu. 2021. Contrastive learning for neural topic model. Advances in Neural Information Processing Systems, 34.
Hyun Oh Song, Yu Xiang, Stefanie Jegelka, and Silvio Savarese. 2016. Deep metric learning via lifted structured feature embedding. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4004-4012.
Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th International Conference on World Wide Web, pages 91-100. ACM.
Jipeng Qiang, Zhenyu Qian, Yun Li, Yunhao Yuan, and Xindong Wu. 2020. Short text topic modeling techniques, applications, and performance: A survey. IEEE Transactions on Knowledge and Data Engineering.
Md Rashadul Hasan Rakib, Norbert Zeh, Magdalena Jankowska, and Evangelos Milios. 2020. Enhancement of short text clustering by iterative classification. In International Conference on Applications of Natural Language to Information Systems, pages 105-117. Springer.
Danilo Jimenez Rezende, Shakir Mohamed, and Daan Wierstra. 2014. Stochastic backpropagation and approximate inference in deep generative models. In Proceedings of the 31st International Conference on Machine Learning.
Michael Röder, Andreas Both, and Alexander Hinneburg. 2015. Exploring the space of topic coherence measures. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 399-408. ACM.
Akash Srivastava and Charles Sutton. 2017. Autoencoding variational inference for topic models. In ICLR.
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. 2018. Wasserstein auto-encoders. In International Conference on Learning Representations.
Aaron van den Oord, Yazhe Li, and Oriol Vinyals. 2018. Representation learning with contrastive predictive coding. arXiv preprint arXiv:1807.03748.
Aaron van den Oord and Oriol Vinyals. 2017. Neural discrete representation learning. In Advances in Neural Information Processing Systems, pages 6306-6315.
Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579-2605.
Daniele Vitale, Paolo Ferragina, and Ugo Scaiella. 2012. Classification of short texts by deploying topical annotations. In European Conference on Information Retrieval, pages 376-387. Springer.
Dongsheng Wang, Dandan Guo, He Zhao, Huangjie Zheng, Korawat Tanwisuth, Bo Chen, and Mingyuan Zhou. 2022. Representing mixtures of word embeddings with mixtures of topic embeddings. In International Conference on Learning Representations.
Tongzhou Wang and Phillip Isola. 2020. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. In International Conference on Machine Learning, pages 9929-9939. PMLR.
Xun Wang, Xintong Han, Weilin Huang, Dengke Dong, and Matthew R Scott. 2019. Multi-similarity loss with general pair weighting for deep metric learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5022-5030.
Yiming Wang, Ximing Li, Xiaotang Zhou, and Jihong Ouyang. 2021. Extracting topics with simultaneous word co-occurrence and semantic correlation graphs: Neural topic modeling for short texts. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 18-27.
Xiaobao Wu and Chunping Li. 2019. Short text topic modeling with flexible word patterns. In International Joint Conference on Neural Networks.
Xiaobao Wu, Chunping Li, and Yishu Miao. 2021. Discovering topics in long-tailed corpora with causal intervention. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 175-185, Online. Association for Computational Linguistics.
Xiaobao Wu, Chunping Li, Yan Zhu, and Yishu Miao. 2020a. Learning multilingual topics with neural variational inference. In International Conference on Natural Language Processing and Chinese Computing.
Our code is available at https://github.com/bobxwu/TSCTM.
Acknowledgement

We want to thank all anonymous reviewers for their helpful comments.
David M Blei, Alp Kucukelbir, and Jon D McAuliffe. 2017. Variational inference: A review for statisticians. Journal of the American Statistical Association, 112(518):859-877.
David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022.
Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, and Dilip Krishnan. 2020. Supervised contrastive learning. arXiv preprint arXiv:2004.11362.
Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
Diederik P Kingma and Max Welling. 2014. Auto-encoding variational Bayes. In The International Conference on Learning Representations (ICLR).
Sosuke Kobayashi. 2018. Contextual augmentation: Data augmentation by words with paradigmatic relations. In NAACL-HLT (2).
Moontae Lee, David Bindel, and David Mimno. 2020. Prior-aware composition inference for spectral topic models. In International Conference on Artificial Intelligence and Statistics, pages 4258-4268. PMLR.
Chenliang Li, Haoran Wang, Zhiqian Zhang, Aixin Sun, and Zongyang Ma. 2016. Topic modeling for short texts with auxiliary word embeddings. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 165-174. ACM.
Ximing Li, Yang Wang, Jihong Ouyang, and Meng Wang. 2021. Topic extraction from extremely short texts with variational manifold regularization. Machine Learning, 110(5):1029-1066.
Edward Ma. 2019. NLP augmentation.
Zongyang Ma, Aixin Sun, Quan Yuan, and Gao Cong. 2012. Topic-driven reader comments summarization. In Proceedings of the 21st ACM International Conference on Information and Knowledge Management, pages 265-274. ACM.
Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA.
Rishabh Mehrotra, Scott Sanner, Wray Buntine, and Lexing Xie. 2013. Improving LDA topic models for microblogs via tweet pooling and automatic labeling. In Proceedings of the 36th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 889-892. ACM.
Xiaobao Wu, Chunping Li, Yan Zhu, and Yishu Miao. 2020b. Short text topic modeling with topic distribution quantization and negative sampling decoder. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1772-1782, Online.
Zhirong Wu, Yuanjun Xiong, Stella X Yu, and Dahua Lin. 2018. Unsupervised feature learning via non-parametric instance discrimination. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3733-3742.
Enze Xie, Jian Ding, Wenhai Wang, Xiaohang Zhan, Hang Xu, Peize Sun, Zhenguo Li, and Ping Luo. 2021. DetCo: Unsupervised contrastive learning for object detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 8392-8401.
Xiaohui Yan, Jiafeng Guo, Yanyan Lan, and Xueqi Cheng. 2013. A biterm topic model for short texts. In Proceedings of the 22nd International Conference on World Wide Web, pages 1445-1456. ACM.
Jianhua Yin and Jianyong Wang. 2014. A Dirichlet multinomial mixture model-based approach for short text clustering. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 233-242. ACM.
Dejiao Zhang, Feng Nan, Xiaokai Wei, Shang-Wen Li, Henghui Zhu, Kathleen McKeown, Ramesh Nallapati, Andrew O Arnold, and Bing Xiang. 2021. Supporting clustering with contrastive learning. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5419-5430.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649-657.
Xiangyun Zhao, Raviteja Vemulapalli, Philip Andrew Mansfield, Boqing Gong, Bradley Green, Lior Shapira, and Ying Wu. 2021. Contrastive learning for label efficient semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 10623-10633.
| [
"https://github.com/nguyentthong/CLNTM",
"https://github.com/bobxwu/NQTM",
"https://github.com/wds2014/WeTe",
"https://github.com/dice-group/",
"https://github.com/makcedward/nlpaug",
"https://github.com/akashgit/"
] |
[
"MASKER: Masked Keyword Regularization for Reliable Text Classification",
"MASKER: Masked Keyword Regularization for Reliable Text Classification"
] | [
"Seung Jun Moon \nKorea Advanced Institute of Science and Technology\nSouth Korea\n",
"Sangwoo Mo ",
"Kimin Lee \nKorea Advanced Institute of Science and Technology\nSouth Korea\n\nUniversity of California\nBerkeleyUSA\n",
"† ",
"Jaeho Lee \nKorea Advanced Institute of Science and Technology\nSouth Korea\n",
"Jinwoo Shin jinwoos@kaist.ac.krkiminlee@berkeley.edu \nKorea Advanced Institute of Science and Technology\nSouth Korea\n"
] | [
"Korea Advanced Institute of Science and Technology\nSouth Korea",
"Korea Advanced Institute of Science and Technology\nSouth Korea",
"University of California\nBerkeleyUSA",
"Korea Advanced Institute of Science and Technology\nSouth Korea",
"Korea Advanced Institute of Science and Technology\nSouth Korea"
] | [] | Pre-trained language models have achieved state-of-the-art accuracies on various text classification tasks, e.g., sentiment analysis, natural language inference, and semantic textual similarity. However, the reliability of the fine-tuned text classifiers is an often underlooked performance criterion. For instance, one may desire a model that can detect out-of-distribution (OOD) samples (drawn far from training distribution) or be robust against domain shifts. We claim that one central obstacle to the reliability is the over-reliance of the model on a limited number of keywords, instead of looking at the whole context. In particular, we find that (a) OOD samples often contain indistribution keywords, while (b) cross-domain samples may not always contain keywords; over-relying on the keywords can be problematic for both cases. In light of this observation, we propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), that facilitates context-based prediction. MASKER regularizes the model to reconstruct the keywords from the rest of the words and make low-confidence predictions without enough context. When applied to various pre-trained language models (e.g., BERT, RoBERTa, and ALBERT), we demonstrate that MASKER improves OOD detection and cross-domain generalization without degrading classification accuracy. | 10.1609/aaai.v35i15.17601 | [
"https://arxiv.org/pdf/2012.09392v1.pdf"
] | 229,298,052 | 2012.09392 | bdfe6051558414589f8b8b2e0fea596833e845bb |
MASKER: Masked Keyword Regularization for Reliable Text Classification
Seung Jun Moon
Korea Advanced Institute of Science and Technology
South Korea
Sangwoo Mo
Kimin Lee
Korea Advanced Institute of Science and Technology
South Korea
University of California
Berkeley, USA
†
Jaeho Lee
Korea Advanced Institute of Science and Technology
South Korea
Jinwoo Shin jinwoos@kaist.ac.kr, kiminlee@berkeley.edu
Korea Advanced Institute of Science and Technology
South Korea
MASKER: Masked Keyword Regularization for Reliable Text Classification
Pre-trained language models have achieved state-of-the-art accuracies on various text classification tasks, e.g., sentiment analysis, natural language inference, and semantic textual similarity. However, the reliability of the fine-tuned text classifiers is an often underlooked performance criterion. For instance, one may desire a model that can detect out-of-distribution (OOD) samples (drawn far from training distribution) or be robust against domain shifts. We claim that one central obstacle to the reliability is the over-reliance of the model on a limited number of keywords, instead of looking at the whole context. In particular, we find that (a) OOD samples often contain indistribution keywords, while (b) cross-domain samples may not always contain keywords; over-relying on the keywords can be problematic for both cases. In light of this observation, we propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), that facilitates context-based prediction. MASKER regularizes the model to reconstruct the keywords from the rest of the words and make low-confidence predictions without enough context. When applied to various pre-trained language models (e.g., BERT, RoBERTa, and ALBERT), we demonstrate that MASKER improves OOD detection and cross-domain generalization without degrading classification accuracy.
Introduction
Text classification (Aggarwal and Zhai 2012) is a classic yet challenging problem in natural language processing (NLP), having a broad range of applications, including sentiment analysis (Bakshi et al. 2016), natural language inference (Bowman et al. 2015), and semantic textual similarity (Agirre et al. 2012). Recently, Devlin et al. (2019) have shown that fine-tuning a pre-trained language model can achieve state-of-the-art performances on various text classification tasks without any task-specific architectural adaptations. Thereafter, numerous pre-training and fine-tuning strategies to improve the classification accuracy further have been proposed (Liu et al. 2019; Lan et al. 2020; Sanh et al. 2019; Clark et al. 2020; Sun et al. 2019; Mosbach, Andriushchenko, and Klakow 2020; Zhang et al. 2020). However, a vast majority of the works have focused on evaluating the accuracy of the models only and overlooked their reliability (Hendrycks et al. 2020), e.g., robustness to out-of-distribution (OOD) samples drawn far from the training data (or in-distribution samples).

Figure 1: Out-of-distribution (OOD) and cross-domain examples, where class 'Apple' is the original domain. The OOD sample contains the word 'apple' (red) but in a different context. The cross-domain sample does not share the words (e.g., 'Tim Cook') with the original domain, but it still contains some clues (yellow) to guess the correct class.
While pre-trained language models are known to be robust in some sense (Hendrycks et al. 2020), we find that fine-tuned models suffer from the over-reliance problem, i.e., making predictions based on only a limited number of domain-specific keywords instead of looking at the whole context. For example, consider a classification task of 'Apple' visualized in Figure 1. If most in-distribution samples contain the keyword 'Apple,' the fine-tuned model can predict the class solely based on the existence of the keyword. However, a reliable classifier should detect that the sentence "I ate an apple this morning" is an out-of-distribution sample (Hendrycks and Gimpel 2017; Shu, Xu, and Liu 2017; Tan et al. 2019). On the other hand, the sentence "Tim Cook said that . . . " should be classified as the topic 'Apple' although it does not contain the keyword 'Apple' and the keyword 'Tim Cook' is not contained in the training samples. In other words, the reliable classifier should learn decision rules that generalize across domains (Fei and Liu 2015; Bhatt, Semwal, and Roy 2015; Bhatt, Sinha, and Roy 2016). This problematic phenomenon frequently happens in real-world datasets. To verify this, we extract the keywords from the Amazon 50 class reviews (Chen and Liu 2014) dataset and sentiment analysis datasets (IMDB (Maas et al. 2011); SST-2 (Socher et al. 2013); Fine Food (McAuley and Leskovec 2013)), following the attention-based scheme illustrated in Section 2.1. Figure 2 shows the frequency of the keywords selected from the source class in the target class. Figure 2a shows that the keywords are often strongly tied with the class, which leads the model to learn a shortcut instead of the context. Figure 2b shows the results where the source and target classes are different classes of the Amazon reviews dataset. Here, OOD classes often contain the same keywords as the in-distribution classes, e.g., the class 'Autos' contains the same keywords as the class 'Iron.' On the other hand, Figure 2c shows the results where both source and target classes are sentiment ('pos' and 'neg') classes in the IMDB, SST-2, and Fine Food datasets. While the same sentiment shares the same keywords, the alignment is not perfect; e.g., 'IMDB (neg)' and 'Food (pos)' contain the same keywords.
Contribution
We propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), which handles the over-reliance (on keywords) problem and facilitates context-based prediction. In particular, we introduce two regularization techniques: (a) masked keyword reconstruction and (b) masked entropy regularization. First, (a) forces the model to predict the masked keywords from understanding the context around them. This is inspired by masked language modeling from BERT (Devlin et al. 2019), which is known to be helpful for learning context. Second, (b) penalizes making high-confidence predictions from "cut-out-context" sentences, in which non-keywords are randomly dropped, in a similar manner to Cutout (DeVries and Taylor 2017) used for regularizing image classification models. We also suggest two keyword selection schemes, relying on dataset statistics and attention scores, respectively. We remark that all proposed techniques of MASKER can be applied in an unsupervised manner.
We demonstrate that MASKER, applied to the pre-trained language models: BERT (Devlin et al. 2019), RoBERTa (Liu et al. 2019), and ALBERT (Lan et al. 2020), significantly improves the OOD detection and cross-domain generalization performance, without degrading the classification accuracy. We conduct OOD detection experiments on 20 Newsgroups (Lang 1995), Amazon 50 class reviews (Chen and Liu 2014), Reuters (Lewis et al. 2004), IMDB (Maas et al. 2011), SST-2 (Socher et al. 2013), and Fine Food (McAuley and Leskovec 2013) datasets, and cross-domain generalization experiments on sentiment analysis (Maas et al. 2011;Socher et al. 2013;McAuley and Leskovec 2013), natural language inference (Williams, Nangia, and Bowman 2017), and semantic textual similarity tasks. In particular, our method improves the area under receiver operating characteristic (AUROC) of BERT from 87.0% to 98.6% for OOD detection under 20 Newsgroups to SST-2 task, and reduce the generalization gap from 19.2% to 10.9% for cross-domain generalization under Fine Food to IMDB task.
Related Work
Distribution shift in NLP. The reliable text classifier should detect distribution shift, i.e., test distribution is different from the training distribution. However, the most common scenarios: OOD detection and cross-domain generalization are relatively under-explored in NLP domains (Hendrycks et al. 2020;Marasović 2018). Hendrycks et al. (2020) found that pre-trained models are robust to the distribution shift compared to traditional NLP models. We find that the pre-trained models are not robust enough, and we empirically show that pre-trained models are still relying on undesirable dataset bias. Our method further improves the generalization performance, applied to the pre-trained models.
Shortcut bias. One may interpret the over-reliance problem as a type of shortcut bias (Geirhos et al. 2020), i.e., the model learns an easy-to-learn but not generalizable solution, as the keywords can be considered as a shortcut. The shortcut bias is investigated under various NLP tasks (Sun et al. 2019), e.g., natural language inference (McCoy, Pavlick, and Linzen 2019), reasoning comprehension (Niven and Kao 2019), and question answering (Min et al. 2019). To our best knowledge, we are the first to point out that the over-reliance on keywords can also be a shortcut, especially for text classification. We remark that the shortcut bias is not always harmful as it can be a useful feature for in-distribution accuracy. However, we claim that they can be problematic for unexpected (i.e., OOD) samples, as demonstrated in our experiments.
Debiasing methods. Numerous debiasing techniques have been proposed to regularize shortcuts, e.g., careful data collection (Choi et al. 2018;Reddy, Chen, and Manning 2019), bias-tailored architecture (Agrawal et al. 2018), and adversarial regularization (Clark, Yatskar, and Zettlemoyer 2019;Minderer et al. 2020;Nam et al. 2020). However, most prior work requires supervision of biases, i.e., the shortcuts are explicitly given. In contrast, our method can be viewed as an unsupervised debiasing method, as our keyword selection schemes automatically select the keywords.
Masked Keyword Regularization
We first introduce our notation and architecture setup; then propose the keyword selection and regularization approaches in Section 2.1 and Section 2.2, respectively.
Notation. The text classifier f : x → y maps a document x to the corresponding class y ∈ {1, . . . , C}. The document x is a sequence of tokens t_i ∈ V, i.e., x = [t_1, . . . , t_T], where V is the vocabulary set and T is the length of the document. Here, the full corpus D = {(x, y)} is a collection of all documents, and the class-wise corpus D_c = {(x, y) ∈ D | y = c} is the subset of D of class c. The keyword set K ⊂ V is the set of vocabularies which most affect the prediction.1 The keyword k = [k_1, . . . , k_L] of the document x is given by k = [t_i ∈ x | t_i ∈ K], where L ≤ T is the number of keywords in the document x.

1 Chosen by our proposed keyword selection (Section 2.1).
Architecture. We assume the pre-trained language model follows the bi-directional Transformer (Vaswani et al. 2017) architecture, widely used in recent days (Devlin et al. 2019; Liu et al. 2019; Lan et al. 2020). Such models consist of three components: an embedding network, a document classifier, and a token-wise classifier. Given document x, the embedding network produces (a) a document embedding (for the entire document) and (b) token embeddings, which correspond to each input token. The document and token-wise classifiers predict the class of the document and tokens, respectively, from the corresponding embeddings. For the sake of simplicity, we omit the shared embedding network and denote the document and token-wise classifiers as f_doc : x → y and f_tok : x = [t_1, . . . , t_T] → s = [s_1, . . . , s_T], respectively, where s_i ∈ V is a target token corresponding to t_i.

Figure 3: Top 10 keywords chosen from the frequency-based and attention-based selection schemes under the Amazon 50 class reviews dataset. The frequency-based scheme chooses uninformative words (e.g., '305'), while the attention-based scheme chooses more informative ones (e.g., 'watch').
Keyword Selection Schemes
We consider two keyword selection schemes, based on the dataset statistics (model-free) and trained models. While the former is computationally cheaper, the latter performs better; hence, one can choose its purpose.
Frequency-based. We first choose the keywords using the relative frequency of the words in the dataset. Specifically, we use the term frequency-inverse document frequency (TF-IDF; Robertson (2004)) metric, which measures the importance of a token by comparing its frequency in the target documents (term frequency) and the entire corpus (inverse document frequency). Here, the keywords are defined as the tokens with the highest TF-IDF scores. Formally, let X_c be a large document that concatenates all tokens in the class-wise corpus D_c, and let D = [X_1, . . . , X_C] be a corpus of such large documents. Then, the frequency-based score of token t is given by
s_freq(t) = max_{c ∈ {1,...,C}} tf(t, X_c) · idf(t, D)    (1)

where tf(t, X) = 0.5 + 0.5 · n_{t,X} / max{n_{t',X} : t' ∈ X}, idf(t, D) = log(|D| / |{X ∈ D : t ∈ X}|), and n_{t,x} is the number of occurrences of token t in document x. Note that the frequency-based selection is model-agnostic and easily computed, but does not reflect the contribution of the words to the prediction.
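The score in Eq. (1) can be computed as in the sketch below (our reading of the equation; tokenization is assumed to be done already):

```python
import math
from collections import Counter

def freq_scores(class_docs):
    """class_docs: C 'large documents', each a list of tokens
    (the concatenation X_c of all documents of class c)."""
    counts = [Counter(X) for X in class_docs]
    vocab = set().union(*(c.keys() for c in counts))
    scores = {}
    for t in vocab:
        idf = math.log(len(class_docs) / sum(1 for c in counts if t in c))
        tf = max(0.5 + 0.5 * c[t] / max(c.values()) for c in counts if t in c)
        scores[t] = tf * idf  # keywords = top-K tokens by this score
    return scores
```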
Attention-based. We also choose the keywords using the model attention, as it is a more direct and effective way to measure the importance of words on the model prediction. To this end, we first train a model with the standard approach using the cross-entropy loss L_CE, which leads the model to suffer from the over-reliance (on keywords) issue. Our idea is to use the attention values of this model for choosing the keywords. Here, the keywords are defined as the tokens with the highest attention values. Formally, let a = [a_1, . . . , a_T] ∈ R^T be the attention values of the document embedding, where a_i corresponds to the input token t_i. Then, the attention-based score of token t is given by

s_attn(t) = Σ_{(x,y) ∈ D} (1 / n_{t,x}) Σ_{i ∈ {1,...,T}} I(t_i = t) · a_i / ‖a‖    (2)

where I is an indicator function and ‖·‖ is the ℓ2-norm. We choose the keywords by picking the top K tokens according to the scores in Eq. (1) and Eq. (2) for each selection scheme, respectively. We also test the class-balanced version, i.e., picking the top K/C tokens for each class, but the class-agnostic one performed better.
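A sketch of the attention-based scoring follows; how the per-token attention values of the document embedding are extracted (e.g., [CLS] attention averaged over heads) is an assumption not fixed here.

```python
from collections import Counter, defaultdict
import numpy as np

def attn_scores(docs, attns):
    """docs: token lists; attns: matching per-token attention values
    toward the document embedding, as in Eq. (2)."""
    scores = defaultdict(float)
    for tokens, a in zip(docs, attns):
        a = np.asarray(a, dtype=float)
        a = a / np.linalg.norm(a)   # a_i / ||a||
        n = Counter(tokens)         # n_{t,x}
        for t_i, a_i in zip(tokens, a):
            scores[t_i] += a_i / n[t_i]
    return scores  # keywords = top-K tokens by score
```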
Comparison of the selection schemes. We observe that the frequency-based scheme often selects uninformative keywords that uniquely appears in some class. In contrast, the attention-based scheme selects more general keywords that actually influence the prediction. Figure 3 shows the keywords chosen by both selection schemes: the frequency-based scheme chooses uninformative words such as '305' and 'forerunner,' while the attention-based scheme chooses more informative ones such as 'watch' or 'chips.'
Regularization via Keyword Masking
Using the chosen keywords, we propose two regularization techniques to reduce the over-reliance issue and facilitate the model to look at the contextual information.
Masked keyword reconstruction. To enforce the model to look at the surrounding context, we guide the model to reconstruct the keywords from keyword-masked documents. Note that this resembles the masked language model (Devlin et al. 2019), but we mask the keywords instead of random words. Masked keyword reconstruction only regularizes sentences with keywords, and we omit the loss for ones without any keywords. Formally, let k̃ be a random subset of the full keyword k (selected as in Section 2.1), where each element is chosen with probability p independently. We mask k̃ from the original document x and get the masked document x̃ = x − k̃. Then, the masked keyword reconstruction (MKR) loss is
L_MKR(x̃, v) := Σ_{i ∈ index(k̃)} L_CE(f_tok(x̃)_i, v_i)    (3)
where index(k̃) is the index set of the keywords k̃ with respect to the original document x, and v_i is the index of the keyword with respect to the vocabulary set. We remark that the reconstruction part is essential; we also test simply augmenting the masked documents, i.e., L_CE(f_doc(x̃), y), but it performed worse. Choosing proper keywords is also crucial; attention-based keywords perform better than frequency-based or random keywords, as shown in Table 1 and Table 3.

Masked entropy regularization. Furthermore, we regularize the prediction on context-masked documents, where context (non-keyword) words are randomly dropped. The model should not classify the context-masked documents correctly, as they lost the original context. Formally, let c̃ be a randomly chosen subset of the full context c = x − k, where each element is chosen with probability q independently. We mask c̃ from the original document x and get the context-masked document x̂ = x − c̃. Then, the masked entropy regularization (MER) loss is

L_MER(x̂) := D_KL(U(y) ‖ f_doc(x̂))    (4)

where D_KL is the KL-divergence and U(y) is a uniform distribution. We remark that MER does not degrade the classification accuracy since it regularizes non-realistic context-masked sentences, rather than full documents. Table 1 shows that MER does not drop the classification accuracy in the original domain, while Table 3 and Table 4 show that MER improves the cross-domain accuracy. On the other hand, MER differs from prior sentence-level objectives, e.g., next sentence prediction (Devlin et al. 2019), as our goal is to regularize shortcuts, not to learn a better in-domain representation.
To sum up, the final objective is given by

L_total = L_CE + λ_MKR · L_MKR + λ_MER · L_MER    (5)

where λ_MKR and λ_MER are hyperparameters for the MKR and MER losses, respectively. Figure 4 visualizes the proposed losses, and the overall procedure is in Appendix B.
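A compact PyTorch sketch of the two regularizers is given below. It is a sketch under stated assumptions: `model` returns token logits and document logits, `keyword_mask` marks keyword positions, and mask-token handling follows the BERT convention; none of these names come from the paper.

```python
import torch
import torch.nn.functional as F

def masker_loss(model, input_ids, labels, keyword_mask,
                mask_id, p=0.5, q=0.9, l_mkr=1e-3, l_mer=1e-3):
    # MKR: mask each keyword with prob. p, reconstruct it from context
    drop_kw = keyword_mask & (torch.rand_like(input_ids.float()) < p)
    tok_logits, _ = model(input_ids.masked_fill(drop_kw, mask_id))
    mkr = F.cross_entropy(tok_logits[drop_kw], input_ids[drop_kw])

    # MER: drop each context (non-keyword) token with prob. q, and push
    # the document prediction toward the uniform distribution
    drop_ctx = ~keyword_mask & (torch.rand_like(input_ids.float()) < q)
    _, doc_logits = model(input_ids.masked_fill(drop_ctx, mask_id))
    mer = -F.log_softmax(doc_logits, dim=-1).mean()  # KL(U || f) up to a constant

    _, logits = model(input_ids)
    return F.cross_entropy(logits, labels) + l_mkr * mkr + l_mer * mer
```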
Experiments
We demonstrate the effectiveness of our proposed method, MASKER. In Section 3.1, we describe the experimental setup. In Section 3.2 and 3.3, we present the results on OOD detection and cross-domain generalization, respectively.
Experimental setup
We demonstrate the effectiveness of MASKER applied to the pre-trained models BERT (Devlin et al. 2019), RoBERTa (Liu et al. 2019), and ALBERT (Lan et al. 2020). We choose 10 × C keywords in a class-agnostic way, where C is the number of classes. We drop the keywords and contexts with probability p = 0.5 and q = 0.9 for all our experiments. We use λ_MKR = 0.001 and λ_MER = 0.001 for OOD detection, and the same λ_MKR = 0.001 but λ_MER = 0.0001 for cross-domain generalization, as the entropy regularization gives more gain for reliability than accuracy (Pereyra et al. 2017). We modify the hyperparameter settings of the pre-trained models (Devlin et al. 2019; Liu et al. 2019), specified in Appendix A.

1-vs-rest classifier. Complementary to MASKER, we use the 1-vs-rest classifier (Shu, Xu, and Liu 2017) as it further improves the reliability (see Table 1 and Table 3). Intuitively, a 1-vs-rest classifier can reject all classes (all prediction scores are low) and hence detect OOD samples well.
Baselines. We mainly compare MASKER with vanilla fine-tuning of pre-trained models (Hendrycks et al. 2020), with extensive ablation study (see Table 1 and Table 3). Additionally, we compare with residual ensemble (Clark, Yatskar, and Zettlemoyer 2019), applied to the same pre-trained models. Residual ensemble trains a debiased model by fitting the residual from a biased model. We construct a biased dataset by subsampling the documents that contain keywords. To benchmark the difficulty of the task, we also report the classic non-Transformer models, e.g., one-class support vector machine (OC-SVM, Schölkopf et al. (2000)), OpenMax (Bendale and Boult 2016), and DOC (Shu, Xu, and Liu 2017).
OOD Detection
We use the highest softmax (or sigmoid) output of the model as the confidence score for the OOD detection task. We use the 20 Newsgroups (Lang 1995) and Amazon 50 class reviews (Chen and Liu 2014) datasets for in-distribution, and the IMDB (Maas et al. 2011), SST-2 (Socher et al. 2013), and Fine Food (McAuley and Leskovec 2013) datasets for out-of-distribution. Table 1 shows an ablation study on MASKER under the Amazon reviews dataset with a split ratio of 25%. All components of MASKER contribute to OOD detection. Note that MASKER does not degrade the classification accuracy while improving OOD detection. Also, the attention-based selection performs better than the frequency-based or random selection, which implies the importance of selecting suitable keywords. Recall that the attention-based scheme selects the keywords that contribute to the prediction, while the frequency-based scheme often chooses domain-specific keywords that are not generalizable across domains. Table 2 shows the results on various OOD detection scenarios, comparing the vanilla fine-tuning, residual ensemble, and MASKER. Notably, MASKER shows the best results in all cases. In particular, MASKER improves the area under receiver operating characteristic (AUROC) from 87.0% to 98.6% on the 20 Newsgroups to SST-2 task. We find that residual ensemble shows inconsistent gains: it often shows outstanding results (e.g., Newsgroups to SST-2) but sometimes fails (e.g., Amazon to Fine Food). In contrast, MASKER shows consistent improvement over the vanilla fine-tuning.
In Figure 5a and Figure 5b, we visualize the t-SNE (Maaten and Hinton 2008) plots on the document embeddings of BERT and MASKER, under the Amazon reviews dataset with a split ratio of 25%. Blue and red points indicate in- and out-of-distribution samples, respectively. Unlike the samples that are entangled in the vanilla BERT, MASKER clearly distinguishes the OOD samples.

Cross-domain Generalization

Table 3 shows an ablation study on MASKER under the sentiment analysis task. The results are consistent with OOD detection, e.g., all components contribute to cross-domain generalization. Notably, while MER is not helpful for the original domain accuracy (see Table 1), it improves the cross-domain accuracy in most settings. In particular, MASKER improves the cross-domain accuracy from 75.6% to 80.0% for the Fine Food to SST-2 task. We analyze the most influential keywords (see Appendix D) and find that MASKER extracts sentiment-related (i.e., generalizable) keywords (e.g., 'astonishing'), while the vanilla BERT is biased toward domain-specific words (e.g., 'moonlight').

Table 4 presents the results on sentiment analysis, natural language inference, and semantic textual similarity tasks. We compare MASKER with the vanilla fine-tuning and residual ensemble. The residual ensemble helps cross-domain generalization, but the gain is not significant and it often degrades the original domain accuracy. This is because the keywords can be useful features for classification; hence, naively removing (or debiasing) those features may lose information. In contrast, MASKER facilitates contextual information rather than removing the keyword information, which regularizes the over-reliance in a softer manner.

Table 4: Accuracy (%) of original domain and cross-domain on (a) sentiment analysis, (b) natural language inference, and (c) semantic textual similarity tasks, respectively. The reported results are averaged over three trials for sentiment analysis and semantic textual similarity, and a single trial for natural language inference. Bold denotes the best results among the three methods, and brackets denote the relative gain of MASKER over the vanilla model.
In Figure 5c and Figure 5d, we provide the t-SNE plots on the document embeddings of BERT and MASKER, under the Fine Food to SST-2 task. Blue and red points indicate original and cross-domain samples, respectively. MASKER better entangles the same classes in the training and test datasets (of the different domains), while BERT fails to do so.
Conclusion
The reliability of text classifiers is an essential but underexplored problem. We found that the over-reliance on some keywords can be problematic for out-of-distribution detection and generalization. We propose a simple yet effective fine-tuning method, coined masked keyword regularization (MASKER), composed of two regularizers and keyword selection schemes to address this issue. We demonstrate the effectiveness of MASKER under various scenarios.
A Experimental Details
A.1 Training Details

Following Devlin et al. (2019) and Liu et al. (2019), we select the best hyperparameters from the search space below. We choose the learning rate from {1e−5, 2e−5, 5e−5} and the batch size from {16, 32}. We halve the learning rate for the embedding layers of MASKER, since the regularizer fits to the classifier and directly updating the embedding layers can be unstable. We also use a batch size of 4 for random word reconstruction due to the large vocabulary size. We use the Adam (Kingma and Ba 2015) optimizer for all experiments. We train the vanilla BERT and ALBERT for 3-4 epochs, and RoBERTa for 10 epochs, following Devlin et al. (2019) and Liu et al. (2019), respectively. For MASKER, we train BERT+MASKER and ALBERT+MASKER for 6-8 epochs, and RoBERTa+MASKER for 12 epochs. We remark that all the models are trained until convergence. Since MER cannot be directly applied to regression tasks (e.g., STS-B), we only use MKR for such settings.
A.2 Dataset Details
We use the pre-defined train and test splits if they exist: for the IMDB (Maas et al. 2011), SST-2 (Socher et al. 2013), MNLI (Williams, Nangia, and Bowman 2017), and STS-B datasets. If pre-defined splits do not exist, we randomly divide the dataset with a 70:30 ratio, using the parts for train and test splits, respectively: for the 20 Newsgroups (Lang 1995), Amazon 50 class reviews (Chen and Liu 2014), and Fine Food (McAuley and Leskovec 2013) datasets. We do not use any pre-processing methods, e.g., removing headers.
A.3 Evaluation Metrics
Let TP, TN, FP, and FN denote true positives, true negatives, false positives, and false negatives, respectively. We use the following metrics for OOD detection:
• Area under the receiver operating characteristic curve (AUROC). The ROC curve is a graph plotting true positive rate (TPR) = TP / (TP+FN) against the false positive rate (FPR) = FP / (FP+TN) by varying a threshold. AUROC measures the area under the ROC curve.
• Equal error rate (EER). EER is the error rate at the confidence threshold where the FPR equals the false negative rate (FNR) = FN / (TP+FN).
• Detection accuracy. Measures the maximum classification accuracy over all possible thresholds.
• TNR at TPR 80%. Measures true negative rate (TNR) = TN / (FP+TN) when TPR = 80%.
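All four metrics can be read off the ROC curve. The sketch below is our code, not the paper's; it assumes higher confidence scores for in-distribution (positive-class) samples and computes detection accuracy as the threshold-maximized balanced accuracy, which corresponds to equal in/out priors.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def ood_metrics(scores_in, scores_out):
    """AUROC, EER, detection accuracy, and TNR at TPR 80% from raw scores."""
    y = np.concatenate([np.ones(len(scores_in)), np.zeros(len(scores_out))])
    s = np.concatenate([scores_in, scores_out])
    fpr, tpr, _ = roc_curve(y, s)
    auroc = roc_auc_score(y, s)
    fnr = 1.0 - tpr
    eer = fpr[np.nanargmin(np.abs(fpr - fnr))]            # error rate where FPR == FNR
    det_acc = np.max(0.5 * (tpr + 1.0 - fpr))             # best balanced accuracy over thresholds
    tnr_at_tpr80 = 1.0 - fpr[np.searchsorted(tpr, 0.80)]  # TNR at first threshold with TPR >= 80%
    return auroc, eer, det_acc, tnr_at_tpr80
```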
AUROC measures the overall performance across varying thresholds, while the other three metrics measure the performance at some fixed threshold.

B Overall Procedure of MASKER
Figure 7 visualizes the overall procedure of MASKER, including the keyword selection scheme (attention-based selection using a vanilla model), masked keyword reconstruction, and masked entropy regularization.

C Additional Experimental Results
Table 5 presents the additional OOD detection results, including the split settings of the 20 Newsgroups and Amazon reviews datasets (Shu, Xu, and Liu 2017). Our method consistently outperforms the baseline methods. Table 6 presents the classification accuracy of the baselines and MASKER, which validates that MASKER does not degrade the accuracy. Table 7 presents further cross-domain results under the STS-B dataset, showing the effectiveness of our method. We also tried to regularize the attention weights to be uniform directly, but it harms both the classification accuracy and the OOD detection performance of vanilla models.

D Analysis on Attention Scores
In Figure 6, we visualize the most influential keywords measured by the attention scores. Our method makes predictions based on generalizable keywords, e.g., sentiment-related keywords for sentiment analysis tasks.
Figure 7: The overall procedure of MASKER using the attention-based keywords. Vanilla training and our proposed method are presented as the red and blue boxes, respectively. Parameter sharing without back-propagation is presented as a dashed arrow.
Figure 2: Frequency of the keywords selected from the source class (x-axis) in the target class (y-axis). (a) Both the source and target classes are in-distribution, (b) the source and target distributions are in- and out-of-distribution, respectively, and (c) the source and target distributions are identical but of multiple domains.
""Figure 4 :
4Love this monitor with a big screen" Mask Keywords (Keywords : monitor, screen) (a) Masked keyword reconstruction [CLS] [MASK] [MASK] monitor [MASK] [MASK] [MASK] screen [SEP] Love this monitor with a big screen" Mask Except Keywords (Keywords : monitor, screen) Illustration of two portions of our proposed method, MASKER: (a) Masked keyword reconstruction masks keyword tokens in input sentences and forces the model to predict the original words in masked tokens. (b) Masked entropy regularization masks non-keyword tokens in input sentences and forces the model to print uniform distribution, as regarding it as OOD.
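To make the two terms in this caption concrete, here is a hedged PyTorch sketch of how such losses could be written. It is our code, not the paper's implementation: `model.token_head` and `model.doc_head` are hypothetical names for the token-wise and document classifiers, and special tokens such as [CLS]/[SEP] are assumed to be excluded from `keyword_mask`.

```python
import torch
import torch.nn.functional as F

def masker_losses(model, input_ids, keyword_mask, mask_token_id):
    """keyword_mask: bool tensor (batch, length), True at keyword positions."""
    # (a) Masked keyword reconstruction: hide the keywords and ask the
    # token-wise head to recover the original words at those positions.
    mkr_input = input_ids.masked_fill(keyword_mask, mask_token_id)
    token_logits = model.token_head(mkr_input)                 # (B, L, V)
    mkr_loss = F.cross_entropy(token_logits[keyword_mask],
                               input_ids[keyword_mask])

    # (b) Masked entropy regularization: hide everything *except* the
    # keywords and push the document classifier toward a uniform output.
    mer_input = input_ids.masked_fill(~keyword_mask, mask_token_id)
    log_p = F.log_softmax(model.doc_head(mer_input), dim=-1)   # (B, C)
    mer_loss = -log_p.mean()  # cross-entropy against the uniform distribution
    return mkr_loss, mer_loss
```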
We conduct the experiments on sentiment analysis (IMDB, Maas et al. 2011; SST-2, Socher et al. 2013; Fine Food, McAuley and Leskovec 2013), natural language inference (MNLI; Williams, Nangia, and Bowman 2017), and semantic textual similarity (STS-B; Wang et al. 2019) tasks, following the settings of Hendrycks et al. (2020).
Figure 5: t-SNE plots on the document embeddings of BERT and MASKER, on (a,b) OOD detection (Amazon 50 class reviews with split ratio 25%), and (c,d) cross-domain generalization (Fine Food to SST-2). (a,b) Blue and red dots indicate the in- and out-of-distribution samples, respectively. (c,d) Blue and red dots indicate the samples from the same class ('negative') from the training and test domains, respectively. MASKER better distinguishes OOD samples and entangles cross-domain samples.
Figure 6: Top 10 keywords according to the attention scores, chosen by BERT and by ours, trained on Fine Food and tested on SST-2. The sentiment-related keywords are highlighted.
Table 2: AUROC (%) on various OOD detection scenarios. The reported results are averaged over three trials, and the best results are highlighted in bold. Bracket denotes the relative gain of MASKER over the vanilla model.
Table 3: Ablation study on cross-domain generalization under the sentiment analysis task. The reported results are averaged over five trials, subscripts denote standard deviations, bracketed numbers denote the generalization gap from the training domain accuracy, and the best accuracies are highlighted in bold. All components of our method contribute to the cross-domain accuracy (%). Columns are the train → test dataset pairs: IMDB → SST-2 | IMDB → Food | SST-2 → IMDB | SST-2 → Food | Food → SST-2 | Food → IMDB.

OpenMax: 79.55±0.78 (-8.12) | 75.41±1.20 (-12.25) | 75.30±0.44 (-7.61) | 62.19±3.06 (-20.72) | 61.85±0.63 (-31.70) | 67.50±1.50 (-26.04)
DOC: 77.90±1.22 (-10.06) | 78.33±1.52 (-9.64) | 76.88±0.70 (-6.23) | 64.47±2.52 (-18.63) | 62.00±0.86 (-31.27) | 67.31±1.28 (-25.96)
BERT (multi-class classifier): 85.92±1.92 (-7.57) | 92.90±2.47 (-0.60) | 85.74±0.56 (-6.74) | 87.57±1.13 (-4.91) | 67.55±5.27 (-28.92) | 77.31±2.09 (-19.16)
BERT (1-vs-rest classifier): 84.28±0.23 (-8.92) | 87.81±3.91 (-5.39) | 85.34±0.63 (-7.46) | 84.35±1.48 (-8.45) | 64.57±1.27 (-32.15) | 81.34±0.78 (-12.16)

BERT+MASKER (ours), 1-vs-rest classifier:
Random keywords, MKR only: 87.29±1.48 (-6.29) | 90.52±1.28 (-3.06) | 86.57±0.87 (-7.60) | 78.00±0.86 (-16.17) | 78.79±1.15 (-17.51) | 84.56±1.59 (-12.91)
Random keywords, MER only: 86.84±1.76 (-5.97) | 90.27±1.25 (-2.54) | 87.18±0.81 (-8.52) | 85.91±1.04 (-9.79) | 79.50±0.46 (-17.03) | 84.61±0.41 (-11.92)
Frequency keywords, MKR only: 86.52±1.40 (-6.04) | 88.41±1.72 (-4.15) | 87.06±1.14 (-7.37) | 79.99±1.88 (-14.44) | 74.72±1.90 (-23.73) | 80.94±2.80 (-17.51)
Frequency keywords, MER only: 86.38±0.88 (-6.72) | 84.20±2.59 (-8.90) | 85.31±2.31 (-13.55) | 88.43±1.85 (-10.43) | 75.34±1.54 (-20.66) | 85.34±0.99 (-10.66)
Attention keywords, MKR only: 87.50±1.49 (-5.91) | 92.03±2.97 (-5.74) | 87.78±1.34 (-4.92) | 90.12±2.68 (-2.58) | 75.57±4.02 (-20.82) | 79.32±5.09 (-17.07)
Attention keywords, MER only: 87.71±0.71 (-5.59) | 90.39±0.39 (-2.63) | 84.92±2.52 (-7.64) | 87.21±1.01 (-5.35) | 75.80±1.84 (-20.87) | 82.13±2.39 (-14.54)
Attention keywords, MKR+MER: 88.02±1.31 (-5.44) | 93.58±2.63 (+0.12) | 88.43±0.38 (-3.89) | 89.21±0.40 (-3.11) | 80.02±1.52 (-16.44) | 85.57±0.28 (-10.90)
Table 5: AUROC (%) on various additional OOD detection scenarios, including the split settings of the 20 Newsgroups and Amazon reviews datasets. The reported results are averaged over three trials, and the best results are highlighted in bold. MASKER outperforms the baselines in all cases. Each cell reports DOC / BERT / BERT+MASKER (ours); columns are AUROC ↑ | EER ↓ | Detection Accuracy ↑ | TNR at TPR 80% ↑ | Classification Accuracy ↑.

In-distribution: Newsgroup
OOD Newsgroup, split 10%: 83.7/85.4/87.0 | 23.1/22.9/21.0 | 81.0/90.9/91.5 | 61.0/67.4/75.2 | 98.7/99.0/98.9
OOD Newsgroup, split 25%: 86.1/89.0/91.0 | 18.7/19.1/17.0 | 82.7/83.0/86.9 | 78.5/80.5/84.8 | 93.9/95.3/94.9
OOD Newsgroup, split 50%: 80.4/82.4/83.0 | 28.2/25.4/25.0 | 72.3/75.0/76.8 | 65.0/68.0/71.8 | 89.9/94.4/94.0
OOD Amazon (100%): 84.1/86.0/96.8 | 23.3/19.5/8.3 | 90.1/87.2/94.7 | 74.5/86.7/95.5 | 86.9/90.4/90.1
OOD Reuter: 60.0/91.8/97.7 | 41.1/14.7/6.7 | 75.2/85.7/93.6 | 21.3/84.5/96.4
OOD IMDB: 88.6/94.6/98.5 | 19.1/11.5/5.1 | 88.4/93.4/96.3 | 81.7/87.7/98.2
OOD SST-2: 88.1/87.0/98.6 | 18.7/18.9/5.1 | 86.6/88.5/96.0 | 81.8/92.4/98.4
OOD Fine Food: 81.3/85.3/93.4 | 25.7/19.8/10.9 | 74.8/82.7/90.5 | 67.6/85.9/95.2

In-distribution: Amazon
OOD Amazon, split 10%: 80.7/82.8/86.0 | 22.9/22.1/21.0 | 79.2/92.3/93.0 | 74.0/75.7/80.1 | 92.7/93.5/93.3
OOD Amazon, split 25%: 75.1/78.5/85.5 | 25.6/28.3/22.3 | 80.2/81.2/85.0 | 66.0/66.3/74.1 | 83.1/84.8/85.2
OOD Amazon, split 50%: 74.6/75.4/77.7 | 29.0/28.9/27.9 | 69.4/70.2/71.1 | 60.0/60.7/61.5 | 78.8/81.7/80.6
OOD Newsgroup (100%): 81.3/84.8/87.2 | 23.0/21.6/20.0 | 80.0/81.5/83.5 | 72.7/77.0/80.4 | 66.6/70.8/70.0
OOD Reuter: 79.8/89.7/93.5 | 30.8/18.1/12.2 | 82.5/87.1/89.9 | 60.0/83.2/90.7
OOD IMDB: 89.6/93.3/95.2 | 18.1/13.9/10.5 | 82.4/87.0/90.7 | 83.0/89.2/92.9
OOD SST-2: 91.5/93.0/95.6 | 15.8/14.1/9.5 | 84.5/89.6/92.9 | 87.1/88.2/93.9
OOD Fine Food: 66.8/78.5/84.9 | 38.0/30.0/19.5 | 69.7/73.9/80.7 | 55.3/62.7/80.8

Table 6: Classification accuracy (%) of the datasets used for OOD detection. MASKER shows a comparable accuracy with the vanilla models, while residual ensemble shows a marginal drop. Columns: OpenMax | DOC | then Vanilla/Residual/MASKER for BERT, RoBERTa, and ALBERT.
Newsgroups: 85.8 | 86.9 | 90.4/89.5/90.1 | 90.9/88.1/90.7 | 89.3/89.7/89.7
Amazon: 63.0 | 66.6 | 70.8/70.3/70.0 | 71.0/69.1/71.2 | 68.7/64.2/68.6
Table 7: Total Pearson correlation (%) of the four genres in the STS-B dataset. The reported results are averaged over 3 trials. Each block fixes the training genre; columns give the test genre.

Trained on MSRvid (test: MSRvid | Images | MSRpar | Headlines):
BERT: 91.5 | 82.0 | 38.2 | 61.7
+MASKER: 91.2 | 84.3 | 40.9 | 66.7
RoBERTa: 94.2 | 88.0 | 66.0 | 80.3
+MASKER: 93.7 | 88.0 | 67.1 | 84.0
ALBERT: 92.6 | 81.2 | 39.4 | 60.6
+MASKER: 93.3 | 82.6 | 39.8 | 68.8

Trained on Images (test: Images | MSRvid | MSRpar | Headlines):
BERT: 88.0 | 89.7 | 50.8 | 73.9
+MASKER: 88.1 | 91.6 | 52.5 | 75.3
RoBERTa: 91.8 | 92.9 | 68.4 | 84.1
+MASKER: 91.3 | 94.1 | 70.1 | 85.3
ALBERT: 90.4 | 90.9 | 44.9 | 69.8
+MASKER: 90.5 | 92.0 | 45.2 | 78.4

Trained on Headlines (test: Headlines | MSRvid | Images | MSRpar):
BERT: 86.1 | 83.2 | 81.1 | 69.9
+MASKER: 86.8 | 88.0 | 83.6 | 75.8
RoBERTa: 90.7 | 93.3 | 90.1 | 75.5
+MASKER: 88.2 | 90.3 | 90.7 | 70.9
ALBERT: 86.8 | 89.3 | 87.1 | 63.5
+MASKER: 87.0 | 90.4 | 87.1 | 67.5

Trained on MSRpar (test: MSRpar | MSRvid | Images | Headlines):
BERT: 74.2 | 74.1 | 71.9 | 67.1
+MASKER: 77.6 | 79.1 | 75.9 | 67.7
RoBERTa: 86.4 | 88.2 | 85.8 | 85.4
+MASKER: 84.8 | 90.2 | 85.9 | 86.4
ALBERT: 78.5 | 82.4 | 80.8 | 69.2
+MASKER: 76.7 | 82.7 | 81.7 | 75.8
Acknowledgements
This work was supported by a Center for Applied Research in Artificial Intelligence (CARAI) grant funded by the Defense Acquisition Program Administration (DAPA) and the Agency for Defense Development (ADD) (UD190031RD).
References
Aggarwal, C. C.; and Zhai, C. 2012. A survey of text classification algorithms. In Mining Text Data, 163-222. Springer.
Agirre, E.; Cer, D.; Diab, M.; and Gonzalez-Agirre, A. 2012. SemEval-2012 Task 6: A pilot on semantic textual similarity. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics, and Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), 385-393.
Agrawal, A.; Batra, D.; Parikh, D.; and Kembhavi, A. 2018. Don't just assume; look and answer: Overcoming priors for visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 4971-4980.
Bakshi, R. K.; Kaur, N.; Kaur, R.; and Kaur, G. 2016. Opinion mining and sentiment analysis. In 2016 3rd International Conference on Computing for Sustainable Global Development (INDIACom), 452-455. IEEE.
Bendale, A.; and Boult, T. E. 2016. Towards open set deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 1563-1572.
Bhatt, H. S.; Semwal, D.; and Roy, S. 2015. An iterative similarity based adaptation technique for cross-domain text classification. In Proceedings of the Nineteenth Conference on Computational Natural Language Learning, 52-61. Beijing, China: Association for Computational Linguistics. doi:10.18653/v1/K15-1006. URL https://www.aclweb.org/anthology/K15-1006.
Bhatt, H. S.; Sinha, M.; and Roy, S. 2016. Cross-domain text classification with multiple domains and disparate label sets. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 1641-1650. Berlin, Germany: Association for Computational Linguistics. doi:10.18653/v1/P16-1155. URL https://www.aclweb.org/anthology/P16-1155.
Bowman, S. R.; Angeli, G.; Potts, C.; and Manning, C. D. 2015. A large annotated corpus for learning natural language inference. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 632-642.
Chen, Z.; and Liu, B. 2014. Mining topics in documents: Standing on the shoulders of big data. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1116-1125.
Choi, E.; He, H.; Iyyer, M.; Yatskar, M.; Yih, W.-t.; Choi, Y.; Liang, P.; and Zettlemoyer, L. 2018. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2174-2184.
Clark, C.; Yatskar, M.; and Zettlemoyer, L. 2019. Don't take the easy way out: Ensemble based methods for avoiding known dataset biases. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 4069-4082.
Clark, K.; Luong, M.-T.; Le, Q. V.; and Manning, C. D. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations.
Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), 4171-4186.
DeVries, T.; and Taylor, G. W. 2017. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552.
Fei, G.; and Liu, B. 2015. Social media text classification under negative covariate shift. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2347-2356.
Geirhos, R.; Jacobsen, J.-H.; Michaelis, C.; Zemel, R.; Brendel, W.; Bethge, M.; and Wichmann, F. A. 2020. Shortcut learning in deep neural networks. arXiv preprint arXiv:2004.07780.
Hendrycks, D.; and Gimpel, K. 2017. A baseline for detecting misclassified and out-of-distribution examples in neural networks. In International Conference on Learning Representations.
Hendrycks, D.; Liu, X.; Wallace, E.; Dziedzic, A.; Krishnan, R.; and Song, D. 2020. Pretrained transformers improve out-of-distribution robustness. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2744-2751.
Kingma, D. P.; and Ba, J. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
Lan, Z.; Chen, M.; Goodman, S.; Gimpel, K.; Sharma, P.; and Soricut, R. 2020. ALBERT: A lite BERT for self-supervised learning of language representations. In International Conference on Learning Representations.
Lang, K. 1995. NewsWeeder: Learning to filter netnews. In Machine Learning Proceedings 1995, 331-339. Elsevier.
Lewis, D. D.; Yang, Y.; Rose, T. G.; and Li, F. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research 5(Apr): 361-397.
Liu, Y.; Ott, M.; Goyal, N.; Du, J.; Joshi, M.; Chen, D.; Levy, O.; Lewis, M.; Zettlemoyer, L.; and Stoyanov, V. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Maas, A. L.; Daly, R. E.; Pham, P. T.; Huang, D.; Ng, A. Y.; and Potts, C. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, 142-150.
Maaten, L. v. d.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov): 2579-2605.
Marasović, A. 2018. NLP's generalization problem, and how researchers are tackling it. The Gradient.
McAuley, J. J.; and Leskovec, J. 2013. From amateurs to connoisseurs: Modeling the evolution of user expertise through online reviews. In Proceedings of the 22nd International Conference on World Wide Web, 897-908.
McCoy, T.; Pavlick, E.; and Linzen, T. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 3428-3448.
Min, S.; Wallace, E.; Singh, S.; Gardner, M.; Hajishirzi, H.; and Zettlemoyer, L. 2019. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4249-4257.
Minderer, M.; Bachem, O.; Houlsby, N.; and Tschannen, M. 2020. Automatic shortcut removal for self-supervised representation learning. In International Conference on Machine Learning.
Mosbach, M.; Andriushchenko, M.; and Klakow, D. 2020. On the stability of fine-tuning BERT: Misconceptions, explanations, and strong baselines. arXiv preprint arXiv:2006.04884.
Nam, J.; Cha, H.; Ahn, S.; Lee, J.; and Shin, J. 2020. Learning from failure: Training debiased classifier from biased classifier. arXiv preprint arXiv:2007.02561.
Niven, T.; and Kao, H.-Y. 2019. Probing neural network comprehension of natural language arguments. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 4658-4664.
Pereyra, G.; Tucker, G.; Chorowski, J.; Kaiser, Ł.; and Hinton, G. 2017. Regularizing neural networks by penalizing confident output distributions. In ICLR Workshop.
Reddy, S.; Chen, D.; and Manning, C. D. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics 7: 249-266.
Robertson, S. 2004. Understanding inverse document frequency: On theoretical arguments for IDF. Journal of Documentation.
Sanh, V.; Debut, L.; Chaumond, J.; and Wolf, T. 2019. DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter. In 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing, NeurIPS 2019.
Schölkopf, B.; Williamson, R. C.; Smola, A. J.; Shawe-Taylor, J.; and Platt, J. C. 2000. Support vector method for novelty detection. In Advances in Neural Information Processing Systems, 582-588.
Shu, L.; Xu, H.; and Liu, B. 2017. DOC: Deep open classification of text documents. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, 2911-2916.
Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C. D.; Ng, A. Y.; and Potts, C. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, 1631-1642.
Sun, C.; Qiu, X.; Xu, Y.; and Huang, X. 2019. How to fine-tune BERT for text classification? In China National Conference on Chinese Computational Linguistics, 194-206. Springer.
Tack, J.; Mo, S.; Jeong, J.; and Shin, J. 2020. CSI: Novelty detection via contrastive learning on distributionally shifted instances. In Advances in Neural Information Processing Systems.
Tan, M.; Yu, Y.; Wang, H.; Wang, D.; Potdar, S.; Chang, S.; and Yu, M. 2019. Out-of-domain detection for low-resource text classification tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 3566-3572.
Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, 5998-6008.
Wang, A.; Singh, A.; Michael, J.; Hill, F.; Levy, O.; and Bowman, S. R. 2019. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In International Conference on Learning Representations.
Williams, A.; Nangia, N.; and Bowman, S. R. 2017. A broad-coverage challenge corpus for sentence understanding through inference. arXiv preprint arXiv:1704.05426.
Zhang, T.; Wu, F.; Katiyar, A.; Weinberger, K. Q.; and Artzi, Y. 2020. Revisiting few-sample BERT fine-tuning. arXiv preprint arXiv:2006.05987.
| [] |
[
"DEVELOPPEMENT DE METHODES AUTOMATIQUES POUR LA REUTILISATION DES COMPOSANTS LOGICIELS",
"DEVELOPPEMENT DE METHODES AUTOMATIQUES POUR LA REUTILISATION DES COMPOSANTS LOGICIELS"
] | [
"Koffi Kouakou \nDépartement d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n\n",
"Ive Arsène \nDépartement d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n\n",
"Docteur Brou \nDépartement d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n\n",
"Prof. OUMTAGADAKonan Marcellin \nDépartement d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n\n",
"Souleymane \nDépartement d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n\n"
] | [
"Département d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n",
"Département d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n",
"Département d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n",
"Département d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n",
"Département d'informatique Ecole Doctorale Polytechnique Institut National Polytechnique Houphouet Boigny (EDP-INPHB) YAMOUSSOUKRO CÔTE D'IVOIRE\n"
] | [] | Les développeurs et les entreprises se trouvent souvent confrontées à une masse importante d'informations dans l'ingénierie des systèmes d'information. Cette masse importante d'informations a pour conséquence l'augmentation de la taille des logiciels à développer et l'accroissement de la complexité de ces applications. Pour résoudre ces difficultés, les développeurs de logiciel font de plus en plus recours à des composants réutilisables dans leurs applications. La réutilisation de ces composants nécessite le développement de modèles et de méthodes sans lesquels de nombreuses tâches sont manuelles et répétitives. Nous allons, dans ces travaux de recherche, développer des méthodes automatiques pour évaluer et améliorer la qualité des composants logiciels sélectionnés selon les critères définis par l'utilisateur. Il s'agit de :1. Définir un modèle de qualité du composant logiciel; 2. Définir un processus de sélection du composant pertinent et adapté au système logiciel à construire; 3. Etablir un modèle métrique pour évaluer, maximiser la qualité du composant. 4. Optimiser le coût et le temps de maintenance. Ce travail est organisé comme suit. La première partie concerne l'état de l'art relatif à la sélection des composants réutilisables. La deuxième partie traite des différents modèles développés qui font objet de notre article. La dernière partie concerne la conclusion et les perspectives. | null | [
"https://arxiv.org/pdf/1703.09749v1.pdf"
] | 10,789,811 | 1703.09749 | 7fff4475319670c756f7626124db13e338d58f10 |
DEVELOPPEMENT DE METHODES AUTOMATIQUES POUR LA REUTILISATION DES COMPOSANTS LOGICIELS
Koffi Kouakou Ive Arsène
Docteur Brou Konan Marcellin
Prof. Oumtagada Souleymane
Département d'informatique, Ecole Doctorale Polytechnique, Institut National Polytechnique Houphouet Boigny (EDP-INPHB), Yamoussoukro, Côte d'Ivoire
Keywords: method development, reuse, software components, component quality

ABSTRACT. The large amount of information and the increasing complexity of applications constrain developers to obtain stand-alone, reusable components from libraries and component markets. Our approach consists in developing methods to evaluate the quality of the software components of these libraries on the one hand, and on the other hand to optimize the financial cost and the adaptation time of the selected components. Our objective function defines a metric that maximizes the value of the software component quality while minimizing the financial cost and maintenance time. This model should make it possible to classify the components and order them so as to choose the most optimized one.

INTRODUCTION
Developers and companies are often confronted with a large mass of information in information systems engineering. This large mass of information results in an increase in the size of the software to be developed and in the complexity of these applications. To overcome these difficulties, software developers increasingly resort to reusable components in their applications. Reusing these components requires the development of models and methods without which many tasks remain manual and repetitive. In this research work, we develop automatic methods to evaluate and improve the quality of software components selected according to the criteria defined by the user. This involves: 1. defining a quality model for the software component; 2. defining a process for selecting the component that is relevant and suited to the software system to be built; 3. establishing a metric model to evaluate and maximize the quality of the component; 4. optimizing the maintenance cost and time. This work is organized as follows. The first part presents the state of the art on the selection of reusable components. The second part deals with the different models we developed, which are the subject of this article. The last part presents the conclusion and perspectives.
3.2. Proposal of a process for selecting the software component
We now describe the process for selecting the components, which are then evaluated. This leads us to choose the relevant, optimized component.
Fig. 2: Model for evaluating the quality of the software component
In black: the actions to be carried out during the selection. In red: the methods used during the selection. This process is modelled in UML as follows:
Fig. 3: Process for selecting a component
The aim is to define a selection process that allows the user to choose a software component from a library. Indeed, numerous libraries exist on the web, such as ComponentSource, Sourceforge, Flashline, AlterWay, etc., which make it possible to program a graphical interface for given applications. The selection process follows the steps below.
Step 1: The user expresses the quality requirements for the component.
Step 2: A first search takes into account the functional properties that provide certain services related to the type of software to be built and, above all, to the needs expressed by the user. We obtain a set of selected software components whose properties are the functional properties provided by the software components, in other words the services rendered by the different components.
Step 3: This step carries out a selection based on the non-functional properties. It takes into account the quality of the software component and how the services are rendered. The quality of the component is evaluated using the defined metric, and the component that best meets the quality criteria defined by the user is selected.
Step 4: At this step, we evaluate the maintenance, that is, the phase of modifying and adapting the component within a system in use. It represents the simulation phase for determining the quality of the component. At this level, we apply the metric that evaluates the generated financial cost and time; it produces a financial cost and a maintenance time.
Step 5: If the cost and time parameters are optimized, the selected component is retained.
Step 6: If they are not, the search continues and the process starts again.
3.3. Proposal of a model for evaluating the financial cost and the maintenance time
Drawing on the models defined in the literature reviews, our approach consists in evaluating, in addition to the financial cost of the component, the time taken to carry out the maintenance.
The objective of our model is to take the time parameter into account when evaluating the quality of the software component, using linear programming.
$$f(c_i, t_i) = \alpha c_i + (1 - \alpha)\, t_i, \qquad i \in SC$$

with the following constraints:

$$t_i = \frac{t}{T_{max}},\ 0 \le t_i \le 1 \qquad c_i = \frac{c}{C_{max}},\ 0 \le c_i \le 1 \qquad (4)$$

$$S_i = \sum_{h \in A} w_h\, q_{hi}\, x_i - (\alpha c_i + (1 - \alpha)\, t_i)\, x_i \qquad (5)$$

$$0 \le \alpha \le 1, \qquad 0 < t \le T_{max}, \qquad 0 < c \le C_{max} \qquad (6, 7)$$

$$\sum_{h \in A} w_h = 1, \qquad x_i = \begin{cases} 1 & \text{if component } i \text{ is selected} \\ 0 & \text{otherwise} \end{cases}, \qquad Q_i = \sum_{h \in A} w_h\, q_{hi}\, x_i, \ i \in SC \qquad (8)$$
Procedures used
We initialize the five main characteristics (Fig. 1) defined by the user for our study. After the selection phase, which yields the set of components Pi, we evaluate the quality of the characteristics of these components according to their importance (weight) and the relative quality of the component (score), using the procedure evaluation(Pi, nbre).
Conclusion
Our work rests on three approaches. We built a quality model that takes into account the characteristics defining the reliability and safety of the software to be built. We then built a model for selecting software components from a library or from the web. Finally, we defined a metric that takes into account the financial cost and the adaptation time of the selected components. This metric evaluates and simulates the quality of the component while optimizing the cost and time parameters. This approach is supported by the SelectCompo algorithm that we constructed. Several aspects remain to be developed, such as taking into account the selection of software components from various libraries for any platform, which would solve the interoperability problem of these components across different platforms. In future work, we could test how easily these components integrate on a given platform. To address the difficulties of deploying certain software components on a combination of platforms and of providing services to anonymous users or clients, we could orient our research towards web services. This would have the advantage of reacting quickly to any change while ensuring the reliability and security of these reusable components.
SECTION 1
1. State of the art
1.1. Multi-criteria analysis and optimization methods
To evaluate the quality of selected objects in general, and of software components in particular, several multi-criteria decision-making methods have been produced. In the article by E. Triantaphyllou et al. (1998), the researchers propose the Analytic Hierarchy Process (AHP). This technique builds a pairwise comparison table of the characteristics and sub-characteristics of the software components, level by level; it then determines the weights of these different criteria and sub-criteria on the one hand, and evaluates the consistency of this table on the other. In the work of A. A. Zaidan et al. (2015), the authors define a multi-criteria decision-making approach for dealing with complex problems. This approach decomposes the problem into several levels, defining the objectives and providing an overall framework for evaluating solutions. They propose the AHP analysis method as the best alternative for maximization over complex data, and give the following model:

$$A^{*}_{AHP} = \max_{i} \sum_{j} q_{ij}\, w_j, \qquad \text{for } i = 1, 2, 3, \ldots, M \qquad (1)$$

For the optimization of the parameters attached to the selected components, the article by R. Perriot et al. (2014) gives several mathematical optimization models in linear programming. One of these models establishes a compromise between the minimum monetary cost and the response time in computing clouds. It is formulated below:

$$\text{minimize} \quad \alpha C + (1 - \alpha)\, T \qquad (2)$$

with constraints that define C, the cost model, and T, the time for selecting the views.

1.2. Selection of software components
Information management involves searching for, selecting, and storing relevant documents in general and, in the field of software engineering, software components in particular. E. Rames (1991) defines a retrieval model based on the hierarchical and thematic classification of the software components contained in a base; however, his component classification method was based on a manual technique. In the article by B. George et al. (2010), a mechanism automating the selection of a software component among a set of candidates, according to their functional and non-functional properties, is studied. This mechanism makes it possible to extract and compare components: after the component selection phases, the satisfaction index of the different selected candidate components is measured in order to find the most relevant ones. The article by A. A. Zaidan et al. (2015) proposes a comparative study of software whose goal is to evaluate and select open-source software for managing electronic and digital medical records. This study is carried out with different multi-criteria decision-making techniques: the software systems are selected on the basis of a set of metric results using the AHP technique integrated with different multi-criteria decision-making techniques. In the article by J. Pande et al. (2013), a pliability metric is defined. This metric allowed them to determine an optimal selection of software components with the following model:

$$\max \sum_{i \in SC} \sum_{h \in A} w_h\, q_{hi}\, x_i - p \sum_{i \in SC} c_i\, x_i \qquad (3)$$

where A = the set of quality attributes; SC = the set of available components (candidate components); q_hi = the normalized level of quality attribute h ∈ A for component i; w_h = the weight assigned to quality attribute h ∈ A; x_i = 1 if component i is selected, 0 otherwise; c_i = the normalized cost of component i.

1.3. Limits of the methods
Considering the methods developed, we note that the various researchers have made enormous contributions. However, certain aspects, such as the adaptation and maintenance time of the software components on the one hand and, on the other hand, the joint optimization of the two parameters, cost and time, of the components selected from these libraries, have not been taken into account.

Fig. 1: Software component quality model. This model is based on the ISO 9126 quality model and on the quality representations found in the literature. It makes it possible to specify the characteristics that are most important for the choice of software components according to the user's needs. By using the Analytic Hierarchy Process (AHP) technique, we can reach the objective of choosing the software component that best meets the user's needs. To do so, we built the hierarchical quality model from the characteristics and sub-characteristics of the software components (Fig. 1). Then, using the multi-criteria analysis method, we built a pairwise comparison table of the characteristics and sub-characteristics. This made it possible to determine the weights of the different defined quality criteria of the software component on the one hand and, on the other hand, to evaluate the consistency of our work.
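To make the AHP step concrete, the sketch below (our illustrative code, not taken from any of the cited works) derives criterion weights and a consistency ratio from a pairwise comparison matrix, then scores candidates as in model (1). The matrix values and the candidate ratings are made-up numbers.

```python
import numpy as np

def ahp_weights(pairwise):
    """Weights via the principal-eigenvector method, plus the consistency ratio."""
    vals, vecs = np.linalg.eig(pairwise)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()
    n = pairwise.shape[0]
    ci = (vals[k].real - n) / (n - 1)      # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index, n = 3..5
    return w, ci / ri                      # weights, consistency ratio

# Illustrative 5x5 matrix over: functional capability, reliability,
# usability, security, maintainability.
A = np.array([[1,   2,   4, 3,   3],
              [1/2, 1,   3, 2,   2],
              [1/4, 1/3, 1, 1/2, 1/2],
              [1/3, 1/2, 2, 1,   1],
              [1/3, 1/2, 2, 1,   1]], dtype=float)
w, cr = ahp_weights(A)   # cr should stay below ~0.1 for a consistent table

# Model (1): score each candidate i as sum_j q_ij * w_j and take the best.
Q = np.array([[0.8, 0.7, 0.6, 0.9, 0.5],   # candidate ratings q_ij
              [0.6, 0.9, 0.8, 0.7, 0.6]])
best = int(np.argmax(Q @ w))
```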
SECTION 2
2. Research problem
In the literature review, various works related to methods for selecting relevant software components have been carried out. The work we present deals with the problem of evaluating the quality of pre-built components: maximizing their quality values while optimizing the financial and maintenance cost and the adaptation time of these components, with a view to reusing them in a software system. Our objective is therefore to determine a metric that makes it possible to maximize the quality of the selected software component while minimizing the financial and maintenance cost and the adaptation time of this component. This means defining a quality model for the software component, then defining a process for selecting the components suited to the software system to be built, and finally establishing a metric model to evaluate the quality of the selected software component on the one hand and, on the other hand, to optimize the financial cost and the modification and adaptation time of this component. This leads us to formulate the following research hypotheses:
- Do the defined characteristics of the software components make it possible to obtain measurable attributes and to generate quality software components?
- Does selecting a relevant, quality component that meets the user's needs make it possible to optimize the financial and maintenance cost and the adaptation time of this component in a software system?
SECTION 3
3. Modelling phase
3.1. Proposal of a quality model for the software component
We are interested in evaluating the selection and integration of software components into a software system. Our main objective is to choose the "best software component" from a library or a component marketplace. This selection must satisfy, as well as possible, the criteria defined by the user and the type of application to be built. The characteristics defined are the following: functional capability, reliability, usability, security, and maintainability. This allows us to define the following model 1:
We set out to define a metric with two parameters, the financial cost and the time, which depend on the index of the chosen component. This metric serves to optimize these parameters in order to select the "best software component" on the one hand and, on the other hand, to minimize the financial cost and the adaptation and modification time of the selected component. We define our function as follows:

$$f(c_i, t_i) = \alpha c_i + (1 - \alpha)\, t_i, \qquad i \in SC$$

where SC is the set of available components; c_i the normalized financial maintenance cost of component i; c the relative cost generated by component i; C_max the maximum cost produced by one of the selected components; t_i the normalized adaptation and maintenance time of component i; t the relative time generated by component i; α the adaptation coefficient; and T_max the maximum time produced by one of the selected components, with $t_i = t / T_{max}$, $0 \le t_i \le 1$ and $c_i = c / C_{max}$, $0 \le c_i \le 1$ as in model (4).

The metric for every selected software component i is therefore the objective function of model (5),

$$S_i = \sum_{h \in A} w_h\, q_{hi}\, x_i - (\alpha c_i + (1 - \alpha)\, t_i)\, x_i,$$

where A = the set of quality characteristics of the software; SC = the set of available components (candidate components); q_hi = the normalized level of quality attribute h ∈ A for component i; w_h = the weight assigned to quality attribute h ∈ A; x_i = 1 if component i is selected, 0 otherwise; c_i = the normalized cost of component i; t_i = the normalized maintenance time of component i; and α = an adaptation coefficient to be specified, subject to the constraints of model (8).

Model (5) represents the objective function. This function computes and then evaluates the quality of the characteristics of the selected software components.
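As a minimal sketch (the function name and the sample numbers are ours, not the paper's), model (5) can be computed directly once the weights, quality levels, costs, and times are normalized per model (4):

```python
import numpy as np

def quality_score(w, q, c_rel, t_rel, alpha, C_max, T_max):
    """Model (5): S_i = sum_h w_h * q_hi - (alpha*c_i + (1-alpha)*t_i)
    for every candidate i, with costs and times normalized per model (4)."""
    c = np.asarray(c_rel) / C_max                 # c_i = c / C_max
    t = np.asarray(t_rel) / T_max                 # t_i = t / T_max
    assert abs(sum(w) - 1) < 1e-9 and 0 <= alpha <= 1   # constraints of model (8)
    return np.asarray(q) @ np.asarray(w) - (alpha * c + (1 - alpha) * t)

# q[i][h]: normalized quality level of attribute h for candidate i.
S = quality_score(w=[0.35, 0.25, 0.10, 0.15, 0.15],
                  q=[[0.9, 0.8, 0.7, 0.9, 0.6],
                     [0.7, 0.9, 0.8, 0.6, 0.8]],
                  c_rel=[120, 80], t_rel=[10, 14],
                  alpha=0.5, C_max=150, T_max=20)
best = int(np.argmax(S))   # index of the most optimized component
```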
In the next step, we optimize the components according to their adaptation cost and time with the procedure optimisation(Pi, nbre). Finally, we choose the most optimized component of the set of components Pi with the procedure affichage(); it corresponds to the component having the maximum value among the Pi, noted Max, after applying the procedure optimisation(Pi, nbre).
To write the SelectCompo algorithm, we use the following procedures:
a. Procedure initialisation(): initializes the characteristics and sub-characteristics defined by the user; all the values described (Fig. 1) are initialized.
b. Procedure saisie(biblio: string, nbre: integer): reads, as input, the library and the number of available components meeting the criteria defined by the user; as output, we obtain the sum of the weights of the different characteristics.
c. Procedure evaluation(Composant: array[1..MaxCompo] of string, nombre: integer): computes, with the metric, the defined quality values. It takes as input a list of components and the desired number of components; after tests and checks, it returns a vector of real values, noted Somme[], giving the quality value of the components.
d. Procedure optimisation(nbre: integer): ranks and orders the quality values returned by the procedure evaluation; its parameter is the number of retained components to sort.
e. Procedure affichage(): displays the component retained after sorting the quality values of the components in descending order.

4.3. Presentation of the algorithms
In this part of our work, we present the pseudo code of the different algorithms.

4.3.1. Algorithm SelectCompo
The main program is as follows:

Algorithm SelectCompo
Input: biblio, tableau_de_composants: string; nbre, i: integer; Somme[]: array of reals
Output: compo: string
begin
  while (needs are expressed) do
    saisie(biblio, nombre)
    nbre := nombre
    for i from 1 to nbre do
      Selectionner(Pi, nbre)
      Pi := tableau_de_composants
    end for
    initialisation()
  end while
  if (the characteristic conditions are met) and (the relative cost and time lie within the required intervals) then
    for i from 1 to nbre do
      evaluation(Pi, nbre)
      somme[i] := ValeurQualite(Pi)
    end for
    if the quality is satisfactory then
      Optimisation(nbre)
    else
      return the components to biblio
    end if
    Affichage()
  end if
end
Fig. 4: Pseudo code of SelectCompo

4.3.2. Algorithm Evaluation
We gave the objective function S_i represented by model (5). This function evaluates the quality of the software components of the library while taking into account the constraints defined in model (8). The Evaluation algorithm computes the quality values defined by S_i. This procedure takes as parameters a list of components and the number of components obtained after the selection phase based on their functional properties. After tests and checks, it returns a vector of real values giving the quality value of the components. The normalizations used are: w_j = Caract[j].poids, q_j^r = Caract[j].note, q_j = q_j^r / Q_max, t_i = T_i^r / T_max, and c_i = C_i^r / C_max.

procedure Evaluation(Composant: array[1..MaxCompo] of string, nombre: integer)
Input: i, j: integer; Som_qlte, Som_coutTemps: real
Output: somme: array[1..Max] of reals
begin
  write("number of components"); read(nombre)
  if (nombre > MaxCompo) then nombre := MaxCompo
  for i from 1 to nombre do
    write("cost and time relative to the current component"); read(C_r[i], T_r[i])
    Som_qlte := 0; Som_coutTemps := 0
    for j from 1 to MaxCaracter do
      if (0 <= Caract[j].poids <= 1) and (0 < Caract[j].note <= Q_max) and (sum of the w_j equals 1) then
        Som_qlte := Som_qlte + w[j] * q[j]
      end if
    end for
    if (0 < T_r[i] <= T_max) and (0 < C_r[i] <= C_max) then
      Som_coutTemps := Som_coutTemps + (alpha * c[i] + (1 - alpha) * t[i])
      somme[i] := Som_qlte - Som_coutTemps
    end if
  end for
end
Fig. 5: Code of the procedure Evaluation

4.3.3. Algorithm Optimisation
The Evaluation algorithm allowed us to select a set of components that meet the quality needs defined by the user. With the procedure Optimisation, we sort this vector of reals, which gives the degree of quality of the retained components, in descending order.

procedure Optimisation(Max: integer)
Input: i, j: integer; somme: array[1..Max] of reals
Output: somme: array[1..Max] of reals sorted in descending order
begin
  for i from 1 to Max - 1 do
    for j from i + 1 to Max do
      if (somme[i] < somme[j]) then   // swap so that larger values come first
        tampon := somme[i]
        somme[i] := somme[j]
        somme[j] := tampon
      end if
    end for
  end for
end
Fig. 6: Code of the procedure Optimisation
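The pseudo code above translates almost directly into Python. The sketch below (our naming and simplified I/O, not the paper's implementation) keeps the same structure: evaluate each candidate with model (5), sort the quality values in descending order, and return the best component.

```python
def evaluation(components, weights, alpha, Q_max, C_max, T_max):
    """Return the quality value S_i of each candidate (model (5))."""
    scores = []
    for comp in components:                        # comp: dict with notes, cost, time
        s_qlte = sum(w * (note / Q_max)            # sum_h w_h * q_hi
                     for w, note in zip(weights, comp["notes"]))
        c_i = comp["cost"] / C_max                 # normalization, model (4)
        t_i = comp["time"] / T_max
        scores.append(s_qlte - (alpha * c_i + (1 - alpha) * t_i))
    return scores

def optimisation(scores):
    """Sort the quality values in descending order."""
    return sorted(scores, reverse=True)

def select_compo(components, weights, alpha, Q_max, C_max, T_max):
    """Pick the most optimized component and its ranked quality values."""
    scores = evaluation(components, weights, alpha, Q_max, C_max, T_max)
    best = max(range(len(scores)), key=scores.__getitem__)
    return components[best], optimisation(scores)
```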
Quality model, inspired by the ISO 9126 model and by the software quality defined by Jérémie Grodziski.
References
A. Bouramoul. 2011. Recherche d'information contextuelle et sémantique sur le web. Thèse, Université Mentouri de Constantine, Faculté des Sciences de l'Ingénieur, Département d'Informatique, p. 153.
Bart George. 2007. Un processus de sélection de composants logiciels multi-niveaux. Thèse, Université de Bretagne Sud.
Bart George, R. Fleurquin, S. Sadou, and H. Sahraoui. 2010. Un mécanisme de sélection de composants logiciels. Juillet 2010.
Boumedyen Taibi and Jean-Philippe Waaub. 2015. L'approche multicritère et la prise de décision dans les entreprises publiques, le cas de l'Algérie. Faculté des sciences économiques, Université de Tlemcen, Tlemcen 13000, Algérie; GERAD & Département de géographie, Université du Québec à Montréal (UQAM), Montréal (Québec), Canada, H3C 3P8. Avril 2015.
E. Rames. 1991. Sur la réutilisation de composants logiciels : classification et recherche. Thèse, Toulouse 3.
J. Pande, C. J. Garcia, and D. Pant. 2013. Optimal component selection for component based software development using pliability metric. ACM SIGSOFT Software Engineering Notes, January 2013.
Karine Mordal, Jannik Laval, and Stéphane Ducasse. 2011. Modèles de mesure de la qualité des logiciels. November 7, 2011.
E. Kornyshova, R. Deneckère, and C. Rolland. 2011. Method families concept: Application to decision-making methods. In Int. Conf. EMMSAD, 413-427, LNBIP 81, Springer.
L. Yessad. 2012. Sélection sémantique de composants logiciels basée sur des critères de qualité de service. Thèse, Université Mentouri-Constantine.
N. Sadana, S. Dhaiya, and M. S. Ahuja. 2014. A metric for assessing reusability of software components. International Journal of Computer Application, Issue 4, Volume 1, February 2014.
Sofiane Batata. 2011. Moteur de recherche pour la sélection de composants logiciels. Ecole Nationale Supérieure d'Informatique (Ex. INI).
Romain Perriot, Jérémy Pfeifer, Laurent d'Orazio, Bruno Bachelet, Sandro Bimonte, and Jérôme Darmont. 2014. Modèles de coût pour la sélection de vues matérialisées dans le nuage, application aux services Amazon EC2 et S3. Clermont Université, CNRS, Université Blaise Pascal, LIMOS UMR 6158; IRSTEA, UR TSCF, Clermont-Ferrand; ERIC Lyon 2, Université de Lyon. Archives ouvertes, p. 15.
[
"A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations",
"A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations"
] | [
"Peter D Turney peter.turney@nrc-cnrc.gc.ca \nNational Research Council of Canada Institute for Information Technology\nM50 Montreal RoadK1A 0R6OttawaOntarioCanada\n"
] | [
"National Research Council of Canada Institute for Information Technology\nM50 Montreal RoadK1A 0R6OttawaOntarioCanada"
] | [
"Proceedings of the 22nd International Conference on Computational Linguistics"
] | Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology. | 10.3115/1599081.1599195 | null | 7,898,033 | 0809.0124 | d8f814ea3f3985a8be60daf5ec87a9c73a863624 |
A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations
Manchester. Copyright Coling 2008. August 2008
Peter D Turney peter.turney@nrc-cnrc.gc.ca
National Research Council of Canada Institute for Information Technology
M50 Montreal Road, K1A 0R6, Ottawa, Ontario, Canada
A Uniform Approach to Analogies, Synonyms, Antonyms, and Associations
Proceedings of the 22nd International Conference on Computational Linguistics
the 22nd International Conference on Computational Linguistics, Manchester, Coling 2008, August 2008
Recognizing analogies, synonyms, antonyms, and associations appear to be four distinct tasks, requiring distinct NLP algorithms. In the past, the four tasks have been treated independently, using a wide variety of algorithms. These four semantic classes, however, are a tiny sample of the full range of semantic phenomena, and we cannot afford to create ad hoc algorithms for each semantic phenomenon; we need to seek a unified approach. We propose to subsume a broad range of phenomena under analogies. To limit the scope of this paper, we restrict our attention to the subsumption of synonyms, antonyms, and associations. We introduce a supervised corpus-based machine learning algorithm for classifying analogous word pairs, and we show that it can solve multiple-choice SAT analogy questions, TOEFL synonym questions, ESL synonym-antonym questions, and similar-associated-both questions from cognitive psychology.
Introduction
A pair of words (petrify:stone) is analogous to another pair (vaporize:gas) when the semantic relations between the words in the first pair are highly similar to the relations in the second pair. Two words (levied and imposed) are synonymous in a context (levied a tax) when they can be interchanged (imposed a tax), they are antonymous when they have opposite meanings (black and white), and they are associated when they tend to co-occur (doctor and hospital).

© 2008, National Research Council of Canada (NRC). Licensed to the Coling 2008 Organizing Committee for publication in Coling 2008 and for re-publishing in any form or medium.
On the surface, it appears that these are four distinct semantic classes, requiring distinct NLP algorithms, but we propose a uniform approach to all four. We subsume synonyms, antonyms, and associations under analogies. In essence, we say that X and Y are antonyms when the pair X:Y is analogous to the pair black:white, X and Y are synonyms when they are analogous to the pair levied:imposed, and X and Y are associated when they are analogous to the pair doctor:hospital.
There is past work on recognizing analogies (Reitman, 1965), synonyms (Landauer and Dumais, 1997), antonyms (Lin et al., 2003), and associations (Lesk, 1969), but each of these four tasks has been examined separately, in isolation from the others. As far as we know, the algorithm proposed here is the first attempt to deal with all four tasks using a uniform approach. We believe that it is important to seek NLP algorithms that can handle a broad range of semantic phenomena, because developing a specialized algorithm for each phenomenon is a very inefficient research strategy.
It might seem that a lexicon, such as WordNet (Fellbaum, 1998), contains all the information we need to handle these four tasks. However, we prefer to take a corpus-based approach to semantics. Veale (2004) used WordNet to answer 374 multiple-choice SAT analogy questions, achieving an accuracy of 43%, but the best corpus-based approach attains an accuracy of 56% (Turney, 2006). Another reason to prefer a corpus-based approach to a lexicon-based approach is that the former requires less human labour, and thus it is easier to extend to other languages.
In Section 2, we describe our algorithm for recognizing analogies. We use a standard supervised machine learning approach, with feature vectors based on the frequencies of patterns in a large corpus. We use a support vector machine (SVM) to learn how to classify the feature vectors (Platt, 1998;Witten and Frank, 1999).
Section 3 presents four sets of experiments. We apply our algorithm for recognizing analogies to multiple-choice analogy questions from the SAT college entrance test, multiple-choice synonym questions from the TOEFL (test of English as a foreign language), ESL (English as a second language) practice questions for distinguishing synonyms and antonyms, and a set of word pairs that are labeled similar, associated, and both, developed for experiments in cognitive psychology.
We discuss the results of the experiments in Section 4. The accuracy of the algorithm is competitive with other systems, but the strength of the algorithm is that it is able to handle all four tasks, with no tuning of the learning parameters to the particular task. It performs well, although it is competing against specialized algorithms, developed for single tasks.
Related work is examined in Section 5 and limitations and future work are considered in Section 6. We conclude in Section 7.
Classifying Analogous Word Pairs
An analogy, A:B::C:D, asserts that A is to B as C is to D; for example, traffic:street::water:riverbed asserts that traffic is to street as water is to riverbed; that is, the semantic relations between traffic and street are highly similar to the semantic relations between water and riverbed. We may view the task of recognizing word analogies as a problem of classifying word pairs (see Table 1). We approach this as a standard classification problem for supervised machine learning. The algorithm takes as input a training set of word pairs with class labels and a testing set of word pairs without labels. Each word pair is represented as a vector in a feature space and a supervised learning algorithm is used to classify the feature vectors. The elements in the feature vectors are based on the frequencies of automatically defined patterns in a large corpus. The output of the algorithm is an assignment of labels to the word pairs in the testing set. For some of the experiments, we select a unique label for each word pair; for other experiments, we assign probabilities to each possible label for each word pair.
For a given word pair, such as mason:stone, the first step is to generate morphological variations, such as masons:stones. In the following experiments, we use morpha (morphological analyzer) and morphg (morphological generator) for morphological processing (Minnen et al., 2001). 1 The second step is to search in a large corpus for all phrases of the following form: "[0 to 1 words] X [0 to 3 words] Y [0 to 1 words]" In this template, X:Y consists of morphological variations of the given word pair, in either order; for example, mason:stone, stone:mason, masons:stones, and so on. A typical phrase for mason:stone would be "the mason cut the stone with". We then normalize all of the phrases that are found, by using morpha to remove suffixes.
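As a rough illustration, the search template can be rendered as a regular expression over whitespace-delimited tokens (a sketch only; the actual system retrieves phrases with the Wumpus search engine rather than with regular expressions):

import re

# Sketch of the template "[0 to 1 words] X [0 to 3 words] Y [0 to 1 words]"
# for one ordering of one morphological variant (X=mason, Y=stone).
W = r"\S+"  # one whitespace-delimited token

def template(x, y):
    return re.compile(
        rf"(?:{W} )?{re.escape(x)}(?: {W}){{0,3}} {re.escape(y)}(?: {W})?"
    )

m = template("mason", "stone").search("the mason cut the stone with care")
print(m.group(0))  # "the mason cut the stone with"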
The template we use here is similar to Turney (2006), but we have added extra context words before the X and after the Y . Our morphological processing also differs from Turney (2006). In the following experiments, we search in a corpus of 5 × 10^10 words (about 280 GB of plain text), consisting of web pages gathered by a web crawler. 2 To retrieve phrases from the corpus, we use Wumpus (Büttcher and Clarke, 2005), an efficient search engine for passage retrieval from large corpora. 3 The next step is to generate patterns from all of the phrases that were found for all of the input word pairs (from both the training and testing sets). To generate patterns from a phrase, we replace the given word pairs with variables, X and Y , and we replace the remaining words with a wild card symbol (an asterisk) or leave them as they are.
For example, the phrase "the mason cut the stone with" yields the patterns "the X cut * Y with", "* X * the Y *", and so on. If a phrase contains n words, then it yields 2^(n−2) patterns.
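A minimal reimplementation of this pattern generation (our own sketch of the described procedure):

from itertools import product

# For each context word, either keep it or replace it with the wild
# card "*"; X and Y are fixed, so a phrase of n words yields 2^(n-2)
# patterns.
def patterns(phrase, x, y):
    tokens = ["X" if t == x else "Y" if t == y else t
              for t in phrase.split()]
    slots = [i for i, t in enumerate(tokens) if t not in ("X", "Y")]
    for choice in product([False, True], repeat=len(slots)):
        out = list(tokens)
        for keep, i in zip(choice, slots):
            if not keep:
                out[i] = "*"
        yield " ".join(out)

pats = list(patterns("the mason cut the stone with", "mason", "stone"))
print(len(pats))   # 16 = 2^(6-2)
print(pats[0])     # "* X * * Y *"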
Each pattern corresponds to a feature in the feature vectors that we will generate. Since a typical input set of word pairs yields millions of patterns, we need to use feature selection, to reduce the number of patterns to a manageable quantity. For each pattern, we count the number of input word pairs that generated the pattern. For example, "* X cut * Y *" is generated by both mason:stone and carpenter:wood. We then sort the patterns in descending order of the number of word pairs that generated them. If there are N input word pairs (and thus N feature vectors, including both the training and testing sets), then we select the top kN patterns and drop the remainder. In the following experiments, k is set to 20. The algorithm is not sensitive to the precise value of k.
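A small sketch of this selection step (counting, for each pattern, the number of distinct input pairs that generated it, then keeping the top kN):

from collections import defaultdict

# pair_patterns maps each input word pair to the set of patterns it
# generated; we keep the k*N patterns shared by the most pairs.
def select_features(pair_patterns, k=20):
    support = defaultdict(set)
    for pair, pats in pair_patterns.items():
        for p in pats:
            support[p].add(pair)
    n = len(pair_patterns)
    ranked = sorted(support, key=lambda p: len(support[p]), reverse=True)
    return ranked[: k * n]

pp = {("mason", "stone"): {"* X cut * Y *", "the X cut * Y with"},
      ("carpenter", "wood"): {"* X cut * Y *"}}
print(select_features(pp, k=1)[0])  # "* X cut * Y *"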
The reasoning behind the feature selection algorithm is that shared patterns make more useful features than rare patterns. The number of features (kN) depends on the number of word pairs (N), because, if we have more feature vectors, then we need more features to distinguish them. Turney (2006) also selects patterns based on the number of pairs that generate them, but the number of selected patterns is a constant (8000), independent of the number of input word pairs.

The next step is to generate feature vectors, one vector for each input word pair. Each of the N feature vectors has kN elements, one element for each selected pattern. The value of an element in a vector is given by the logarithm of the frequency in the corpus of the corresponding pattern for the given word pair. For example, suppose the given pair is mason:stone and the pattern is "* X cut * Y *". We look at the normalized phrases that we collected for mason:stone and we count how many match this pattern. If f phrases match the pattern, then the value of this element in the feature vector is log(f + 1) (we add 1 because log(0) is undefined). Each feature vector is then normalized to unit length. The normalization ensures that features in vectors for high-frequency word pairs (traffic:street) are comparable to features in vectors for low-frequency word pairs (water:riverbed).

Now that we have a feature vector for each input word pair, we can apply a standard supervised learning algorithm. In the following experiments, we use a sequential minimal optimization (SMO) support vector machine (SVM) with a radial basis function (RBF) kernel (Platt, 1998), as implemented in Weka (Waikato Environment for Knowledge Analysis) (Witten and Frank, 1999). 4 The algorithm generates probability estimates for each class by fitting logistic regression models to the outputs of the SVM. We disable the normalization option in Weka, since the vectors are already normalized to unit length. We chose the SMO RBF algorithm because it is fast, robust, and it easily handles large numbers of features.
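To make the vector construction concrete, a minimal sketch (the log(f + 1) weighting and the unit-length scaling are as described above; the surrounding scaffolding is ours):

import math

# match_counts[i] = number of the pair's phrases matching selected
# pattern i; the resulting vector is scaled to unit length.
def feature_vector(match_counts):
    vec = [math.log(f + 1) for f in match_counts]
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec] if norm > 0 else vec

print(feature_vector([3, 0, 1]))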
For convenience, we will refer to the above algorithm as PairClass. In the following experiments, PairClass is applied to each of the four problems with no adjustments or tuning to the specific problems. Some work is required to fit each problem into the general framework of PairClass (supervised classification of word pairs) but the core algorithm is the same in each case.
Experiments
This section presents four sets of experiments, with analogies, synonyms, antonyms, and associations. We explain how each task is treated as a problem of classifying analogous word pairs, we give the experimental results, and we discuss past work on each of the four tasks.
SAT Analogies
In this section, we apply PairClass to the task of recognizing analogies. To evaluate the performance, we use a set of 374 multiple-choice questions from the SAT college entrance exam. Table 2 shows a typical question. The target pair is called the stem. The task is to select the choice pair that is most analogous to the stem pair.

The problem of recognizing word analogies was first attempted with a system called Argus (Reitman, 1965), using a small hand-built semantic network with a spreading activation algorithm. Turney et al. (2003) used a combination of 13 independent modules. Veale (2004) used a spreading activation algorithm with WordNet (in effect, treating WordNet as a semantic network). Turney (2006) used a corpus-based algorithm.
We may view Table 2 as a binary classification problem, in which mason:stone and carpenter:wood are positive examples and the remaining word pairs are negative examples. The difficulty is that the labels of the choice pairs must be hidden from the learning algorithm. That is, the training set consists of one positive example (the stem pair) and the testing set consists of five unlabeled examples (the five choice pairs). To make this task more tractable, we randomly choose a stem pair from one of the 373 other SAT analogy questions, and we assume that this new stem pair is a negative example, as shown in Table 3.
To answer the SAT question, we use PairClass to estimate the probability that each testing example is positive, and we guess the testing example with the highest probability. Learning from a training set with only one positive example and one negative example is difficult, since the learned model can be highly unstable. To increase the stability, we repeat the learning process 10 times, using a different randomly chosen negative training example each time. For each testing word pair, the 10 probability estimates are averaged together. This is a form of bagging (Breiman, 1996).
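A sketch of this bagging procedure (train_and_score is a hypothetical wrapper around the SVM training and probability estimation, not part of the paper):

import random

# Ten models, each trained on the stem (positive) plus one random stem
# from another question (negative); probabilities averaged per choice.
def answer(stem, choices, other_stems, train_and_score, runs=10):
    totals = [0.0] * len(choices)
    for _ in range(runs):
        negative = random.choice(other_stems)
        probs = train_and_score(positive=stem, negative=negative,
                                test=choices)
        totals = [t + p for t, p in zip(totals, probs)]
    return max(range(len(choices)), key=lambda i: totals[i])

# Toy scorer that always prefers the second choice:
guess = answer(("mason", "stone"),
               [("teacher", "chalk"), ("carpenter", "wood")],
               [("tutor", "pupil")],
               lambda positive, negative, test: [0.2, 0.8])
print(guess)  # 1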
PairClass attains an accuracy of 52.1%. For comparison, the ACL Wiki lists 12 previously published results with the 374 SAT analogy questions. 5 Only 2 of the 12 algorithms have higher accuracy. The best previous result is an accuracy of 56.1% (Turney, 2006). Random guessing would yield an accuracy of 20%. The average senior high school student achieves 57% correct (Turney, 2006).
TOEFL Synonyms
Now we apply PairClass to the task of recognizing synonyms, using a set of 80 multiple-choice synonym questions from the TOEFL (test of English as a foreign language). A sample question is shown in Table 4. The task is to select the choice word that is most similar in meaning to the stem word.
Stem: levied
Choices: (a) imposed (b) believed (c) requested (d) correlated
Solution: (a) imposed

Table 4: An example of a question from the 80 TOEFL questions.

Synonymy can be viewed as a high degree of semantic similarity. The most common way to measure semantic similarity is to measure the distance between words in WordNet (Resnik, 1995; Jiang and Conrath, 1997; Hirst and St-Onge, 1998). Corpus-based measures of word similarity are also common (Lesk, 1969; Landauer and Dumais, 1997; Turney, 2001).
We may view Table 4 as a binary classification problem, in which the pair levied:imposed is a positive example of the class synonymous and the other possible pairings are negative examples, as shown in Table 5.
Word pair          Class label
levied:imposed     positive
levied:believed    negative
levied:requested   negative
levied:correlated  negative

Table 5: How to fit a TOEFL question into the framework of supervised pair classification.
The 80 TOEFL questions yield 320 (80 × 4) word pairs, 80 labeled positive and 240 labeled negative. We apply PairClass to the word pairs using ten-fold cross-validation. In each random fold, 90% of the pairs are used for training and 10% are used for testing. For each fold, the model that is learned from the training set is used to assign probabilities to the pairs in the testing set. With ten separate folds, the ten non-overlapping testing sets cover the whole dataset. Our guess for each TOEFL question is the choice with the highest probability of being positive, when paired with the corresponding stem.
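For illustration, the cross-validation bookkeeping could look as follows (a sketch using scikit-learn's SVC as a stand-in for Weka's SMO RBF classifier; labels are assumed to be 0/1 integers):

import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import SVC

# X: (320 x kN) feature matrix, y: 0/1 labels; the ten non-overlapping
# test folds together cover the whole dataset.
def cv_probabilities(X, y, seed=0):
    probs = np.zeros(len(y))
    for train, test in KFold(10, shuffle=True, random_state=seed).split(X):
        clf = SVC(kernel="rbf", probability=True).fit(X[train], y[train])
        pos = list(clf.classes_).index(1)
        probs[test] = clf.predict_proba(X[test])[:, pos]
    return probs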
PairClass attains an accuracy of 76.2%. For comparison, the ACL Wiki lists 15 previously published results with the 80 TOEFL synonym questions. 6 Of the 15 algorithms, 8 have higher accuracy and 7 have lower. The best previous result is an accuracy of 97.5% (Turney et al., 2003), obtained using a hybrid of four different algorithms. Random guessing would yield an accuracy of 25%. The average foreign applicant to a US university achieves 64.5% correct (Landauer and Dumais, 1997).
Synonyms and Antonyms
The task of classifying word pairs as either synonyms or antonyms readily fits into the framework of supervised classification of word pairs. Table 6 shows some examples from a set of 136 ESL (English as a second language) practice questions that we collected from various ESL websites.
Word pair                  Class label
galling:irksome            synonyms
yield:bend                 synonyms
naive:callow               synonyms
advise:suggest             synonyms
dissimilarity:resemblance  antonyms
commend:denounce           antonyms
expose:camouflage          antonyms
unveil:veil                antonyms

Table 6: Examples of synonyms and antonyms from 136 ESL practice questions.

Lin et al. (2003) distinguish synonyms from antonyms using two patterns, "from X to Y " and "either X or Y ". When X and Y are antonyms, they occasionally appear in a large corpus in one of these two patterns, but it is very rare for synonyms to appear in these patterns. Our approach is similar to Lin et al. (2003), but we do not rely on hand-coded patterns; instead, PairClass patterns are generated automatically.
Using ten-fold cross-validation, PairClass attains an accuracy of 75.0%. Always guessing the majority class would result in an accuracy of 65.4%. The average human score is unknown and there are no previous results for comparison.

6 For more information, see TOEFL Synonym Questions (State of the art) at http://aclweb.org/aclwiki/.
Similar, Associated, and Both
A common criticism of corpus-based measures of word similarity (as opposed to lexicon-based measures) is that they are merely detecting associations (co-occurrences), rather than actual semantic similarity (Lund et al., 1995). To address this criticism, Lund et al. (1995) evaluated their algorithm for measuring word similarity with word pairs that were labeled similar, associated, or both. These labeled pairs were originally created for cognitive psychology experiments with human subjects (Chiarello et al., 1990). Lund et al. (1995) did not measure the accuracy of their algorithm on this three-class classification problem. Instead, following standard practice in cognitive psychology, they showed that their algorithm's similarity scores for the 144 word pairs were correlated with the response times of human subjects in priming tests. In a typical priming test, a human subject reads a priming word (cradle) and is then asked to complete a partial word (complete bab as baby). The time required to perform the task is taken to indicate the strength of the cognitive link between the two words (cradle and baby).
Using ten-fold cross-validation, PairClass attains an accuracy of 77.1% on the 144 word pairs. Since the three classes are of equal size, guessing the majority class and random guessing both yield an accuracy of 33.3%. The average human score is unknown and there are no previous results for comparison.
Discussion
The four experiments are summarized in Tables 8 and 9. For the first two experiments, where there are previous results, PairClass is not the best, but it performs competitively. For the second two experiments, PairClass performs significantly above the baselines. However, the strength of this approach is not its performance on any one task, but the range of tasks it can handle.
As far as we know, this is the first time a standard supervised learning algorithm has been applied to any of these four problems. The advantage of being able to cast these problems in the framework of standard supervised learning problems is that we can now exploit the huge literature on supervised learning. Past work on these problems has required implicitly coding our knowledge of the nature of the task into the structure of the algorithm. For example, the structure of the algorithm for latent semantic analysis (LSA) implicitly contains a theory of synonymy (Landauer and Dumais, 1997). The problem with this approach is that it can be very difficult to work out how to modify the algorithm if it does not behave the way we want. On the other hand, with a supervised learning algorithm, we can put our knowledge into the labeling of the feature vectors, instead of putting it directly into the algorithm. This makes it easier to guide the system to the desired behaviour.
With our approach to the SAT analogy questions, we are blurring the line between supervised and unsupervised learning, since the training set for a given SAT question consists of a single real positive example (and a single "virtual" or "simulated" negative example). In effect, a single example (mason:stone) becomes a sui generis; it constitutes a class of its own. It may be possible to apply the machinery of supervised learning to other problems that apparently call for unsupervised learning (for example, clustering or measuring similarity), by using this sui generis device.
Related Work
One of the first papers using supervised machine learning to classify word pairs was Rosario and Hearst's (2001) paper on classifying noun-modifier pairs in the medical domain. For example, the noun-modifier expression brain biopsy was classified as Procedure. Rosario and Hearst (2001) constructed feature vectors for each noun-modifier pair using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources. They then trained a neural network to distinguish 13 classes of semantic relations, such as Cause, Location, Measure, and Instrument. Nastase and Szpakowicz (2003) explored a similar approach to classifying general-domain noun-modifier pairs, using WordNet and Roget's Thesaurus as lexical resources. Turney and Littman (2005) used corpus-based features for classifying noun-modifier pairs. Their features were based on 128 hand-coded patterns. They used a nearest-neighbour learning algorithm to classify general-domain noun-modifier pairs into 30 different classes of semantic relations. Turney (2006) later addressed the same problem using 8000 automatically generated patterns.
One of the tasks in SemEval 2007 was the classification of semantic relations between nominals (Girju et al., 2007). The problem is to classify semantic relations between nouns and noun compounds in the context of a sentence. The task attracted 14 teams who created 15 systems, all of which used supervised machine learning with features that were lexicon-based, corpus-based, or both.
PairClass is most similar to the algorithm of Turney (2006), but it differs in the following ways:
• PairClass does not use a lexicon to find synonyms for the input word pairs. One of our goals in this paper is to show that a pure corpus-based algorithm can handle synonyms without a lexicon. This considerably simplifies the algorithm.
• PairClass uses a support vector machine (SVM) instead of a nearest neighbour (NN) learning algorithm.
• PairClass does not use the singular value decomposition (SVD) to smooth the feature vectors. It has been our experience that SVD is not necessary with SVMs.
• PairClass generates probability estimates, whereas Turney (2006) uses a cosine measure of similarity. Probability estimates can be readily used in further downstream processing, but cosines are less useful.
• The automatically generated patterns in PairClass are slightly more general than the patterns of Turney (2006).
• The morphological processing in PairClass (Minnen et al., 2001) is more sophisticated than in Turney (2006).

However, we believe that the main contribution of this paper is not PairClass itself, but the extension of supervised word pair classification beyond the classification of noun-modifier pairs and semantic relations between nominals, to analogies, synonyms, antonyms, and associations. As far as we know, this has not been done before.
Limitations and Future Work
The main limitation of PairClass is the need for a large corpus. Phrases that contain a pair of words tend to be more rare than phrases that contain either of the members of the pair, thus a large corpus is needed to ensure that sufficient numbers of phrases are found for each input word pair. The size of the corpus has a cost in terms of disk space and processing time. In the future, as hardware improves, this will become less of an issue, but there may be ways to improve the algorithm, so that a smaller corpus is sufficient.

Another area for future work is to apply PairClass to more tasks. WordNet includes more than a dozen semantic relations (e.g., synonyms, hyponyms, hypernyms, meronyms, holonyms, and antonyms). PairClass should be applicable to all of these relations. Other potential applications include any task that involves semantic relations, such as word sense disambiguation, information retrieval, information extraction, and metaphor interpretation.
Conclusion
In this paper, we have described a uniform approach to analogies, synonyms, antonyms, and associations, in which all of these phenomena are subsumed by analogies. We view the problem of recognizing analogies as the classification of semantic relations between words.
We believe that most of our lexical knowledge is relational, not attributional. That is, meaning is largely about relations among words, rather than properties of individual words, considered in isolation. For example, consider the knowledge encoded in WordNet: much of the knowledge in WordNet is embedded in the graph structure that connects words.
Analogies of the form A:B::C:D are called proportional analogies. These types of lower-level analogies may be contrasted with higher-level analogies, such as the analogy between the solar system and Rutherford's model of the atom (Falkenhainer et al., 1989), which are sometimes called conceptual analogies. We believe that the difference between these two types is largely a matter of complexity. A higher-level analogy is composed of many lower-level analogies. Progress with algorithms for processing lower-level analogies will eventually contribute to algorithms for higher-level analogies.
The idea of subsuming a broad range of semantic phenomena under analogies has been suggested by many researchers. Minsky (1986) wrote, "How do we ever understand anything? Almost always, I think, by using one or another kind of analogy." Hofstadter (2007) claimed, "all meaning comes from analogies." In NLP, analogical algorithms have been applied to machine translation (Lepage and Denoual, 2005), morphology (Lepage, 1998), and semantic relations (Turney and Littman, 2005). Analogy provides a framework that has the potential to unify the field of semantics. This paper is a small step towards that goal.
Stem: mason:stone
Choices: (a) teacher:chalk
         (b) carpenter:wood
         (c) soldier:gun
         (d) photograph:camera
         (e) book:word
Solution: (b) carpenter:wood

Table 2: An example of a question from the 374 SAT analogy questions.
Word pair          Train or test  Class label
mason:stone        train          positive
tutor:pupil        train          negative
teacher:chalk      test           hidden
carpenter:wood     test           hidden
soldier:gun        test           hidden
photograph:camera  test           hidden
book:word          test           hidden

Table 3: How to fit a SAT analogy question into the framework of supervised pair classification.
Table 7 shows some examples from this collection of 144 word pairs (48 pairs in each of the three classes).

Word pair     Class label
table:bed     similar
music:art     similar
hair:fur      similar
house:cabin   similar
cradle:baby   associated
mug:beer      associated
camel:hump    associated
cheese:mouse  associated
ale:beer      both
uncle:aunt    both
pepper:salt   both
frown:smile   both

Table 7: Examples of word pairs labeled similar, associated, or both.
Table 8: Summary of the four tasks. See Section 3 for explanations.

Experiment                     Accuracy  Best previous  Human    Baseline  Rank
SAT Analogies                  52.1%     56.1%          57.0%    20.0%     2 higher out of 12
TOEFL Synonyms                 76.2%     97.5%          64.5%    25.0%     8 higher out of 15
Synonyms and Antonyms          75.0%     none           unknown  65.4%     none
Similar, Associated, and Both  77.1%     none           unknown  33.3%     none

Table 9: Summary of experimental results. See Section 3 for explanations.
1 http://www.informatics.susx.ac.uk/research/groups/nlp/carroll/morph.html
2 The corpus was collected by Charles Clarke, University of Waterloo. We can provide copies on request.
3 http://www.wumpus-search.org/
4 http://www.cs.waikato.ac.nz/ml/weka/
5 For more information, see SAT Analogy Questions (State of the art) at http://aclweb.org/aclwiki/.
Acknowledgements

Thanks to Joel Martin and the anonymous reviewers of Coling 2008 for their helpful comments.
Breiman, Leo. 1996. Bagging predictors. Machine Learning, 24(2):123-140.
Büttcher, Stefan and Charles Clarke. 2005. Efficiency vs. effectiveness in terabyte-scale information retrieval. In Proceedings of the 14th Text REtrieval Conference (TREC 2005), Gaithersburg, MD.
Chiarello, Christine, Curt Burgess, Lorie Richards, and Alma Pollock. 1990. Semantic and associative priming in the cerebral hemispheres: Some words do, some words don't ... sometimes, some places. Brain and Language, 38:75-104.
Falkenhainer, Brian, Kenneth D. Forbus, and Dedre Gentner. 1989. The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1):1-63.
Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.
Girju, Roxana, Preslav Nakov, Vivi Nastase, Stan Szpakowicz, Peter Turney, and Deniz Yuret. 2007. Semeval-2007 task 04: Classification of semantic relations between nominals. In SemEval 2007, pages 13-18, Prague, Czech Republic.
Hirst, Graeme and David St-Onge. 1998. Lexical chains as representations of context for the detection and correction of malapropisms. In Fellbaum, Christiane, editor, WordNet: An Electronic Lexical Database, pages 305-332. MIT Press.
Hofstadter, Douglas. 2007. I Am a Strange Loop. Basic Books.
Jiang, Jay J. and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In ROCLING X, pages 19-33, Taipei, Taiwan.
Landauer, Thomas K. and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240.
Lepage, Yves and Etienne Denoual. 2005. Purest ever example-based machine translation: Detailed presentation and assessment. Machine Translation, 19(3):251-282.
Lepage, Yves. 1998. Solving analogies on words: An algorithm. In Proceedings of the 36th Annual Conference of the Association for Computational Linguistics, pages 728-735.
Lesk, Michael E. 1969. Word-word associations in document retrieval systems. American Documentation, 20(1):27-38.
Lin, Dekang, Shaojun Zhao, Lijuan Qin, and Ming Zhou. 2003. Identifying synonyms among distributionally similar words. In IJCAI-03, pages 1492-1493.
Lund, Kevin, Curt Burgess, and Ruth Ann Atchley. 1995. Semantic and associative priming in high-dimensional semantic space. In Proceedings of the 17th Annual Conference of the Cognitive Science Society, pages 660-665.
Minnen, Guido, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207-223.
Minsky, Marvin. 1986. The Society of Mind. Simon & Schuster, New York, NY.
Nastase, Vivi and Stan Szpakowicz. 2003. Exploring noun-modifier semantic relations. In Fifth International Workshop on Computational Semantics (IWCS-5), pages 285-301, Tilburg, The Netherlands.
Platt, John C. 1998. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods: Support Vector Learning, pages 185-208. MIT Press, Cambridge, MA, USA.
Reitman, Walter R. 1965. Cognition and Thought: An Information Processing Approach. John Wiley and Sons, New York, NY.
Resnik, Philip. 1995. Using information content to evaluate semantic similarity in a taxonomy. In IJCAI-95, pages 448-453, San Mateo, CA. Morgan Kaufmann.
Rosario, Barbara and Marti Hearst. 2001. Classifying the semantic relations in noun-compounds via a domain-specific lexical hierarchy. In EMNLP-01, pages 82-90.
Turney, Peter D. and Michael L. Littman. 2005. Corpus-based learning of analogies and semantic relations. Machine Learning, 60(1-3):251-278.
Turney, Peter D., Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems. In RANLP-03, pages 482-489, Borovets, Bulgaria.
Turney, Peter D. 2001. Mining the Web for synonyms: PMI-IR versus LSA on TOEFL. In Proceedings of the Twelfth European Conference on Machine Learning, pages 491-502, Berlin. Springer.
Turney, Peter D. 2006. Similarity of semantic relations. Computational Linguistics, 32(3):379-416.
Veale, Tony. 2004. WordNet sits the SAT: A knowledge-based approach to lexical analogy. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), pages 606-612, Valencia, Spain.
Witten, Ian H. and Eibe Frank. 1999. Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations. Morgan Kaufmann, San Francisco.
| [] |
[
"To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging",
"To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging"
] | [
"Rob Van Der Goot r.van.der.goot@rug.nl \nCenter for Language and Cognition\nUniversity of Groningen\nThe Netherlands\n",
"Barbara Plank b.plank@rug.nl \nCenter for Language and Cognition\nUniversity of Groningen\nThe Netherlands\n",
"Malvina Nissim m.nissim@rug.nl \nCenter for Language and Cognition\nUniversity of Groningen\nThe Netherlands\n"
] | [
"Center for Language and Cognition\nUniversity of Groningen\nThe Netherlands",
"Center for Language and Cognition\nUniversity of Groningen\nThe Netherlands",
"Center for Language and Cognition\nUniversity of Groningen\nThe Netherlands"
] | [
"Proceedings of the 3rd Workshop on Noisy User-generated Text"
] | Does normalization help Part-of-Speech (POS) tagging accuracy on noisy, noncanonical data? To the best of our knowledge, little is known on the actual impact of normalization in a real-world scenario, where gold error detection is not available. We investigate the effect of automatic normalization on POS tagging of tweets. We also compare normalization to strategies that leverage large amounts of unlabeled data kept in its raw form. Our results show that normalization helps, but does not add consistently beyond just word embedding layer initialization. The latter approach yields a tagging model that is competitive with a Twitter state-of-the-art tagger. | 10.18653/v1/w17-4404 | [
"https://www.aclweb.org/anthology/W17-4404.pdf"
] | 5,476,320 | 1707.05116 | f2922ccd8d2b9a3c282419bd923dafe9f2b6e951 |
To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging
Association for Computational Linguistics. Copyright Association for Computational Linguistics, September 7, 2017.
Rob Van Der Goot r.van.der.goot@rug.nl
Center for Language and Cognition
University of Groningen
The Netherlands
Barbara Plank b.plank@rug.nl
Center for Language and Cognition
University of Groningen
The Netherlands
Malvina Nissim m.nissim@rug.nl
Center for Language and Cognition
University of Groningen
The Netherlands
To Normalize, or Not to Normalize: The Impact of Normalization on Part-of-Speech Tagging
Proceedings of the 3rd Workshop on Noisy User-generated Text
the 3rd Workshop on Noisy User-generated Text, Copenhagen, Denmark. Association for Computational Linguistics, September 7, 2017.
Does normalization help Part-of-Speech (POS) tagging accuracy on noisy, non-canonical data? To the best of our knowledge, little is known on the actual impact of normalization in a real-world scenario, where gold error detection is not available. We investigate the effect of automatic normalization on POS tagging of tweets. We also compare normalization to strategies that leverage large amounts of unlabeled data kept in its raw form. Our results show that normalization helps, but does not add consistently beyond just word embedding layer initialization. The latter approach yields a tagging model that is competitive with a Twitter state-of-the-art tagger.
Introduction
Non-canonical data poses a series of challenges to Natural Language Processing, as reflected in large performance drops documented in a variety of tasks, e.g., on POS tagging (Gimpel et al., 2011;Hovy et al., 2014), parsing (McClosky, 2010;Foster et al., 2011) and named entity recognition (Ritter et al., 2011). In this paper we focus on POS tagging and on a particular source of non-canonical language, namely Twitter data.
One obvious way to tackle the problem of processing non-canonical data is to build taggers that are specifically tailored to such text. A prime example is the ARK POS tagger, designed especially to process English Twitter data (Gimpel et al., 2011;Owoputi et al., 2013), on which it achieves state-of-the-art results. One drawback of this approach is that non-canonical data is not all of the same kind, so that for non-canonical non-Twitter data or even collections of Twitter samples from different times, typically a new specifically dedicated tool needs to be created. The alternative route is to take a general purpose state-of-the-art POS tagger and adapt it to successfully tag non-canonical data. In the case of Twitter, one way to go about this is lexical normalization. It is the task of detecting "ill-formed" words (Han and Baldwin, 2011) and replacing them with their canonical counterpart. To illustrate why this might help, consider the following tweet: "new pix comming tomoroe". An off-the-shelf system such as the Stanford NLP suite 1 makes several mistakes on the raw input, e.g., the verb 'comming' as well as the plural noun 'pix' are tagged as singular noun. Instead, its normalized form is analyzed correctly, as shown in Figure 1.
While being a promising direction, we see at least two issues with the assessment of normalization as a successful step in POS tagging non-canonical text. Firstly, normalization experiments are usually carried out assuming that the tokens to be normalized are already detected (gold error detection). Thus little is known on how normalization impacts tagging accuracy in a real-world scenario (not assuming gold error detection). Secondly, normalization is one way to go about processing non-canonical data, but not the only one (Eisenstein, 2013; Plank, 2016). Indeed, alternative approaches leverage the abundance of unlabeled data kept in its raw form. For instance, such data can be exploited with semi-supervised learning methods (Abney, 2007). The advantage of this approach is that portability could be successful also towards domains where normalization is not necessary or crucial. These observations lead us to the following research questions:

Q1 In a real-world setting, without assuming gold error detection, does normalization help in POS tagging of tweets?
Q2 In the context of POS tagging, is it more beneficial to normalize input data or is it better to work with raw data and exploit large amounts of it in a semi-supervised setting?
Q3 To what extent are normalization and semisupervised approaches complementary?
To answer these questions, we run a battery of experiments that evaluate different approaches. Specifically:
1. We study the impact of normalization on POS tagging in a realistic setup, i.e., we compare normalizing only unknown words, or words for which we know they need correction; we compare this with a fully automatic normalization model (Section 3).
2. We evaluate the impact of leveraging large amounts of unlabeled data using two approaches: a) deriving various word representations, and studying their effect for model initialization (Section 4.1); b) applying a bootstrapping approach based on selftraining to automatically derive labeled training data, evaluating a range of a-priori data selection mechanisms (Section 4.2).
3. We experiment with combining the most promising methods from both directions, to gain insights on their potential complementarity (Section 5).
Experimental Setup
We run two main sets of POS tagging experiments.
In the first one, we use normalization in a variety of settings (see Section 3). In the second one, we leverage large amounts of unlabeled data that does not undergo any normalization but is used as training in a semi-supervised setting (Section 4). For all experiments we use existing datasets as well as newly created resources, cf. Section 2.1. The POS model used is described in Section 2.2.

Figure 2 (caption fragment): Gray area: no gold normalization layer available.
Data
The annotated datasets used in this study originate from two sources: Owoputi et al. (2013) and Han and Baldwin (2011), which we will refer to as OWOPUTI and LEXNORM, respectively. All datasets used in this study are annotated with the 26 Twitter tags as described in (Gimpel et al., 2011). 2 OWOPUTI was originally annotated with POS labels, whereas LEXNORM was solely annotated for normalization. Li and Liu (2015) added a POS tag layer to the LEXNORM corpus, and a normalization layer to 798 Tweets from OWOPUTI, which we split into a separate DEV and TEST part of 249 and 549 Tweets, respectively, keeping the original POS labels. We use DEV throughout all experiments during development, and test our final best system on the held-out test sets (both containing 549 tweets). An illustration of the data is given in Figure 2.

For the different improvements to our baseline tagger, we need raw data from the target domain (Twitter). In addition, the normalization model needs unlabeled canonical data. We use a snapshot of English Wikipedia as unlabeled canonical data source. To get raw data for the social media domain, we collected Tweets during the whole year of 2016 by means of the Twitter API. We only collected Tweets containing one of the 100 most frequent words in the Oxford English Corpus 3 as a rough language filter. This resulted in a dataset of 760,744,676 English Tweets. We do some very basic pre-processing in which we replace urls and usernames by <URL> and <USERNAME>, and remove duplicate tweets. Because of different casing strategies, we always apply a simple post-processing step to 'rt' (retweet) tokens.
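A minimal sketch of this pre-processing (only the placeholder tokens and the 'rt' handling are stated in the text; the regular expressions and the stand-in frequency list are our assumptions):

import re

FREQUENT = {"the", "of", "and", "a", "to"}  # stand-in for the 100-word list

def preprocess(tweets):
    seen, out = set(), []
    for t in tweets:
        if not FREQUENT & set(t.lower().split()):
            continue  # rough language filter
        t = re.sub(r"https?://\S+", "<URL>", t)
        t = re.sub(r"@\w+", "<USERNAME>", t)
        t = re.sub(r"\bRT\b", "rt", t)  # unify retweet-marker casing
        if t not in seen:               # remove duplicate tweets
            seen.add(t)
            out.append(t)
    return out

print(preprocess(["RT @bob check http://x.co the pix", "totally unrelated"]))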
Model
We use BILTY, an off-the-shelf bi-directional Long Short-Term Memory (bi-LSTM) tagger which utilizes both word and character embeddings. The tagger is trained on 1,576 training tweets (Section 2.1). We tune the parameters of the POS tagger on the development set to derive the following hyperparameter setup, which we use throughout the rest of the experiments: 10 epochs, 1 bi-LSTM layer, 100 input dimensions for words, 256 for characters, σ=0.2, constant embeddings initializer, Adam trainer, and updating embeddings during backpropagation. 4
To Normalize
First we evaluate the impact of normalization on the POS tagger.
Model
We use an in-house developed normalization model (van der Goot and van Noord, 2017). 5 The model is based on the assumption that different normalization problems require different handling. First, since unintentional disfluencies can often be corrected by the use of a spell checker, the normalization model exploits Aspell. 6 Second, since intentional disfluencies typically have a much larger edit distance, the normalization system uses word embeddings (Mikolov et al., 2013); 7 words close to the non-canonical word in the vector space are considered potential normalization candidates. On top of that, the model uses a lookup list generated from the training data, which works especially well for slang.

Features originating from the ranking are combined with uni- and bi-gram probabilities from Wikipedia data as well as from raw Tweets (Section 2.1). A random forest classifier (Breiman, 2001) is then used to rank the candidates for each word. Note that the original word is also a candidate; this enables the model to handle error detection, which is not always the case in models of previous work.

4 Adam was consistently better than sgd on this small training dataset. More LSTM layers lowered performance.
5 Available at: https://bitbucket.org/robvanderg/monoise
6 www.aspell.net
7 Using the tweets from Section 2.1 and the following parameters: -size 400 -window 1 -negative 5 -sample 1e-4 -iter 5
We train the normalization model on 2,577 tweets from Li and Liu (2014). Our model (van der Goot and van Noord, 2017) achieves state-of-the-art performance on the erroneous tokens (using gold error detection) on the LexNorm dataset (Han and Baldwin, 2011) as well as state-of-the-art results on another corpus which is usually benchmarked without assuming gold error detection (Baldwin et al., 2015). We refer the reader to the paper (van der Goot and van Noord, 2017) for further details.
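For illustration, the candidate generation and ranking could be sketched as follows (all helper names and the stub scorer are our own; in the actual model a trained random forest scores the candidates):

# Each token keeps itself as a candidate, which is how error detection
# is folded into ranking; further candidates come from a spell checker,
# embedding neighbours, and a lookup list.
def normalize(word, spell, neighbours, lookup, score):
    candidates = ({word} | set(spell(word)) | set(neighbours(word))
                  | set(lookup.get(word, [])))
    return max(candidates, key=lambda c: score(word, c))

# Toy usage with stub candidate sources and a stub scorer:
print(normalize("tomoroe",
                spell=lambda w: ["tomorrow"],
                neighbours=lambda w: ["tmrw"],
                lookup={},
                score=lambda w, c: 1.0 if c == "tomorrow" else 0.1))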
To obtain a more detailed view of the effect of normalization on POS tagging, we investigate four experimental setups:
• normalizing only unknown words;
• considering all words: the model decides whether a word should be normalized or not;
• assuming gold error detection: the model knows which words should be normalized;
• gold normalization; we consider this a theoretical upper bound.
Traditionally, normalization is used to make the test data more similar to the train data. Since we train our tagger on the social media domain as well, the normalization of only the test data might actually result in more distance between the train and test data. Therefore, we also train the tagger on normalized training data, and on the union of the normalized and the original training data.
Results
The effects of the different normalization strategies on the DEV data are shown in Table 1. Throughout the paper we report average accuracies over 5 runs including standard deviation.
The first row shows the effect of normalization at test-time only. From these results we can conclude that normalizing all words is beneficial over normalizing only unknown words; this shows that normalization has a positive effect that goes beyond changing unknown words.
The results of using the gold normalization suggest that there is still more to gain by improving the normalization model. In contrast, the results for gold error detection (GOLDED) show that error detection is not the main reason for this difference, since the performance difference between ALL and GOLDED is relatively small compared to the gap with GOLD. Considering the normalization of the training data, we see that it has a negative effect. The table suggests that training on the raw (non-normalized) training data works best. Adding normalized data to raw data (UNION) does not yield any clear improvement over RAW only, but requires more training time. For the test data, normalization is instead always beneficial.
To sum up, normalization improved the base tagger by 1.9% absolute performance on the development data, reaching 84.06% accuracy. Overall, our state-of-the-art normalization model only reaches approximately 50% of the theoretical upper bound of using gold normalization. We next investigate whether using large amounts of unlabeled data can help us to obtain a similar effect.
Or Not to Normalize
An alternative option to normalization is to leave the text as is, and exploit very large amounts of raw data via semi-supervised learning. The rationale behind this is the following: provided the size of the data is sufficient, a model can be trained to naturally learn the POS tags of noisy data.
Effect of Word Embeddings
An easy and effective use of word embeddings in neural network approaches is to use them to initialize the word lookup parameters.
We train a skip-gram word embeddings model using word2vec (Mikolov et al., 2013) on 760M tweets (as described in Section 3.1). We also experiment with structured skip-grams (Ling et al., 2015), an adaptation of word2vec which takes word order into account. It has been shown to be beneficial for syntactically oriented tasks, like POS tagging. Therefore we want to evaluate structured skip-grams as well.
The normalization model uses word embeddings with a window size of 1; we compare this with the default window size of 5 for structured skip-grams.
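For illustration, training such embeddings with gensim might look as follows (a sketch; the experiments use the word2vec tool directly, the structured skip-gram variant requires the separate wang2vec implementation, and the parameter values mirror those reported for the normalization model's embeddings):

from gensim.models import Word2Vec

sentences = [["new", "pix", "comming", "tomoroe"]]  # tokenized tweets
model = Word2Vec(
    sentences,
    sg=1,             # skip-gram
    vector_size=400,  # '-size' in the word2vec tool
    window=1,
    negative=5,
    sample=1e-4,
    epochs=5,         # '-iter' in the word2vec tool
    min_count=1,
)
print(model.wv["pix"].shape)  # (400,)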
Results Table 2 shows the results of using the different skip-gram models for initialization of the word embeddings layer. Structured skip-grams perform slightly better, confirming earlier findings. Using a smaller window is more beneficial, probably because of the fragmented nature of Twitter data.
Structured skip-grams of window size 1 result in the best embedding model. This results in an improvement from 82.16% (Table 1) to 88.51% accuracy. This improvement is considerably larger than the one obtained by normalization (84.06%).
Effect of Self-training
We work with a rather small training set, which is all that is available to us in terms of gold data. This is due to the use of an idiosyncratic tagset (Gimpel et al., 2011). Adding more data could be beneficial to the system. To get an idea of how much effect extra annotated data could potentially have on POS tag accuracy, we plot the performance using smaller amounts of gold training data in Figure 3. We can see that there is still a slight upward trend; however, even when adding manually annotated data, the performance sometimes drops, especially after adding 55% of the training data.
To create more training data, we use an iterative indelible self-training setup (Abney, 2007) to obtain automatically labeled data. Specifically: 100 tweets are tagged, they get added to the training data, and after this a new model is trained (a minimal sketch of this loop is given after the list below). While we do not adopt any filtering strategy on the predictions (e.g., confidence thresholds), we do explore different strategies of a-priori data selection, from two corpora: raw tweets (Section 3.1), and the English Web Treebank (Petrov and McDonald, 2012).
For the English Web Treebank (EWT), we directly use raw text. Moreover, because the texts in the EWT are distributed by domains, i.e., answers, emails, weblogs, newsgroups, and reviews, we preserve this information and keep the data separate according to their domain to see whether adding data from the different domains can provide a more useful signal.
For the raw tweets, we compare different strategies of sampling. In addition to selecting random tweets, we experimented with selections aimed at providing the tagger with specific information that we knew was missing or confusing in the original training data. One strategy thus was to include tweets that contained words occurring in the development data but not in the training data. Note that this would result in a very slow tagger in a real-world situation, since the tagger needs to be retrained for every new unknown word. Another strategy was based on a preliminary analysis of errors on the development data: from the confusion matrix we observed that a frequently confounded tag was proper noun. Considering named entities as adequate proxies for proper nouns in this context, we also experimented with adding tweets that contained named entities. The detection of named entities was performed using a Twitter-specific named entity recognizer (Ritter et al., 2011). For control and comparison, we also collect additional training data where only tweets that do not contain named entities are selected. Hence, we end up with the following four sampling strategies:
• random sampling
• tweets containing words which occur in the development data, but not in the training data
• tweets containing named entities
• tweets not containing named entities
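The sketch announced above illustrates the indelible loop under stated assumptions: `train` is a hypothetical helper that fits a tagger on labeled data and returns it as a callable, and no confidence filtering is applied to the predictions, matching the setup described here.

```python
from typing import Callable, List, Tuple

Sentence = List[str]
Tagged = List[Tuple[str, str]]          # (token, POS) pairs

def indelible_self_training(
    gold: List[Tagged],
    pool: List[Sentence],               # sampled raw tweets / EWT text
    train: Callable[[List[Tagged]], Callable[[Sentence], Tagged]],
    batch_size: int = 100,
    rounds: int = 10,
) -> Callable[[Sentence], Tagged]:
    """Iterative indelible self-training: once added, automatically
    labeled sentences are never removed from the training set."""
    data = list(gold)
    tagger = train(data)
    for _ in range(rounds):
        batch, pool = pool[:batch_size], pool[batch_size:]
        if not batch:
            break
        data.extend(tagger(sent) for sent in batch)  # add predictions
        tagger = train(data)                         # retrain on the union
    return tagger
```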
Results Adding more automatically labeled data did not show any consistent improvement. This holds for both selection methods regarding named entities (presence/absence of named entities) and the different domains of the Web treebank. Therefore we do not elaborate further here. We hypothesize that post-selection based on, e.g., confidence sampling, is a more promising direction. We consider this future work.
Normalizing and Not Normalizing
In the previous sections, we explored ways to improve the POS tagging of tweets. The most promising directions were initializing the tagger with pre-trained embeddings and using normalization. Self-training was not effective. In this section, we report on additional experiments on the development data aimed at obtaining insights on the potential of combining these two strategies.

Table 3: Effect of different models on canonical/non-canonical words.

Table 3 shows the effect of the two approaches on the two subsets of tokens (canonical/non-canonical) on the DEV set. Word embeddings have a higher impact on standard, canonical tokens. It is interesting to note that word embeddings and normalization have a similar yet complementary effect on the words to be normalized (non-canonical): the combined model additionally improves on words which need normalization, whereas it scores almost 1% lower on canonical words. This suggests that both strategies have potential to complement each other.

Figure 4: Differences in numbers of errors on development data between the best normalization setting and the best word embeddings setting. Dark means normalization makes more errors.
Consequences of Normalization
Performance per POS
We compare the type of errors made by the best normalization setting versus the best word embeddings setting in a confusion matrix which displays the difference in errors in Figure 4. To recall: the best normalization setting was to use the raw training data, normalizing all words at test time; the best word embeddings model was a structured skip gram embeddings model with a window of 1.
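One way to produce the difference matrix behind Figure 4, assuming flat lists of gold tags and of the predictions from the two settings, is sketched below; the function names are illustrative.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def error_difference(gold, pred_norm, pred_emb, labels):
    """Per-cell difference in error counts between the normalization
    and embedding settings; positive cells mean normalization makes
    more errors there (the dark cells in Figure 4)."""
    cn = confusion_matrix(gold, pred_norm, labels=labels)
    ce = confusion_matrix(gold, pred_emb, labels=labels)
    np.fill_diagonal(cn, 0)   # keep only off-diagonal cells (errors)
    np.fill_diagonal(ce, 0)
    return cn - ce
```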
In the confusion graph it becomes clear that normalization results in over-predicting nouns (N), which often get confused with verbs (V), adjectives (A) and proper nouns (ˆ). Normalization is better at recognizing prepositions (P), which it confuses less with numerals ($) compared to the embedding model. This is due to normalizing '2' and '4'. Instead, the embedding model has better predictions for proper nouns, nouns and verbs, presumably due to the higher coverage.
Evaluation
In this section we report results on the test data, as introduced in Section 2.1.
Our main aim is to compare different approaches for successfully applying a generic state-of-the-art POS tagger to Twitter data. Therefore we have to assess the contribution of the two methods we explore (normalization and using embeddings) and see how they fare, not only against each other but also in comparison to a state-of-the-art Twitter tagger. We use the ARK tagger (Owoputi et al., 2013) and retrain it on our dataset for direct comparison with our models. The ARK system is a conditional random fields tagger, which exploits clusters, lexical features and gazetteers. Table 4 shows the performance of our best models and the ARK tagger on the test datasets.
Embeddings work considerably better than normalization, which confirms what we found on the DEV data. The combined approach yields the highest accuracy over all evaluation sets; however, it significantly differs from embeddings only on TEST L. This can be explained by our earlier observation (cf. Table 3), which shows that COMB yields the highest improvement on non-canonical tokens, but the same does not hold for canonical tokens. Notice that TEST L does indeed contain the highest proportion of non-canonical tokens.
Our best results on all datasets are comparable to the state-of-the-art results achieved by the ARK tagger. In Figure 5 we compare the errors made by our system (COMB in Table 4) and ARK on TEST O, which is the test set on which both taggers obtain the highest performance.
The ARK tagger has difficulties with prepositions (P), which are mistagged as numerals ($). These are almost all cases of '2' and '4', which represent Twitter slang for 'to' and 'for', respectively. Our system performs a lot better on these, due to the normalization model as already observed earlier. Still regarding prepositions, ARK is better at distinguishing them from adverbs (R), which is a common mistake for our system. Our tagger makes more mistakes on confusing proper nouns (ˆ) with nouns (N) in comparison to ARK.
Related Work
Theoretically, this work fits well within the debate on normalization vs. domain adaptation (Eisenstein, 2013). For a practical comparison, the work most related to ours is that of Li and Liu (2015). They propose a joint model for normalization and POS tagging. The candidate lists of six different normalization models, including spell checkers and machine translation systems, are combined with all their possible POS tags as found by the ARK Twitter POS tagger. Note that they use gold error detection, while we perform fully automatic normalization. These combined units of words and POS tags are then used in a joint Viterbi decoding (Viterbi, 1973). The optimal path in this decoding does not only contain a sequence of normalized tokens, but also a sequence of POS tags. This joint model proves to be beneficial for both tasks.
Work on normalization for improving POS tagging has also been done on other languages. For example, Ljubešić et al. (2017) show that performing normalization, in addition to using external resources, can remove half of the errors of a standard POS tagger for South Slavic languages. Quite surprisingly, of all the systems participating in shared tasks on POS tagging of Twitter data for both Italian (Bosco et al., 2016) and German (Beißwenger et al., 2016), none incorporated any normalization strategy before performing POS tagging.
Finally, normalization for POS tagging is certainly not limited to non-canonical data stemming from social media. Indeed, another stream of related work is focused on historical data, usually originating from the 15th till the 18th century. The motivation behind this is that in order to apply current language processing tools, the texts need to be normalized first, as spelling has changed through time. POS tagging of previously normalized historical data has been investigated for English (Yang and Eisenstein, 2016), German (Bollmann, 2013), and Dutch (Hupkes and Bod, 2016; Tjong Kim Sang, 2016). In the latter work, different methods of 'translating' historical Dutch texts to modern Dutch are explored, and a vocabulary lookup-based approach appears to work best. 8 In this paper we focused on normalization and POS tagging for Twitter data only.
Conclusion
We investigated the impact of normalization on POS tagging for the Twitter domain, presenting the first results on automatic normalization and comparing normalization to alternative strategies. We compared a generic tagger to a tagger specifically designed for Twitter data.
Regarding Q1, we can conclude that normalization does help. However, using large amounts of unlabeled data for embedding initialization yields an improvement that is twice as large as the one obtained using normalization (Q2).
Combining both methods (Q3) does indeed yield the highest scores on all datasets. This suggests that the two approaches are complementary, also because in isolation their most frequent errors differ. However, the contribution of normalization on top of embeddings alone is relatively small and only significant on one test set, which was specifically developed for normalization and contains the largest proportion of non-canonical tokens.
Overall, our best model is comparable to the ARK tagger. As a general direction, our results suggest that exploiting large amounts of unlabeled data of the target domain is preferable. However, if the data is expected to include a large proportion of non-canonical tokens, it is definitely worth applying normalization in combination with embeddings.
Our investigation was limited by the amount of available training data. Adding data via selftraining did not help. We observed mixed results for different types of a-priori filtering, but none of them yielded a steady improvement. A more promising direction might be post-selection, based on confidence scores or agreement among different taggers. Obviously another way to go is to add manually labeled data, some of which is available for more canonical domains. This would require a mapping of tagsets, and might be another good testbed to assess the contribution of normalization, which we leave for future work.
All code and distributable data used in this paper are available at https://github.com/bplank/wnut-2017-pos-norm.
Figure 2: Labeled data for POS and normalization.

Figure 3: Effect of increasing amounts of training data (100% training data == 1,576 tweets).

Figure 5: Comparison of errors per POS between our best model and the ARK tagger on TEST O; darker means our system performs better.
↓ Train → Test   RAW            UNK            ALL            GOLDED         GOLD
RAW              82.16 (±.33)   83.44 (±.25)   84.06 (±.32)   84.67 (±.23)   86.71 (±.25)
ALL              80.42 (±.71)   81.99 (±.64)   83.87 (±.28)   84.05 (±.31)   86.11 (±.14)
UNION            81.54 (±.27)   83.11 (±.31)   84.04 (±.34)   84.42 (±.24)   86.35 (±.17)

Table 1: Results of normalization (N) on DEV (macro average and stdev over 5 runs). RAW: no normalization, ALL: automatic normalization, UNK: normalize only unknown words, GOLDED: use gold error detection, GOLD: use gold normalization (Oracle). Row: whether training data is normalized. UNION stands for the training set formed by the union of both normalized and original raw data.
WINDOW SIZE     1             5
SKIPG.          88.14 (±.30)  87.56 (±.08)
STRUCT.SKIPG.   88.51 (±.24)  88.11 (±.49)

Table 2: Accuracy on raw DEV: various pretrained skip-gram embeddings for initialization.
                        DEV           TEST O        TEST L
% non-canonical tokens  11.75%        10.95%        12.09%
BILTY                   82.16 (±.33)  83.81 (±.23)  80.78 (±.32)
+NORM                   84.06 (±.32)  84.73 (±.19)  84.61 (±.21)
+EMBEDS                 88.51 (±.24)  90.02 (±.35)  88.53 (±.41)
+COMB                   88.89 (±.25)  90.25 (±.19)  89.63 (±.13)
ARK                     89.08         90.65         89.67

Table 4: Results on test data (average over 5 runs) compared to the ARK tagger (Owoputi et al., 2013). Bold: best result (in case of multiple: no stat. significant difference according to randomization test).
http://nlp.stanford.edu:8080/parser/index.jsp, accessed June 1, 2017.
Some tags are rare, like M and Y. In fact, M occurs only once in TEST L; Y never occurs in DEV and only once in TEST L and three times in TEST O. Therefore our confusion matrices (over DEV and TEST O, respectively) have different numbers of labels on the axes.
3 https://en.wikipedia.org/wiki/Most_common_words_in_English
Interestingly, this work also resulted in a shared task on normalization of historical Dutch, in which the secondary evaluation metric was POS tagging accuracy: https://ifarm.nl/clin2017st/.
Acknowledgments

We want to thank Héctor Martínez Alonso and Gertjan van Noord for valuable comments on earlier drafts of this paper. We are also grateful to the anonymous reviewers. This research has been supported by the Nuance Foundation and the University of Groningen High Performance Computing center.
References

Steven Abney. 2007. Semisupervised Learning for Computational Linguistics. CRC Press.

Timothy Baldwin, Marie-Catherine de Marneffe, Bo Han, Young-Bum Kim, Alan Ritter, and Wei Xu. 2015. Shared tasks of the 2015 workshop on noisy user-generated text: Twitter lexical normalization and named entity recognition. In Proceedings of the Workshop on Noisy User-generated Text, pages 126-135, Beijing, China. Association for Computational Linguistics.

Michael Beißwenger, Sabine Bartsch, Stefan Evert, and Kay-Michael Würzner. 2016. EmpiriST 2015: A shared task on the automatic linguistic annotation of computer-mediated communication and web corpora. In Proceedings of the 10th Web as Corpus Workshop (WAC-X) and the EmpiriST Shared Task, pages 44-56, Berlin, Germany.

Marcel Bollmann. 2013. POS tagging for historical texts with sparse training data. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, LAW-ID@ACL 2013, August 8-9, 2013, Sofia, Bulgaria, pages 11-18. The Association for Computer Linguistics.

Cristina Bosco, Fabio Tamburini, Andrea Bolioli, and Alessandro Mazzei. 2016. Overview of the EVALITA 2016 part of speech on Twitter for Italian task. In Proceedings of Third Italian Conference on Computational Linguistics (CLiC-it 2016) & Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian, Final Workshop (EVALITA 2016). Associazione Italiana di Linguistica Computazionale (AILC).

Leo Breiman. 2001. Random forests. Machine Learning, 45(1):5-32.

Jacob Eisenstein. 2013. What to do about bad language on the internet. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 359-369, Atlanta, Georgia. Association for Computational Linguistics.

Jennifer Foster, Özlem Cetinoglu, Joachim Wagner, Joseph Le Roux, Joakim Nivre, Deirdre Hogan, and Josef van Genabith. 2011. From news to comment: Resources and benchmarks for parsing the language of web 2.0. In Proceedings of the 5th International Joint Conference on Natural Language Processing (IJCNLP).

Kevin Gimpel, Nathan Schneider, Brendan O'Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 42-47, Portland, Oregon, USA. Association for Computational Linguistics.

Rob van der Goot and Gertjan van Noord. 2017. MoNoise: Modeling noise using a modular normalization system. Computational Linguistics in the Netherlands Journal, 7.

Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 368-378, Portland, Oregon, USA. Association for Computational Linguistics.

Dirk Hovy, Barbara Plank, and Anders Søgaard. 2014. When POS data sets don't add up: Combatting sample bias. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 4472-4475.

Dieuwke Hupkes and Rens Bod. 2016. POS-tagging of historical Dutch. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France. European Language Resources Association (ELRA).

Chen Li and Yang Liu. 2014. Improving text normalization via unsupervised model and discriminative reranking. In Proceedings of the ACL 2014 Student Research Workshop, pages 86-93, Baltimore, Maryland, USA. Association for Computational Linguistics.

Chen Li and Yang Liu. 2015. Joint POS tagging and text normalization for informal text. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 1263-1269.

Wang Ling, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015. Two/too simple adaptations of word2vec for syntax problems. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1299-1304, Denver, Colorado. Association for Computational Linguistics.

Nikola Ljubešić, Tomaž Erjavec, and Darja Fišer. 2017. Adapting a state-of-the-art tagger for South Slavic languages to non-standard text. In Proceedings of the 6th Workshop on Balto-Slavic Natural Language Processing, pages 60-68, Valencia, Spain. Association for Computational Linguistics.

David McClosky. 2010. Any Domain Parsing: Automatic Domain Adaptation for Natural Language Parsing. Ph.D. thesis, Brown University.

Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.

Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 380-390, Atlanta, Georgia. Association for Computational Linguistics.

Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Notes of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL), volume 59.

Barbara Plank. 2016. What to do about non-standard (or non-canonical) language in NLP. In KONVENS.

Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual part-of-speech tagging with bidirectional long short-term memory models and auxiliary loss. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 412-418, Berlin, Germany. Association for Computational Linguistics.

Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1524-1534, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Erik Tjong Kim Sang. 2016. Improving part-of-speech tagging of historical text by first translating to modern text. In 2nd IFIP International Workshop on Computational History and Data-Driven Humanities. Springer Verlag.

A. Viterbi. 1973. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Trans. Inform. Theory, 13(2):260-269.

Yi Yang and Jacob Eisenstein. 2016. Part-of-speech tagging for historical English. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1318-1328, San Diego, California. Association for Computational Linguistics.
| [] |
[
"Efficient Machine Translation Domain Adaptation",
"Efficient Machine Translation Domain Adaptation"
] | [
"Pedro Henrique Martins pedrohenriqueamartins@tecnico.ulisboa.pt \nInstituto de Telecomunicações DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)\nInstituto Superior Técnico Unbabel Lisbon\nPortugal\n",
"Zita Marinho zmarinho@google.com \nInstituto de Telecomunicações DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)\nInstituto Superior Técnico Unbabel Lisbon\nPortugal\n",
"André F T Martins andre.t.martins@tecnico.ulisboa.pt. \nInstituto de Telecomunicações DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)\nInstituto Superior Técnico Unbabel Lisbon\nPortugal\n"
] | [
"Instituto de Telecomunicações DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)\nInstituto Superior Técnico Unbabel Lisbon\nPortugal",
"Instituto de Telecomunicações DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)\nInstituto Superior Técnico Unbabel Lisbon\nPortugal",
"Instituto de Telecomunicações DeepMind Institute of Systems and Robotics LUMLIS (Lisbon ELLIS Unit)\nInstituto Superior Técnico Unbabel Lisbon\nPortugal"
] | [
"Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge"
] | Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore(Khandelwal et al., 2021). A drawback of these retrievalaugmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbor machine translation. We adapt the methods recently proposed byHe et al. (2021)for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. 1 | 10.18653/v1/2022.spanlp-1.3 | [
"https://www.aclanthology.org/2022.spanlp-1.3.pdf"
] | 247,701,377 | 2204.12608 | 0ac00d023a73db791974847b2b705af39bfa5e77 |
Efficient Machine Translation Domain Adaptation
May 27, 2022
Pedro Henrique Martins (pedrohenriqueamartins@tecnico.ulisboa.pt), Zita Marinho (zmarinho@google.com), André F. T. Martins (andre.t.martins@tecnico.ulisboa.pt)
Instituto de Telecomunicações, DeepMind, Institute of Systems and Robotics, LUMLIS (Lisbon ELLIS Unit), Instituto Superior Técnico, Unbabel, Lisbon, Portugal
Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge, May 27, 2022
Machine translation models struggle when translating out-of-domain text, which makes domain adaptation a topic of critical importance. However, most domain adaptation methods focus on fine-tuning or training the entire or part of the model on every new domain, which can be costly. On the other hand, semi-parametric models have been shown to successfully perform domain adaptation by retrieving examples from an in-domain datastore (Khandelwal et al., 2021). A drawback of these retrieval-augmented models, however, is that they tend to be substantially slower. In this paper, we explore several approaches to speed up nearest neighbor machine translation. We adapt the methods recently proposed by He et al. (2021) for language modeling, and introduce a simple but effective caching strategy that avoids performing retrieval when similar contexts have been seen before. Translation quality and runtimes for several domains show the effectiveness of the proposed solutions. 1
Introduction
Modern neural machine translation models are mostly parametric (Bahdanau et al., 2015;Vaswani et al., 2017), meaning that, for each input, the output depends only on a fixed number of model parameters, obtained using some training data, hopefully in the same domain. However, when running machine translation systems in the wild, it is often the case that the model is given input sentences or documents from domains that were not part of the training data, which frequently leads to subpar translations. One solution is training or fine-tuning the entire model or just part of it for each domain, but this can be expensive and may lead to catastrophic forgetting (Saunders, 2021).
Recently, an approach that has achieved promising results is augmenting parametric models with a retrieval component, leading to semi-parametric models (Gu et al., 2018;Zhang et al., 2018;Bapna and Firat, 2019;Khandelwal et al., 2021;Meng et al., 2021;Jiang et al., 2021). These models construct a datastore based on a set of source / target sentences or word-level contexts (translation memories) and retrieve similar examples from this datastore, using this information in the generation process. This allows having only one model that can be used for every domain. However, the model's runtime increases with the size of the domain's datastore and searching for related examples on large datastores can be computationally very expensive: for example, when retrieving 64 neighbors from the datastore, the model may become two orders of magnitude slower (Khandelwal et al., 2021). Due to this, some recent works have proposed methods that aim to make this process more efficient. Meng et al. (2021) proposed constructing a different datastore for each source sentence, by first searching for the neighbors of the source tokens; and He et al. (2021) proposed several techniques -datastore pruning, adaptive retrieval, dimension reduction -for nearest neighbor language modeling.
In this paper, we adapt several methods proposed by He et al. (2021) to machine translation, and we further propose a new approach that increases the model's efficiency: the use of a retrieval distributions cache. By caching the kNN probability distributions, together with the corresponding decoder representations, for the previous steps of the generation of the current translation(s), the model can quickly retrieve the retrieval distribution when the current representation is similar to a cached one, instead of having to search for neighbors in the datastore at every single step.
We perform a thorough analysis of the model's efficiency in a controlled setting, which shows that the combination of our proposed techniques results in a model, the efficient kNN-MT, which is approximately twice as fast as the vanilla kNN-MT. This comes without harming translation performance, which is, on average, more than 8 BLEU points and 5 COMET points better than the base MT model.
In sum, this paper presents the following contributions:
• We adapt the methods proposed by He et al. (2021) for efficient nearest neighbor language modeling to machine translation.
• We propose a caching strategy to store the retrieval probability distributions, improving the translation speed.
• We compare the efficiency and translation quality of the different methods, which show the benefits of the proposed and adapted techniques.
Background
When performing machine translation, the model is given a source sentence or document in one language, $x = [x_1, \ldots, x_L]$, and the goal is to output a translation of the sentence in the desired language, $y = [y_1, \ldots, y_N]$. This is usually done using a parametric sequence-to-sequence model (Bahdanau et al., 2015; Vaswani et al., 2017), in which the encoder receives the source sentence as input and outputs a set of hidden states. Then, at each step $t$, the decoder attends to these hidden states and outputs a probability distribution $p_{\mathrm{NMT}}(y_t \mid y_{<t}, x)$ over the vocabulary. Finally, these probability distributions are used to predict the output tokens, typically with beam search.
Nearest Neighbor Machine Translation
Khandelwal et al. (2021) introduced a nearest neighbor machine translation model, kNN-MT, which is a semi-parametric model. This means that besides having a parametric component that outputs a probability distribution over the vocabulary, $p_{\mathrm{NMT}}(y_t \mid y_{<t}, x)$, the model also has a nearest neighbor retrieval mechanism, which allows direct access to a datastore of examples. More specifically, we build a datastore $\mathcal{D}$ which consists of a key-value memory, where each entry key is the decoder's output representation, $f(x, y_{<t})$, and the value is the target token $y_t$:

$$\mathcal{D} = \{ (f(x, y_{<t}),\, y_t) \;\; \forall y_t \in y \mid (x, y) \in (\mathcal{X}, \mathcal{Y}) \}, \quad (1)$$
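As a minimal sketch of how such a datastore can be materialized, assuming the decoder states have already been dumped to a float32 array: the paper mentions using the FAISS library for search, but the exact-search index type below is an illustrative choice, not necessarily the configuration used by the authors.

```python
import faiss
import numpy as np

# Assumed inputs: `keys` is an (n, d) float32 array of decoder states
# f(x, y_<t) collected over the in-domain parallel data, and `values`
# is the (n,) int array of the corresponding target-token ids.
def build_datastore(keys: np.ndarray, values: np.ndarray):
    index = faiss.IndexFlatL2(keys.shape[1])  # exact L2 search (illustrative)
    index.add(keys)
    return index, values
```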
where $(\mathcal{X}, \mathcal{Y})$ corresponds to a set of parallel source and target sequences. Then, at inference time, the model searches the datastore to retrieve the set of $k$ nearest neighbors $\mathcal{N}$. Using their distances $d(\cdot)$ to the current decoder's output representation, we can compute the retrieval distribution $p_{\mathrm{kNN}}(y_t \mid y_{<t}, x)$ as:

$$p_{\mathrm{kNN}}(y_t \mid y_{<t}, x) = \frac{\sum_{(k_j, v_j) \in \mathcal{N}} \mathbb{1}_{y_t = v_j} \exp\!\left(-d(k_j, f(x, y_{<t}))/T\right)}{\sum_{(k_j, v_j) \in \mathcal{N}} \exp\!\left(-d(k_j, f(x, y_{<t}))/T\right)}, \quad (2)$$
where $T$ is the softmax temperature, $k_j$ denotes the key of the $j$-th neighbor and $v_j$ its value. Finally, $p_{\mathrm{NMT}}(y_t \mid y_{<t}, x)$ and $p_{\mathrm{kNN}}(y_t \mid y_{<t}, x)$ are combined to obtain the final distribution, which is used to generate the translation through beam search, by performing interpolation:

$$p(y_t \mid y_{<t}, x) = (1 - \lambda)\, p_{\mathrm{NMT}}(y_t \mid y_{<t}, x) + \lambda\, p_{\mathrm{kNN}}(y_t \mid y_{<t}, x), \quad (3)$$
where λ is a hyper-parameter that controls the weights given to the two distributions.
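A small sketch of Eqs. (2)-(3) on top of a FAISS index like the one above; the vocabulary size, default temperature, and the use of squared L2 distances (what `IndexFlatL2` returns) are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def knn_distribution(index, values, query, k=8, T=10.0, vocab_size=32000):
    """Eq. (2): softmax over negative neighbor distances, with the
    probability mass summed per retrieved target token."""
    dists, idx = index.search(query, k)        # query: (1, d) float32
    logits = -dists[0] / T                     # squared L2 from IndexFlatL2
    weights = np.exp(logits - logits.max())    # numerically stable softmax
    weights /= weights.sum()
    p_knn = np.zeros(vocab_size, dtype=np.float32)
    np.add.at(p_knn, values[idx[0]], weights)  # aggregate repeated tokens
    return p_knn

def interpolate(p_nmt, p_knn, lam=0.7):
    """Eq. (3): mix the parametric and retrieval distributions."""
    return (1.0 - lam) * p_nmt + lam * p_knn
```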
Efficient kNN-MT
In this section, we describe the approaches introduced by He et al. (2021) to speed up the inference time of nearest neighbor language modeling, such as pruning the datastore (§3.1) and reducing the representation dimension (§3.2), which we adapt to machine translation. We further describe a novel method that allows the model to have access to examples without having to search the datastore at every step, by maintaining a cache of the past retrieval distributions for the current translation(s) (§3.3).
Datastore Pruning
The goal of datastore pruning is to reduce the size of the datastore, so that the model is able to search for the nearest neighbors faster without severely compromising the translation performance. To do so, we follow He et al. (2021) and use greedy merging. In greedy merging, we aim to merge datastore entries that share the same value (target token) while their keys are close to each other in vector space. To do this, we first need to find the k nearest neighbors of every entry of the datastore, where k is a hyper-parameter. Then, if the set of neighbors retrieved for a given entry contains an entry which has not been merged before and has the same value, we merge the two entries by simply removing the neighboring one.
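A simplified sketch of this greedy merging pass is given below; it is an approximation of the described procedure, assumes a FAISS index built over the keys, and retrieves k+1 neighbors since an entry also retrieves itself.

```python
import numpy as np

def greedy_merge(index, keys, values, k=2):
    """Approximate greedy merging: for each unmerged entry, drop one
    unmerged neighbor that shares its target token (value)."""
    n = len(values)
    keep = np.ones(n, dtype=bool)
    merged = np.zeros(n, dtype=bool)
    _, nbrs = index.search(keys, k + 1)  # +1 because an entry retrieves itself
    for i in range(n):
        if merged[i]:
            continue
        for j in nbrs[i]:
            if j != i and not merged[j] and values[j] == values[i]:
                keep[j] = False          # remove the neighboring entry
                merged[i] = merged[j] = True
                break
    return keys[keep], values[keep]
```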
Dimension Reduction
The decoder's output representations, $f(x, y_{<t})$, are usually high-dimensional (1024, in our case). This leads to a high computational cost when computing vector distances, which are needed for retrieving neighbors from the datastore. To alleviate this, we follow He et al. (2021) and use principal component analysis (PCA), an efficient dimension reduction method, to reduce the dimension of the decoder's output representations to a pre-defined dimension, $d$, and generate a compressed datastore.
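A minimal sketch of the compression step, here using scikit-learn's PCA rather than whatever implementation the authors used; note that at inference time incoming decoder states must be projected with the same fitted matrix before searching the compressed datastore.

```python
from sklearn.decomposition import PCA

def compress_datastore(keys, d=256):
    """Fit PCA on the stored keys and project them down to d dims;
    queries must later go through pca.transform as well."""
    pca = PCA(n_components=d).fit(keys)
    return pca, pca.transform(keys).astype("float32")
```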
Cache
The model does not need to search the datastore at every step of the translation generation in order to do it correctly. Here, we aim to predict when it needs to retrieve neighbors from the datastore, so that, by only searching the datastore in the necessary steps, we can increase the generation speed.
Adaptive retrieval. To do so, we first follow He et al. (2021) and use a simple MLP to predict the value of the interpolation coefficient λ at each step. Then, we define a threshold, α, so that the model only performs retrieval when λ > α. However, we observed that this leads to results (§A.3) similar to randomly selecting when to search the datastore. We posit that this occurs because it is difficult to predict when the model should perform retrieval for domain adaptation (He et al., 2021), and because in machine translation error propagation occurs more prominently than in language modeling.
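Schematically, the thresholding looks as follows; the MLP's inputs and architecture are not specified here, so `predict_lambda` and `retrieve` should be treated as hypothetical helpers standing in for the learned scorer and the datastore search.

```python
def maybe_retrieve(state, predict_lambda, retrieve, alpha=0.5):
    """Adaptive retrieval: query the datastore only when the predicted
    interpolation weight exceeds the threshold alpha."""
    lam = predict_lambda(state)
    p_knn = retrieve(state) if lam > alpha else None
    return lam, p_knn
```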
Cache. Because it is common to have similar contexts along the generation process when using beam search, the model can often end up retrieving similar neighbors at different steps, which is not efficient. To avoid repeating searches on the datastore for similar context vectors, $f(x, y_{<t})$, we propose keeping a cache of the previous retrieval distributions of the current translation(s). More specifically, at each step of the generation of $y$, we add the decoder's representation vector along with the retrieval distribution $p_{\mathrm{kNN}}(y_t \mid y_{<t}, x)$, corresponding to all beams, $\mathcal{B}$, to the cache $\mathcal{C}$:

$$\mathcal{C} = \{ (f(x, y_{<t}),\, p_{\mathrm{kNN}}(y_t \mid y_{<t}, x)) \;\; \forall y_t \in y \mid y \in \mathcal{B} \}.$$

Then, at each step of the generation, we compute the Euclidean distance between the current decoder's representation and the keys in the cache. If all distances are larger than a threshold τ, the model searches the datastore to find the nearest neighbors. Otherwise, the model retrieves from the cache the retrieval distribution that corresponds to the closest key.
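The following sketch illustrates the cache logic under stated assumptions: a single flat cache per translation and a brute-force distance computation over the cached keys; an actual implementation would presumably batch this over beams.

```python
import numpy as np

class RetrievalCache:
    """Per-translation cache of (decoder state, p_kNN) pairs; reuses a
    stored distribution when the closest cached state is within tau."""

    def __init__(self, tau=6.0):
        self.tau = tau
        self.keys, self.dists = [], []

    def lookup(self, query):
        if not self.keys:
            return None                       # cache miss: search datastore
        K = np.stack(self.keys)
        d = np.sqrt(((K - query) ** 2).sum(axis=1))  # Euclidean distances
        j = int(d.argmin())
        return self.dists[j] if d[j] <= self.tau else None

    def add(self, key, p_knn):
        self.keys.append(key)
        self.dists.append(p_knn)
```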
Experiments
Dataset and metrics. We perform experiments on the Medical, Law, IT, and Koran domain data of the multi-domains dataset (Koehn and Knowles, 2017), re-split by Aharoni and Goldberg (2020).
To build the datastores we use the in-domain training sets which have from 17,982 to 467,309 sentences. The validation and test sets have 2,000 sentences.
To evaluate the models we use BLEU (Papineni et al., 2002;Post, 2018) and COMET (Rei et al., 2020).
Settings. We use the WMT'19 German-English news translation task winner (with 269M parameters), available in the Fairseq library (Ott et al., 2019), as the base MT model.
As baselines, we consider the base MT model, the vanilla kNN-MT model (Khandelwal et al., 2021), and the Fast kNN-MT model (Meng et al., 2021). For all models, which perform retrieval, we select the hyper-parameters, for each method and each domain, by performing grid search on k ∈ {8, 16, 32, 64} and λ ∈ {0.5, 0.6, 0.7, 0.8}. The selected values are stated in Table 9 of App. B.
For the vanilla kNN-MT model and the efficient kNN-MT we follow Khandelwal et al. (2021) and use the Euclidean distance to perform retrieval and the proposed softmax temperature. For the Fast kNN-MT, we use the cosine distance and the softmax temperature proposed by Meng et al. (2021). For the efficient kNN-MT we selected parameters that ensure a good speed/quality trade-off: k = 2 for datastore pruning, d = 256 for PCA, and τ = 6 as the cache threshold. Results for each methods using different parameters are reported in App. A.
Results
The translation scores are reported in Table 1. We can clearly see that both Fast kNN-MT and the efficient kNN-MT (combining the different methods) do not hurt the translation performance substantially, still leading to, on average, 8 BLEU points and 5 COMET points more than the base MT model.

Figure 1: Plots of the generation speed (tokens/s) for the different models on the medical, law, IT, and Koran domains, for different batch sizes (1, 8, 16). The generation speed (y-axis) is in log scale. When using the Fast kNN-MT model, the maximum batch size that we are able to use is 2, due to out of memory errors.
Generation speed
Computational infrastructure. All experiments were performed on a server with 3 RTX 2080 Ti GPUs (11 GB), 12 AMD Ryzen 2920X CPUs (24 cores), and 128 GB of RAM. For the generation speed measurements, we ran each model on a single GPU while no other process was running on the server, to have a controlled environment. To search the datastore, we used the FAISS library (Johnson et al., 2019). When using the vanilla kNN-MT and efficient kNN-MT, the nearest neighbor search is performed on the CPUs, since not all datastores fit into memory, while when using the Fast kNN-MT this is done on the GPU.
Analysis. As can be seen in the plots of Figure 1, for a batch size of 1, Fast kNN-MT leads to a higher generation speed than our proposed method and vanilla kNN-MT. However, because of its high memory requirements, we are not able to run Fast kNN-MT for batch sizes larger than 2 on the computational infrastructure stated above. On the contrary, when using the proposed methods (efficient kNN-MT) we are able to run the model with higher batch sizes, achieving generation speeds superior to Fast kNN-MT and vanilla kNN-MT, and reducing the gap to the base MT model.

Ablation. We plot the generation speed for different combinations of the proposed methods (averaged across domains), for several batch sizes, in Figure 2. In this plot, we can clearly see that every method contributes to the speed-up achieved by the model that combines all approaches. Moreover, we can observe that the method which leads to the largest speed-up is the use of a cache of retrieval distributions, which saves, on average, 57% of the retrieval searches.
Conclusion
In this paper we propose the efficient kNN-MT, in which we combine several methods to improve the kNN-MT generation speed. First, we adapted to machine translation several methods that improve retrieval efficiency in language modeling (He et al., 2021). Then we proposed a new method which consists of keeping the previous retrieval distributions in a cache, so that the model does not need to search for neighbors in the datastore at every step. Through experiments on domain adaptation, we show that the combination of the proposed methods leads to a considerable speed-up (up to 2x) without harming the translation performance substantially.
A Additional results
In this section we report the BLEU scores as well as additional statistics for the different methods, when varying their hyper-parameters.
A.1 Datastore pruning
We report on Table 2 the BLEU scores for datastore pruning, when varying the number of neighbors used for greedy merging, k. The resulting datastore sizes are presented on Table 3.
A.2 Dimension reduction
We report on Table 4 the BLEU scores for dimension reduction, when varying the output dimension d.
A.3 Adaptive retrieval
We report on Table 5 the BLEU scores for adaptive retrieval, when varying the threshold α. The percentage of times the model performs retrieval is stated on Table 6.
A.4 Cache
We report on Table 7 the BLEU scores for a model using a cache of the retrieval distributions, when varying the threshold τ. The percentage of times the model performs retrieval is stated on Table 8.
B Hyper-parameters
On Table 9 we report the values for the hyper-parameters: number of neighbors to be retrieved k ∈ {8, 16, 32, 64}, the interpolation coefficient λ ∈ {0.5, 0.6, 0.7, 0.8}, and retrieval softmax temperature T. For decoding we use beam search with a beam size of 5.
Figure 2: Plot of the generation speed (tokens/s), averaged across domains, for different combinations of the proposed methods.
                        BLEU                                        COMET
                        Medical  Law    IT     Koran  Average      Medical  Law    IT     Koran   Average
Baselines
  Base MT               40.01    45.64  37.91  16.35  34.98        .4702    .5770  .3942  -.0097  .3579
  kNN-MT                54.47    61.23  45.96  21.02  45.67        .5760    .6781  .5163   .0480  .4546
  Fast kNN-MT           52.90    55.71  44.73  21.29  43.66        .5293    .5944  .5445  -.0455  .4057
Efficient kNN-MT
  cache                 53.30    59.12  45.39  20.67  44.62        .5625    .6403  .5085   .0346  .4365
  PCA + cache           53.58    58.57  46.29  20.67  44.78        .5457    .6379  .5311  -.0021  .4282
  PCA + pruning         53.23    60.38  45.16  20.52  44.82        .5658    .6639  .4981   .0298  .4394
  PCA + cache + pruning 51.90    57.82  44.44  20.11  43.57        .5513    .6260  .4909  -.0052  .4158

Table 1: BLEU and COMET scores on the multi-domains test set, for a batch size of 8.
         Medical  Law    IT     Koran  Average
kNN-MT   54.47    61.23  45.96  21.02  45.67
k = 1    53.60    60.23  45.03  20.81  44.92
k = 2    52.95    59.40  44.76  20.12  44.31
k = 5    51.63    57.55  44.07  19.29  43.14

Table 2: BLEU scores on the multi-domains test set when performing datastore pruning with several values of k, for a batch size of 8.
         Medical    Law         IT         Koran
kNN-MT   6,903,141  19,061,382  3,602,862  524,374
k = 1    4,780,514  13,130,326  2,641,709  400,385
k = 2    4,039,432  11,103,775  2,303,808  353,007
k = 5    3,084,106   8,486,551  1,852,191  290,192

Table 3: Sizes of the in-domain datastores when performing datastore pruning with several values of k, for a batch size of 8.
Table 4: BLEU scores on the multi-domains test set when performing PCA with different dimension, d, values, for a batch size of 8.
Table 5: BLEU scores on the multi-domains test set when performing adaptive retrieval for different values of the threshold α, for a batch size of 8.

          Medical  Law   IT    Koran
kNN-MT    100%     100%  100%  100%
α = 0.25  78%      73%   38%   4%
α = 0.5   96%      96%   60%   61%
α = 0.75  98%      99%   92%   91%

Table 6: Percentage of times the model searches for neighbors on the datastore when performing adaptive retrieval for different values of the threshold α, for a batch size of 8.
         Medical  Law    IT     Koran  Average
kNN-MT   54.47    61.23  45.96  21.02  45.67
τ = 2    54.47    61.23  45.93  20.98  45.65
τ = 4    54.17    61.10  46.07  21.00  45.58
τ = 6    53.30    59.12  45.39  20.67  44.62
τ = 8    30.06    23.01  25.53  16.08  23.67

Table 7: BLEU scores on the multi-domains test set when using a retrieval distributions' cache for different values of the threshold τ, for a batch size of 8.

         Medical  Law   IT    Koran
kNN-MT   100%     100%  100%  100%
τ = 2    59%      51%   67%   64%
τ = 4    50%      42%   57%   53%
τ = 6    43%      35%   49%   45%
τ = 8    26%      16%   29%   31%

Table 8: Percentage of times the model searches for neighbors on the datastore when using a retrieval distributions' cache for different values of the threshold τ, for a batch size of 8.
                        Medical          Law              IT              Koran
                        k   λ    T       k   λ    T       k  λ    T      k   λ    T
kNN-MT                  8   0.7  10      8   0.8  10      8  0.7  10     8   0.6  100
Fast kNN-MT             16  0.7  .015    32  0.6  .015    8  0.6  .02    16  0.6  .05
cache                   8   0.7  10      8   0.8  10      8  0.7  10     8   0.6  100
PCA + cache             8   0.8  10      8   0.8  10      8  0.7  10     8   0.7  100
PCA + pruning           8   0.7  10      8   0.8  10      8  0.7  10     8   0.7  100
PCA + cache + pruning   8   0.7  10      8   0.8  10      8  0.7  10     8   0.7  100

Table 9: Values of the hyper-parameters: number of neighbors to be retrieved k, interpolation coefficient λ, and retrieval softmax temperature T.
The code is available at https://github.com/deep-spin/efficient_kNN_MT.
Acknowledgments

This work was supported by the European Research Council (ERC StG DeepSPIN 758969), by the P2020 project MAIA (contract 045909), by the Fundação para a Ciência e Tecnologia through project PTDC/CCI-INF/4703/2021 (PRELUNA, contract UIDB/50008/2020), and by contract PD/BD/150633/2020 in the scope of the Doctoral Program FCT - PD/00140/2013 NETSyS. We thank Junxian He, Graham Neubig, the SARDINE team members, and the reviewers for helpful discussion and feedback.
References

Roee Aharoni and Yoav Goldberg. 2020. Unsupervised domain clusters in pretrained language models. In Proc. ACL.

Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. ICLR.

Ankur Bapna and Orhan Firat. 2019. Non-parametric adaptation for neural machine translation. In Proc. NAACL.

Jiatao Gu, Yong Wang, Kyunghyun Cho, and Victor O. K. Li. 2018. Search engine guided neural machine translation. In Proc. AAAI.

Junxian He, Graham Neubig, and Taylor Berg-Kirkpatrick. 2021. Efficient nearest neighbor language models. In Proc. EMNLP.

Qingnan Jiang, Mingxuan Wang, Jun Cao, Shanbo Cheng, Shujian Huang, and Lei Li. 2021. Learning kernel-smoothed machine translation with retrieved examples. In Proc. EMNLP.

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data.

Urvashi Khandelwal, Angela Fan, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2021. Nearest neighbor machine translation. In Proc. ICLR.

Philipp Koehn and Rebecca Knowles. 2017. Six challenges for neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation.

Yuxian Meng, Xiaoya Li, Xiayu Zheng, Fei Wu, Xiaofei Sun, Tianwei Zhang, and Jiwei Li. 2021. Fast nearest neighbor machine translation.

Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proc. of the Fourth Conference on Machine Translation.

Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proc. NAACL (Demonstrations).

Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. ACL.

Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proc. Third Conference on Machine Translation.

Ricardo Rei, Craig Stewart, Ana C Farinha, and Alon Lavie. 2020. COMET: A neural framework for MT evaluation. In Proc. EMNLP.

Danielle Saunders. 2021. Domain adaptation and multi-domain adaptation for neural machine translation: A survey.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proc. NeurIPS.

Jingyi Zhang, Masao Utiyama, Eiichiro Sumita, Graham Neubig, and Satoshi Nakamura. 2018. Guiding neural machine translation with retrieved translation pieces. In Proc. NAACL.

Xin Zheng, Zhirui Zhang, Junliang Guo, Shujian Huang, Boxing Chen, Weihua Luo, and Jiajun Chen. 2021. Adaptive nearest neighbor machine translation.
| [] |
[
"LEARNING TO SCREEN FOR FAST SOFTMAX INFER- ENCE ON LARGE VOCABULARY NEURAL NETWORKS",
"LEARNING TO SCREEN FOR FAST SOFTMAX INFER- ENCE ON LARGE VOCABULARY NEURAL NETWORKS",
"LEARNING TO SCREEN FOR FAST SOFTMAX INFER- ENCE ON LARGE VOCABULARY NEURAL NETWORKS",
"LEARNING TO SCREEN FOR FAST SOFTMAX INFER- ENCE ON LARGE VOCABULARY NEURAL NETWORKS"
] | [
"Pei-Hung ",
"Patrick ) Chen patrickchen@g.ucla.edu \nDepartment of Computer Science\nUniversity of California\nLos Angeles\n",
"Si Si \nGoogle Research\n\n",
"Sanjiv Kumar sanjivk@google.com \nGoogle Research\n\n",
"Yang Li liyang@google.com \nGoogle Research\n\n",
"Cho-Jui Hsieh chohsieh@cs.ucla.edu ",
"Pei-Hung ",
"Patrick ) Chen patrickchen@g.ucla.edu \nDepartment of Computer Science\nUniversity of California\nLos Angeles\n",
"Si Si \nGoogle Research\n\n",
"Sanjiv Kumar sanjivk@google.com \nGoogle Research\n\n",
"Yang Li liyang@google.com \nGoogle Research\n\n",
"Cho-Jui Hsieh chohsieh@cs.ucla.edu "
] | [
"Department of Computer Science\nUniversity of California\nLos Angeles",
"Google Research\n",
"Google Research\n",
"Google Research\n",
"Department of Computer Science\nUniversity of California\nLos Angeles",
"Google Research\n",
"Google Research\n",
"Google Research\n"
] | [] | Neural language models have been widely used in various NLP tasks, including machine translation, next word prediction and conversational agents. However, it is challenging to deploy these models on mobile devices due to their slow prediction speed, where the bottleneck is to compute top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm by exploiting the clustering structure of context vectors. Our algorithm uses a light-weight screening model to predict a much smaller set of candidate words based on the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging as traditional clustering methods are discrete and non-differentiable, and thus unable to be used with back-propagation in the training process. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit data distribution. The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting top-k words in various tasks such as beam search in machine translation or next words prediction. For example, for machine translation task on German to English dataset with around 25K vocabulary, we can achieve 20.4 times speed up with 98.9% precision@1 and 99.3% precision@5 with the original softmax layer prediction, while state-of-the-art (Zhang et al., 2018) only achieves 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 for the same task. | null | [
"https://arxiv.org/pdf/1810.12406v1.pdf"
] | 53,113,692 | 1810.12406 | e3392bdb1c03ab359f6acd452b3610999c02131b |
LEARNING TO SCREEN FOR FAST SOFTMAX INFERENCE ON LARGE VOCABULARY NEURAL NETWORKS
Pei-Hung (Patrick) Chen patrickchen@g.ucla.edu
Department of Computer Science
University of California
Los Angeles
Si Si
Google Research
Sanjiv Kumar sanjivk@google.com
Google Research
Yang Li liyang@google.com
Google Research
Cho-Jui Hsieh chohsieh@cs.ucla.edu
Neural language models have been widely used in various NLP tasks, including machine translation, next word prediction and conversational agents. However, it is challenging to deploy these models on mobile devices due to their slow prediction speed, where the bottleneck is to compute top candidates in the softmax layer. In this paper, we introduce a novel softmax layer approximation algorithm by exploiting the clustering structure of context vectors. Our algorithm uses a light-weight screening model to predict a much smaller set of candidate words based on the given context, and then conducts an exact softmax only within that subset. Training such a procedure end-to-end is challenging as traditional clustering methods are discrete and non-differentiable, and thus unable to be used with back-propagation in the training process. Using the Gumbel softmax, we are able to train the screening model end-to-end on the training set to exploit data distribution. The algorithm achieves an order of magnitude faster inference than the original softmax layer for predicting top-k words in various tasks such as beam search in machine translation or next words prediction. For example, for the machine translation task on a German to English dataset with around 25K vocabulary, we can achieve a 20.4 times speedup with 98.9% precision@1 and 99.3% precision@5 with respect to the original softmax layer prediction, while the state-of-the-art (Zhang et al., 2018) only achieves a 6.7x speedup with 98.7% precision@1 and 98.1% precision@5 for the same task.
INTRODUCTION
Neural networks have been widely used in many natural language processing (NLP) tasks, including neural machine translation (Sutskever et al., 2014), text summarization (Rush et al., 2015) and dialogue systems (Li et al., 2016). In these applications, a neural network (e.g. an LSTM) summarizes the current state by a context vector, and a softmax layer is used to predict the next output word based on this context vector. The softmax layer first computes the "logit" of each word in the vocabulary, defined by the inner product of the context vector and a weight vector, and then a softmax function is used to transform logits into probabilities. For most applications, only the top-k candidates are needed, for example in neural machine translation where k corresponds to the search beam size. In this procedure, the computational complexity of the softmax layer is linear in the vocabulary size, which can easily go beyond 10K. Therefore, the softmax layer has become the computational bottleneck in many NLP applications at inference time.
Our goal is to speed up the prediction time of softmax layer. In fact, computing top-k predictions in softmax layer is equivalent to the classical Maximum Inner Product Search (MIPS) problem-given a query, finding k vectors in a database that have the largest inner product values with the query. In neural language model prediction, context vectors are equivalent to queries, and weight vectors are equivalent to the database. MIPS is an important operation in the prediction phase of many machine learning models, and many algorithms have been developed (Bachrach et al., 2014;Shrivastava & Li, 2014;Neyshabur & Srebro, 2015;Yu et al., 2017;Guo et al., 2016). Surprisingly, when we apply recent MIPS algorithms to LSTM language model prediction, there's not much speedup if we need to achieve > 98% precision (see experimental section for more details). This motivates our work to develop a new algorithm for fast neural language model prediction.
In natural language, some combinations of words appear very frequently, and when a specific combination appears, it is almost certain that the prediction lies within a small subset of the vocabulary. This observation leads to the following question: Can we learn a faster "screening" model that identifies a smaller subset of potential predictions based on a query vector? In order to achieve this goal, we need to design a learning algorithm that exploits the distribution of context vectors (queries). This is quite unique compared with previous MIPS algorithms, where most of them only exploit the structure of the database (e.g., KD-tree, PCA-tree, or small world graph) instead of utilizing the query distribution.
We propose a novel algorithm (L2S: learning to screen) to exploit the distribution of both context embeddings (queries) and word embeddings (database) to speed up the inference time of the softmax layer. To narrow down the search space, we first develop a light-weight screening model to predict the subset of words that are more likely to belong to the top-k candidates, and then conduct an exact softmax only within that subset. The algorithm is illustrated in Figure 1. Our contributions are fourfold:
• We propose a screening model to exploit the clustering structure of context features. All previous neural language models only consider partitioning the embedding matrix to exploit the clustering structure of the word embeddings to achieve prediction speedup.
• To make a prediction for a context embedding, after obtaining the cluster assignment from the screening model, L2S only needs to evaluate a small subset of the vocabulary in that cluster. Therefore, L2S can significantly reduce the inference time complexity from $O(Ld)$ to $O((r + \bar{L})d)$ with $\bar{L} \ll L$ and $r \ll L$, where $d$ is the context vector's dimension, $L$ is the vocabulary size, $r$ is the number of clusters, and $\bar{L}$ is the average word/candidate set size inside the clusters.
• We propose to form a joint optimization problem to learn both the screening model for clustering and the candidate label set inside each cluster simultaneously. Using the Gumbel trick (Jang et al., 2017), we are able to train the screening network end-to-end on the training data.
• We show in our experiments that L2S can identify the top-k prediction words in the vocabulary an order of magnitude faster than the original softmax layer inference for machine translation and next-word prediction tasks.
RELATED WORK
We summarize previous works on speeding up the softmax layer computation.
Algorithms for speeding up softmax in the training phase. Many approaches have been proposed for speeding up softmax training. Jean et al. (2014) and Mnih & Teh (2012) proposed importance sampling techniques to select only a small subset as "hard negative samples" to conduct the updates. The hierarchical softmax-based methods (Morin & Bengio, 2005; Grave et al., 2017) use a tree structure for decomposing the conditional probabilities, constructed based on an external word semantic hierarchy or on word frequency. Most hierarchical softmax methods cannot be used to speed up inference time since they only provide a faster way to compute the probability of a target word, but not to choose the top-k predictions, as they still need to compute the logits for all words at inference. One exception is the recent work by Grave et al. (2017), which constructs the tree by putting frequent words in the first layer, so in the prediction phase, if the top-k words are found in the first layer, the model does not need to go down the tree. We provide a comparison with this approach in our experiments.
Algorithms for Maximum Inner Product Search (MIPS). Given a query vector and a database with n candidate vectors, MIPS aims to identify a subset of vectors in the database that have top-k inner product values with the query. Top-k softmax can be naturally approximated by conducting MIPS. Here we summarize existing MIPS algorithms:
• Hashing: (Shrivastava & Li, 2014;Neyshabur & Srebro, 2015) proposed to reduce MIPS to nearest neighbor search (NNS) and then solve NNS by Locality Sensitive Hashing (LSH) (Indyk & Motwani, 1998). • Database partitioning: PCA tree (Sproull, 1991) partitions the space according to the directions of principal components and shows better performance in practice. Bachrach et al. (2014) shows tree-based approaches can be used for solving MIPS but the performance is poor for high dimensional data. an NNS algorithm based on small world graph. The main idea is to form a graph with candidate vectors as nodes and edges are formed between nearby candidate vectors. The query stage can then done by navigating in this graph. Zhang et al. (2018) applies the MIPS-to-NNS reduction and shows graph-based approach performs well on neural language model prediction. • Direct solvers for MIPS: Some algorithms are proposed to directly tackle MIPS problem instead of transforming to NNS. Guo et al. (2016); Wu et al. (2017) use quantization-based approach to approximate candidate set. Another Greedy MIPS algorithm is recently proposed in (Yu et al., 2017), showing significant improvement over LSH and tree-based approaches.
Algorithms for speeding up softmax at inference time. MIPS algorithms can be used to speed up the prediction phase of the softmax layer, since we can view context vectors as query vectors and weight vectors as the database. In the experiments, we also include comparisons with a hashing-based approach (LSH) (Indyk & Motwani, 1998), a partition-based approach (PCA-tree (Sproull, 1991)) and a greedy approach (Yu et al., 2017). The results show that they perform worse than the graph-based approach (Zhang et al., 2018) and are not efficient if we want to maintain high precision.
For NLP tasks, there are two previous attempts to speed up softmax layer prediction time. Shim et al. (2017) proposed to approximate the weight matrix in the softmax layer with a singular value decomposition, find a smaller candidate set based on the approximate logits, and then do a fine-grained search within that subset. Zhang et al. (2018) transformed MIPS to NNS and applied a graph-based NNS algorithm to speed up softmax. In the experiments, we show that our algorithm is faster and more accurate than all these previous algorithms. Although they also have a screening component to select an important subset, our algorithm learns the screening component from training data in an end-to-end manner to achieve better performance.
ALGORITHM
The softmax layer is the main bottleneck when making predictions in neural language models. We assume $L$ is the number of output tokens, $W \in \mathbb{R}^{d \times L}$ is the weight matrix of the softmax layer, and $b \in \mathbb{R}^{L}$ is the bias vector. For a given context vector $h \in \mathbb{R}^{d}$ (such as the output of an LSTM), the softmax layer first computes the logits

$$x_s = w_s^T h + b_s \quad \text{for } s = 1, \cdots, L, \tag{1}$$

where $w_s$ is the $s$-th column of $W$ and $b_s$ is the $s$-th entry of $b$, and then transforms the logits into probabilities

$$p_s = \frac{e^{x_s}}{\sum_{l=1}^{L} e^{x_l}} \quad \text{for } s = 1, \cdots, L.$$

Finally it outputs the top-k candidate set by sorting the probabilities $[p_1, \cdots, p_L]$, and uses this information to perform beam search in translation or predict the next word in a language model.
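For concreteness, the following is a minimal numpy sketch of this exact top-k softmax baseline; the function name and the use of `argpartition` are illustrative choices, not from the paper.

```python
import numpy as np

def exact_topk_softmax(W, b, h, k=5):
    """Exact baseline: score all L words, then return top-k ids and probs."""
    logits = W.T @ h + b                     # Eq (1): x_s = w_s^T h + b_s, O(Ld)
    topk = np.argpartition(-logits, k)[:k]   # unsorted ids of the k largest logits
    topk = topk[np.argsort(-logits[topk])]   # sort those k ids by logit
    probs = np.exp(logits - logits.max())    # softmax with the usual max-shift
    probs /= probs.sum()
    return topk, probs[topk]
```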
To speed up the computation of the top-k candidates, all previous algorithms try to exploit the structure of the $\{w_s\}_{s=1}^{L}$ vectors, such as low rank, tree partitioning, or small world graphs (Zhang et al., 2018; Shim et al., 2017; Grave et al., 2017). However, in NLP applications there exists a strong structure in the context vectors $\{h\}$ that has not been exploited in previous work. In natural language, some combinations of words appear very frequently, and when a specific combination appears, the next word should only be within a small subset of the vocabulary. Intuitively, if two context vectors $h_i$ and $h_j$ are similar, meaning similar context, then their candidate label sets $C(h_i)$ and $C(h_j)$ can be shared. In other words, suppose we already know the candidate set $C(h_i)$ for $h_i$; to find the candidate set for $h_j$, instead of computing the logits for all $L$ tokens in the vocabulary, we can narrow the candidate set down to $C(h_i)$ and only compute the logits in $C(h_i)$ to find the top-k prediction for $h_j$.
The Prediction Process. Suppose the context vectors are partitioned into $r$ disjoint clusters with similar ones grouped in the same partition/cluster. If a vector $h$ falls into one of the clusters, we narrow down to that cluster's label set and only compute the logits of that label set. This screening model is parameterized by clustering weights $v_1, \ldots, v_r \in \mathbb{R}^{d}$ and a label candidate set for each cluster, $c_1, \ldots, c_r \in \{0, 1\}^{L}$. To predict for a hidden state $h$, our algorithm first computes the cluster indicator

$$z(h) = \arg\max_t \; v_t^T h, \tag{2}$$

and then narrows down the search space to $C(h) := \{s \mid c_{z(h),s} = 1\}$. The exact softmax is then computed within the subset $C(h)$ to find the top-k predictions (used in the language model) or to compute the probabilities used for beam search in neural machine translation. As we can see, the prediction time consists of two steps. The first step performs $r$ inner product operations to find the cluster, which takes $O(rd)$ time. The second step computes the softmax over a subset, which takes $O(\bar{L}d)$ time, where $\bar{L}$ ($\bar{L} \ll L$) is the average number of labels in the subsets. Overall, the prediction time for a context embedding $h$ is $O((r + \bar{L})d)$, which is much smaller than the $O(Ld)$ complexity of the vanilla softmax layer. Figure 1 illustrates the overall prediction process.
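A sketch of this two-step prediction, under the assumption that `V` stacks the learned cluster weights $v_1, \ldots, v_r$ as rows and `cand[t]` is an integer array of the word ids with $c_{t,s} = 1$ (all names are illustrative):

```python
import numpy as np

def l2s_predict(V, cand, W, b, h, k=5):
    t = int(np.argmax(V @ h))              # Eq (2): pick the cluster, O(rd)
    ids = np.asarray(cand[t])              # narrowed search space C(h)
    logits = W[:, ids].T @ h + b[ids]      # exact logits only on C(h), O(L̄d)
    return ids[np.argsort(-logits)[:k]]    # top-k word ids within C(h)
```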
However, how do we learn the clustering parameters $\{v_t\}_{t=1}^{r}$ and the candidate sets $\{c_t\}_{t=1}^{r}$? We found that running spherical kmeans on all the context vectors in the training set leads to reasonable results (as shown in the appendix), but can we learn even better parameters to minimize the prediction error? In the following, we propose an end-to-end procedure to learn both the context clusters and the candidate subsets simultaneously to maximize performance.
Learning the clustering. Traditional clustering algorithms such as kmeans with Euclidean distance or cosine similarity have two drawbacks. First, they are discrete and non-differentiable, and thus hard to use with back-propagation in the end-to-end training process. Second, they only consider clustering on $\{h_i\}_{i=1}^{N}$, without taking the predicted label space into account. In this paper, we instead learn the partition through the Gumbel-softmax trick. We briefly summarize the technique and direct the reader to (Jang et al., 2017) for further details. In Table 4 in the appendix, we compare our proposed method to traditional spherical kmeans to show that it further improves performance.
First, we turn the deterministic clustering in Eq (2) into a stochastic process: the probability that $h$ belongs to cluster $t$ is modeled as

$$P(t \mid h) = \frac{\exp(v_t^T h)}{\sum_{j=1}^{r} \exp(v_j^T h)}, \;\; \forall t, \quad \text{and} \quad z(h) = \arg\max_t P(t \mid h). \tag{3}$$
However, since argmax is a discrete operation, we cannot combine it with the final objective function to learn better clustering weight vectors. To overcome this, we re-parameterize Eq (3) using the Gumbel trick, which provides an efficient way to draw samples $z$ from the categorical distribution defined in Eq (3):
$$m(h) = \text{one\_hot}\big(\arg\max_j \, [g_j + \log P(j \mid h)]\big), \tag{4}$$
where each $g_j$ is an i.i.d. sample drawn from Gumbel(0, 1). We then use the Gumbel softmax with temperature 1 as a continuous, differentiable approximation to argmax, and generate $r$-dimensional sample vectors $p = [p_1, \cdots, p_r]$, which is approximately the one-hot vector $m(h)$, with

$$p_t = \frac{\exp(\log P(t \mid h) + g_t)}{\sum_{j=1}^{r} \exp(\log P(j \mid h) + g_j)}, \;\; \forall t \in \{1, \ldots, r\}. \tag{5}$$
Using the Straight-Through (ST) technique proposed in (Jang et al., 2017), we denote $\hat{p} = p + (\text{one\_hot}(\arg\max_j p_j) - p)$ as the one-hot representation of $p$ and let back-propagation go only through the first term. This enables end-to-end training with the loss function defined in the following section. We also use $\hat{p}(h)$ to denote the index of the one-hot entry of $\hat{p}$ (i.e., the position of the "one" entry of $\hat{p}$).
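A PyTorch sketch of this straight-through Gumbel-softmax assignment (Eqs 3-5); the function is our own illustration, and PyTorch's built-in `torch.nn.functional.gumbel_softmax(..., hard=True)` implements the same trick:

```python
import torch

def st_gumbel_cluster(scores, tau=1.0):
    """scores: logits v_t^T h (last dim indexes the r clusters)."""
    u = torch.rand_like(scores).clamp_min(1e-20)
    g = -torch.log(-torch.log(u))                                     # Gumbel(0,1) noise
    p = torch.softmax((torch.log_softmax(scores, -1) + g) / tau, -1)  # Eq (5)
    hard = torch.zeros_like(p).scatter_(-1, p.argmax(-1, keepdim=True), 1.0)
    return hard + (p - p.detach())  # one-hot in the forward pass, gradients flow via p
```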
Learning the candidate set for each cluster. For a context vector $h_i$ that falls into partition $t$, we narrow down the search space of labels to a smaller subset. Let $c_t$ be the label vector of the $t$-th cluster; we define the following loss to penalize the mismatch between the correct predictions and the candidate set:

$$\ell(h_i, y_i) = \sum_{s: y_{is}=1} (1 - c_{ts})^2 + \lambda \sum_{s: y_{is}=0} (c_{ts})^2, \tag{6}$$
where $y_i \in \{0, 1\}^{L}$ is the 'ground truth' label vector for $h_i$, computed from the exact softmax. We set $y$ to be the label vector from the full softmax because our goal is to approximate the full softmax prediction results while achieving faster inference (the same setting as (Shim et al., 2017; Zhang et al., 2018)). The loss is designed based on the following intuition: when we narrow down the candidate set, there are two types of error. 1) When a candidate $s$ ($y_{is} = 1$) is a correct prediction but not in the candidate set ($c_{ts} = 0$), our algorithm will miss this label. 2) When a candidate $s$ ($y_{is} = 0$) is not a correct prediction but is in the candidate set ($c_{ts} = 1$), our algorithm will waste the computation of one vector product. Intuitively, 1) is much worse than 2), so we put a much smaller weight $\lambda \in (0, 1)$ on the second term.
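A sketch of the loss in Eq (6) batched over examples, assuming `assign` is the (straight-through) one-hot cluster assignment from the Gumbel step, `C` the $r \times L$ candidate-set matrix, and `Y` the 0/1 ground-truth top-k label matrix; the default $\lambda$ follows the value used in the experiments:

```python
import torch

def candidate_set_loss(assign, C, Y, lam=3e-4):
    c = assign @ C                           # selects c_{p̂(h_i)} for each example
    miss = ((1.0 - c) ** 2 * Y).sum(dim=1)   # type 1: correct words left out
    waste = (c ** 2 * (1.0 - Y)).sum(dim=1)  # type 2: wasted inner products
    return (miss + lam * waste).mean()
```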
The choice of the true candidate set in $y$ can be set according to the application. Throughout this paper, we set $y$ to be the correct top-5 predictions (i.e., the positions of the 5 largest $x_s$ in Eq (1)). $y_{is} = 1$ means $s$ is within the correct top-5 predictions of $h_i$, while $y_{is} = 0$ means it is outside the top-5 predictions.
Final objective function: We propose to learn the partition function (parameterized by $\{v_t\}_{t=1}^{r}$) and the candidate sets ($\{c_t\}_{t=1}^{r}$) simultaneously. The joint objective function is:

$$\min_{\{v_t\}, \{c_t\}} \; \sum_{i=1}^{N} \Big( \sum_{s: y_{is}=1} (1 - c_{\hat{p}(h_i),s})^2 + \lambda \sum_{s: y_{is}=0} (c_{\hat{p}(h_i),s})^2 \Big) \tag{7}$$
$$\text{s.t.} \;\; c_t \in \{0, 1\}^{L} \;\; \forall t = 1, \ldots, r, \quad \bar{L} \leq B,$$

where $N$ is the number of samples, $\bar{L}$ is the average label size defined as $\bar{L} = \big(\sum_{i=1}^{N} \sum_{s=1}^{L} c_{\hat{p}(h_i),s}\big)/N$, $\hat{p}(h_i)$ is the index $t$ for which $\hat{p}_t = 1$, and $B$ is the desired average label/candidate size across clusters, which can be thought of as a prediction time budget. Since $\bar{L}$ determines the computation time of the proposed method, enforcing $\bar{L} \leq B$ ensures that the label sets do not grow too large and that the desired speed-up rate is achieved. Note that $\hat{p}(h_i)$ is the clustering assignment, and thus a function of the clustering parameters $v_1, \cdots, v_r$ as shown in Eq (3).
Optimization. To solve the optimization problem in Eq (7), we apply alternating minimization. First, when fixing the clustering (parameters $\{v_t\}$) and updating the candidate sets (parameters $\{c_t\}$), the problem is identical to the classic "Knapsack" problem: each $c_{t,s}$ is an item, with weight proportional to the number of samples belonging to cluster $t$, and value defined by the loss function of Eq (7), and the goal is to maximize the value within the weight capacity $B$. There is no polynomial-time solution with respect to $r$, so we apply a greedy approach: we sort the items by their value-to-weight ratio and add them one by one until reaching the capacity $B$.
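A sketch of this greedy step with one simplification: the "value" of adding word $s$ to cluster $t$ is approximated by how often $s$ is a true top-k label there, whereas the paper derives values from the loss in Eq (7); `counts` and `cluster_sizes` are assumed to be precomputed on the training set.

```python
import numpy as np

def greedy_candidate_sets(counts, cluster_sizes, B):
    """counts[t, s]: #times word s is a true label in cluster t."""
    r, L = counts.shape
    frac = cluster_sizes / max(cluster_sizes.sum(), 1)  # item weight n_t / N
    ratio = counts / np.maximum(frac, 1e-12)[:, None]   # value-to-weight ratio
    c, used = np.zeros((r, L), dtype=np.int8), 0.0
    for flat in np.argsort(-ratio, axis=None):          # best items first
        t, s = np.unravel_index(flat, counts.shape)
        if counts[t, s] > 0 and used + frac[t] <= B:    # stay within budget B
            c[t, s] = 1
            used += frac[t]                             # used tracks the average size L̄
    return c
```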
When fixing $\{c_t\}$ and learning $\{v_t\}$, we move the cluster size constraint into the objective function via a Lagrange multiplier:

$$\min_{v_1, \cdots, v_r} \; \sum_{i=1}^{N} \Big( \sum_{s: y_{is}=1} (1 - c_{\hat{p}(h_i),s})^2 + \lambda \sum_{s: y_{is}=0} (c_{\hat{p}(h_i),s})^2 \Big) + \gamma \max(0, \bar{L} - B) \tag{8}$$
$$\text{s.t.} \;\; c_t \in \{0, 1\}^{L} \;\; \forall t = 1, \ldots, r,$$

and simply use SGD, since back-propagation is available after applying the Gumbel trick. To deal with $\bar{L}$ in the mini-batch setting, we replace it by a moving average, updated at each iteration as we go through a batch of samples. The overall learning algorithm is given in Algorithm 1.
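One possible mini-batch update for this subproblem, reusing the two sketches above; `V` is the trainable cluster-weight matrix, `C` the fixed 0/1 candidate matrix (as a float tensor), and `state["Lbar"]` the running moving average of $\bar{L}$ (all names and the exact bookkeeping are our own):

```python
import torch

def v_step(V, C, H, Y, opt, state, B, lam=3e-4, gamma=10.0, beta=0.9):
    assign = st_gumbel_cluster(H @ V.t())                 # batch x r assignments
    loss = candidate_set_loss(assign, C, Y, lam)          # first two terms of Eq (8)
    batch_L = (assign @ C).sum(dim=1).mean()              # label size on this batch
    Lbar = beta * state["Lbar"] + (1 - beta) * batch_L    # moving-average estimate of L̄
    loss = loss + gamma * torch.clamp(Lbar - B, min=0.0)  # budget penalty of Eq (8)
    opt.zero_grad(); loss.backward(); opt.step()
    state["Lbar"] = float(Lbar.detach())                  # carry the average forward
    return float(loss.detach())
```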
Algorithm 1: Training Process for Learning to Screen (L2S)

Input: Context vectors $\{h_i\}_{i=1}^{N}$ (e.g., from an LSTM); the trained network's softmax layer weight $W$ and bias vector $b$.
Output: Clustering parameters $v_t$ and candidate label set $c_t$ for each cluster $t = 1, \cdots, r$.
Hyperparameters: number of clusters $r$; prediction time budget $B$; regularization terms $\lambda$ and $\gamma$; number of iterations $T$.
1. Compute the ground-truth label vectors $\{y_i\}_{i=1}^{N}$ with only top-k non-zero entries. The top-k labels for each context vector $h_i$ are generated by computing and then sorting the values in $x_i = W^T h_i + b$.
2. Initialize the cluster weights $\{v_t\}_{t=1}^{r}$ using spherical kmeans over $\{h_i\}_{i=1}^{N}$.
3. Initialize the label set for each cluster $\{c_t\}_{t=1}^{r}$ to zeros.
4. For $j = 1, \cdots, T$:
   (a) Fixing $\{c_t\}_{t=1}^{r}$, learn the clustering parameters $\{v_t\}_{t=1}^{r}$ in Eq (8) by SGD with the Gumbel trick.
   (b) Fixing $\{v_t\}_{t=1}^{r}$, learn the label sets $c_t$, $t = 1, \cdots, r$, by solving the "Knapsack" problem with the greedy approach.
5. Return $c_t$, $v_t$ for all $t = 1, \cdots, r$.
EXPERIMENTS
We evaluate our method on two tasks: Language Modeling (LM) and Neural Machine Translation (NMT). For LM, we use the Penn Treebank (PTB) dataset (Marcus et al., 1993). For NMT, we use the IWSLT 2014 German-to-English translation task (Cettolo et al., 2014) and the IWSLT 2015 English-Vietnamese data (Luong & Manning, 2015). All models use a 2-layer LSTM neural network structure. For the IWSLT-14 DE-EN task, we use the PyTorch checkpoint provided by OpenNMT (Klein et al., 2017). For the IWSLT-15 EN-VE task, we set the hidden size to 200, and the rest follows the default training hyperparameters of OpenNMT. For PTB, we train 2-layer LSTM-based language models from scratch with two setups: PTB-Small and PTB-Large. The LSTM hidden state sizes are 200 for PTB-Small and 1500 for PTB-Large, as are their embedding sizes. We verified that all these models achieve benchmark performance on the corresponding datasets as reported in the literature. We then apply our method to accelerate the inference of these benchmark models.
COMPETING ALGORITHMS
We include the following algorithms in our comparisons:
• L2S (our proposed algorithm): the proposed learning-to-screen method. The number of clusters and the average label size across clusters are the main hyperparameters affecting computation time. We control the tradeoff between time and accuracy by fixing the number of clusters and varying the size constraint $B$. For all experiments we set $\lambda = 0.0003$ and $\gamma = 10$. We show later that L2S is robust to the number of clusters.
• FGD (Zhang et al., 2018): transforms the softmax inference problem into nearest neighbor search (NNS) and solves it with a graph-based NNS algorithm.
• SVD-softmax (Shim et al., 2017): a low-rank approximation approach for fast softmax computation. We vary the rank of the SVD to control the tradeoff between prediction speed and accuracy.
• Adaptive-softmax (Grave et al., 2017): a variant of hierarchical softmax mainly developed for fast training on GPUs. This algorithm can also be used to speed up prediction (as discussed in Section 2), so we include it in our comparison. The tradeoff is controlled by varying the number of frequent words in the top level of the algorithm.
• Greedy-MIPS (Yu et al., 2017): the greedy algorithm for solving the MIPS problem. The tradeoff is controlled by varying the budget parameter of the algorithm.
• PCA-MIPS (Bachrach et al., 2014): transforms MIPS into NNS and then solves NNS with a PCA-tree. The tradeoff is controlled by varying the tree depth.
• LSH-MIPS (Neyshabur & Srebro, 2015): transforms MIPS into NNS and then solves NNS with Locality Sensitive Hashing (LSH). The tradeoff is controlled by varying the number of hash functions.

We implement L2S, SVD-softmax and Adaptive-softmax in numpy. For FGD, we use the C++ library implemented in (Malkov & Yashunin, 2016b; Boytsov & Naidan, 2013) for the core NNS operations. The last three algorithms (Greedy-MIPS, PCA-MIPS and LSH-MIPS) have not previously been used to speed up softmax prediction and do not perform well on these NLP tasks, but we include them for completeness. We use the C++ code by (Yu et al., 2017) to run the experiments for these three MIPS algorithms.
Since our focus is to speed up the softmax layer, which is known to be the bottleneck of NLP tasks with large vocabularies, we only report prediction time results for the softmax layer in all experiments. To compare under the same amount of hardware resources, all experiments were conducted on an Intel Xeon E5-2620 CPU using a single thread.
PERFORMANCE COMPARISONS
To measure the quality of the top-k approximate softmax, we compute Precision@k (P@k), defined as $|A_k \cap S_k|/k$, where $A_k$ is the top-k candidate set computed by the approximate algorithm and $S_k$ is the top-k candidate set computed by the exact softmax. We present results for $k = 1, 5$. This measures the accuracy of next-word prediction in LM and NMT. To measure the speed of each algorithm, we report the speedup, defined as the ratio of the wall clock time of the exact softmax to find the top-k words to the wall clock time of the approximate algorithm.
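In code form, the metric is simply (a small illustrative helper):

```python
def precision_at_k(approx_topk, exact_topk, k=5):
    # |A_k ∩ S_k| / k over the two candidate lists
    return len(set(approx_topk[:k]) & set(exact_topk[:k])) / k
```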
For each algorithm, we plot prediction accuracy versus speedup over the exact softmax in Figures 2, 3, 4, 5, 6 and 7 (the last three are in the appendix). We do not show results for PCA-MIPS and LSH-MIPS in the figures, as their curves fall outside the plotted range. Representative results are reported in Table 1. These results indicate that the proposed algorithm significantly outperforms all previous algorithms for predicting top-k words/tokens on both language modeling (next-word prediction) and neural machine translation.
Next, we measure the BLEU score on the NMT tasks when incorporating the proposed algorithm with beam search. We consider the common settings with beam size 1 or 5, and report the wall clock time of each algorithm excluding the LSTM part. We only calculate log-softmax values on the reduced search space and set the probability of vocabulary items outside the reduced search space to 0. Based on the precision comparison, since FGD shows better performance than the other competing methods in Table 1, we only compare our method with the state-of-the-art algorithm FGD in Table 2 in terms of BLEU score. Our method achieves more than 13 times speedup with a BLEU loss of only 0.14 on the DE-EN task with beam size 5. Similarly, our method achieves 20 times speedup on the EN-VE task with a BLEU loss of only 0.08. In comparison, FGD only achieves a 3-6 times speedup over the exact softmax to reach a similar BLEU score. We also compare our algorithm with other methods using perplexity as a metric on PTB-Small and PTB-Large, as shown in Table 5 in the appendix. We observe more than 5 times speedup over the full softmax without losing much perplexity (less than 5% difference). More details can be found in the appendix.
In addition, we show some qualitative results of our proposed method on the DE-EN translation task in Table 6, to demonstrate that our algorithm provides similar translation results with faster inference time.
SELECTION OF THE NUMBER OF CLUSTERS
Finally, we show the performance of our method with different numbers of clusters in Table 3. When varying the number of clusters, we also vary the time budget $B$ so that the prediction time, including finding the correct cluster and computing the softmax in the candidate set, is similar. The results indicate that our method is quite robust to the number of clusters. Therefore, in practice we suggest simply choosing the number of clusters to be 100 or 200 and tuning the "time budget" in our loss function to get the desired speed-accuracy tradeoff.
CONCLUSION
In this paper, we proposed a new algorithm for fast softmax inference on large-vocabulary neural language models. The main idea is to use a light-weight screening model to predict a smaller subset of candidates, and then conduct an exact search within that subset. By forming a joint optimization problem, we are able to learn the screening network end-to-end using the Gumbel trick. In our experiments, we show that the proposed algorithm achieves much better inference speedup than state-of-the-art algorithms for language modeling and machine translation tasks.
ACKNOWLEDGEMENT
We are grateful to Ciprian Chelba for the fruitful comments, corrections and inspiration. CJH acknowledges the support of NSF via IIS-1719097, Intel faculty award, Google Cloud and Nvidia.
COMPARISON TO SPHERICAL-KMEANS INITIALIZATION
Since we initialize the parameters of our method with spherical kmeans, we also show in Table 4 that L2S can further improve over this baseline clustering method. Notice that even basic spherical kmeans can outperform state-of-the-art methods. This shows that the clustering structure of context features is a key to fast prediction.
PERPLEXITY RESULTS
Finally, we go beyond top-k prediction and apply our algorithm to speed up the perplexity computation for language models. To compute perplexity, we need the probability of each token appearing in the dataset, which may not be within the top-k softmax predictions. In order to apply a top-k approximate softmax algorithm to this task, we adopt the low-rank approximation idea proposed in (Shim et al., 2017). For tokens within the candidate set, we compute the logits using exact inner products, while for tokens outside the set we approximate the logits by $\tilde{W}h$, where $\tilde{W}$ is a low-rank approximation of the original weight matrix in the softmax layer. The probabilities can then be computed using these logits. For all algorithms, we set the rank of $\tilde{W}$ to 20 for PTB-Small and 200 for PTB-Large. The results are presented in Table 5. We observe that our method outperforms previous fast softmax approximation methods for computing perplexity on both PTB-Small and PTB-Large language models.
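A sketch of this hybrid logit computation (exact inner products inside the candidate set, low-rank elsewhere); in practice the SVD would be precomputed once rather than inside the function, and the rank follows the values quoted above:

```python
import numpy as np

def hybrid_logits(W, b, h, cand_ids, rank=20):
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    W_lr = (U[:, :rank] * S[:rank]) @ Vt[:rank]            # low-rank approximation W̃
    logits = W_lr.T @ h + b                                # cheap logits everywhere
    logits[cand_ids] = W[:, cand_ids].T @ h + b[cand_ids]  # exact inside C(h)
    return logits
```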
QUALITATIVE RESULTS
Table 6: Qualitative comparison of our method to full softmax computation. The accelerated model used is the same as reported in Table 2.

Full softmax: you know , one of the great <unk> at travel and one of the pleasures at the <unk> research is to live with the people who remember the old days , who still feel their past in the wind , touch them on the rain of <unk> rocks , taste them in the bitter sheets of plants .
Our method: you know , one of the great <unk> at travel and one of the joy of the <unk> research is to live together with the people who remember the old days , who still feel their past in the wind , touch them on the rain of <unk> rocks , taste them in the bitter sheets of plants .

Full softmax: it s the symbol of all that we are , and what were capable of as astonishingly <unk> species .
Our method: its the symbol of all of what we are , and what were capable of as astonishingly <unk> species .

Full softmax: when any of you were born in this room , there were 6,000 languages talking on earth .
Our method: when everybody was born in this room , there were 6,000 languages spoken on earth .

Full softmax: a continent is always going to leave out , because the idea was that in sub-saharan africa there was no religious faith , and of course there was a <unk> , and <unk> is just the remains of these very profound religious thoughts that <unk> in the tragic diaspora of the <unk> .
Our method: a continent is always going to leave out , because the presumption was that in sub-saharan africa there was no religious faith , and of course there was a <unk> , and <unk> is just the cheapest of these very profound religious thoughts that <unk> in the tragic diaspora of <unk> <unk> .

Full softmax: so , the fact is that , in the 20th century , in 300 years , it is not going to be remembered for its wars or technological innovation , but rather than an era where we were present , and the massive destruction of biological and cultural diversity on earth either on earth is either active or <unk> . so the problem is not the change .
Our method: so , the fact is that , in the 20th century , in 300 years , it is not going to be remembered for its wars or technological innovation , but rather than an era where we were present , and the massive destruction of biological and cultural diversity on earth either on earth is either <unk> or passive . so the problem is not the change .

Full softmax: and in this song , we're going to be able to connect the possibility of what we are : people with full consciousness , who are aware of the importance that all people and gardens have to thrive , and there are great moments of optimism .
Our method: and in this song , we're going to be able to rediscover the possibility of what we are : people with full consciousness that the importance of the importance of being able to thrive is to be able to thrive , and there are great moments of optimism .
Figure 1: Illustration of the proposed algorithm.

Figure 2: Precision@1 versus speed-up rate of the PTB-Large setup.

Figure 3: Precision@1 versus speed-up rate of the PTB-Small setup.

Figure 4: Precision@1 versus speed-up rate of the NMT: DE-EN setup.

Figure 5: Precision@5 versus speed-up rate of the PTB-Large setup.

Figure 6: Precision@5 versus speed-up rate of the PTB-Small setup.

Figure 7: Precision@5 versus speed-up rate of the NMT: DE-EN setup.
Table 1: Comparison of softmax prediction results on three datasets. Speedup is relative to the original softmax time; for example, 10x means the method's prediction time is 10 times faster than the original softmax layer prediction time. Computation of the full softmax per step is 4.32 ms for PTB-Large, 0.32 ms for PTB-Small and 4.83 ms for NMT: DE-EN.

                     PTB-Small               PTB-Large               NMT: DE-EN
                     Speedup  P@1    P@5     Speedup  P@1    P@5     Speedup  P@1    P@5
L2S (Our Method)     10.6x    0.998  0.990   45.3x    0.996  0.982   20.4x    0.989  0.993
FGD                  1.3x     0.980  0.989   6.9x     0.975  0.979   6.7x     0.987  0.981
SVD-softmax          0.8x     0.987  0.99    2.3x     0.988  0.981   3.4x     0.98   0.985
Adaptive-softmax     1.9x     0.972  0.981   4.2x     0.974  0.937   3.2x     0.982  0.984
Greedy-MIPS          0.5x     0.998  0.972   1.8x     0.945  0.903   2.6x     0.911  0.887
PCA-MIPS             0.14x    0.322  0.341   0.5x     0.361  0.326   1.3x     0.379  0.320
LSH-MIPS             1.3x     0.165  0.33    2.2x     0.353  0.31    1.6x     0.131  0.137
Table 2: Comparison of BLEU score versus prediction time on the DE-EN and EN-VE tasks. Speedup is relative to the original softmax time.

Model                 Metric        Original  FGD    Our method
NMT: DE-EN, Beam=1    Speedup Rate  1x        2.7x   14.0x
                      BLEU          29.50     29.43  29.46
NMT: DE-EN, Beam=5    Speedup Rate  1x        2.9x   13.4x
                      BLEU          30.33     30.13  30.19
NMT: EN-VE, Beam=1    Speedup Rate  1x        6.4x   12.4x
                      BLEU          24.58     24.28  24.38
NMT: EN-VE, Beam=5    Speedup Rate  1x        4.6x   20x
                      BLEU          25.35     25.26  25.27
Table 3: L2S with different numbers of clusters.

Number of Clusters   50     100    200    250
Time in ms           0.12   0.17   0.14   0.12
P@1                  0.997  0.998  0.998  0.994
P@5                  0.988  0.99   0.99   0.98
Table 4: Comparison of L2S to spherical-KMeans clustering.

                     PTB-Small               PTB-Large               NMT: DE-EN
                     Speedup  P@1    P@5     Speedup  P@1    P@5     Speedup  P@1    P@5
Our Method           10.6x    0.998  0.990   45.3x    0.999  0.82    20.4x    0.989  0.993
Spherical-kmeans     4x       0.988  0.992   6.9x     0.992  0.971   13.8x    0.991  0.993
FGD                  1.3x     0.980  0.989   6.9x     0.975  0.979   6.7x     0.987  0.981
Table 5: Comparison of perplexity versus prediction time on the PTB dataset.

Model       Metric        Original  SVD-softmax  Adaptive-softmax  FGD     Our method
PTB-Small   Speedup Rate  1x        0.84x        1.69x             0.95x   5.69x
            PPL           112.28    116.64       121.43            116.49  115.91
PTB-Large   Speedup Rate  1x        0.61x        1.76x             2.27x   8.11x
            PPL           78.32     80.30        82.59             80.47   80.09
Yoram Bachrach, Yehuda Finkelstein, Ran Gilad-Bachrach, Liran Katzir, Noam Koenigstein, Nir Nice, and Ulrich Paquet. Speeding up the Xbox recommender system using a Euclidean transformation for inner-product spaces. In Proceedings of the 8th ACM Conference on Recommender Systems, pp. 257-264. ACM, 2014.

Leonid Boytsov and Bilegsaikhan Naidan. Engineering efficient and effective non-metric space library. In Similarity Search and Applications - 6th International Conference, SISAP 2013, A Coruña, Spain, October 2-4, 2013, Proceedings, pp. 280-293, 2013.

Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. Report on the 11th IWSLT evaluation campaign, IWSLT 2014. In Proceedings of the International Workshop on Spoken Language Translation, Hanoi, Vietnam, 2014.

Edouard Grave, Armand Joulin, Moustapha Cissé, David Grangier, and Hervé Jégou. Efficient softmax approximation for GPUs. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, pp. 1302-1310, 2017.

Ruiqi Guo, Sanjiv Kumar, Krzysztof Choromanski, and David Simcha. Quantization based fast inner product search. In Artificial Intelligence and Statistics, pp. 482-490, 2016.

Piotr Indyk and Rajeev Motwani. Approximate nearest neighbors: towards removing the curse of dimensionality. In Proceedings of the Thirtieth Annual ACM Symposium on Theory of Computing, pp. 604-613. ACM, 1998.

Eric Jang, Shixiang Gu, and Ben Poole. Categorical reparametrization with Gumbel-softmax. In International Conference on Learning Representations 2017. OpenReviews.net, 2017.

Sébastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. On using very large target vocabulary for neural machine translation. arXiv preprint arXiv:1412.2007, 2014.

Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. OpenNMT: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810, 2017.

Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jianfeng Gao, and Dan Jurafsky. Deep reinforcement learning for dialogue generation. arXiv preprint arXiv:1606.01541, 2016.

Minh-Thang Luong and Christopher D. Manning. Stanford neural machine translation systems for spoken language domain. In International Workshop on Spoken Language Translation, Da Nang, Vietnam, 2015.

Yu A. Malkov and Dmitry A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. arXiv preprint arXiv:1603.09320, 2016a.

Yury Malkov, Alexander Ponomarenko, Andrey Logvinov, and Vladimir Krylov. Approximate nearest neighbor algorithm based on navigable small world graphs. Information Systems, 45:61-68, 2014.

Yury A. Malkov and D. A. Yashunin. Efficient and robust approximate nearest neighbor search using hierarchical navigable small world graphs. CoRR, abs/1603.09320, 2016b.

Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. Building a large annotated corpus of English: the Penn Treebank. Computational Linguistics, 19(2):313-330, 1993.

Andriy Mnih and Yee Whye Teh. A fast and simple algorithm for training neural probabilistic language models. In ICML, 2012.

Frederic Morin and Yoshua Bengio. Hierarchical probabilistic neural network language model. In AISTATS, volume 5, pp. 246-252, 2005.

Behnam Neyshabur and Nathan Srebro. On symmetric and asymmetric LSHs for inner product search. In ICML, 2015.

Alexander M. Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 379-389, 2015.

Kyuhong Shim, Minjae Lee, Iksoo Choi, Yoonho Boo, and Wonyong Sung. SVD-softmax: Fast softmax approximation on large vocabulary neural networks. In Advances in Neural Information Processing Systems 30, pp. 5463-5473, 2017.

Anshumali Shrivastava and Ping Li. Asymmetric LSH (ALSH) for sublinear time maximum inner product search (MIPS). In Advances in Neural Information Processing Systems, pp. 2321-2329, 2014.

Robert F. Sproull. Refinements to nearest-neighbor searching in k-dimensional trees. Algorithmica, 6(1-6):579-589, 1991.

Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.

Xiang Wu, Ruiqi Guo, Ananda Theertha Suresh, Sanjiv Kumar, Daniel N. Holtmann-Rice, David Simcha, and Felix Yu. Multiscale quantization for fast similarity search. In NIPS, pp. 5745-5755, 2017.

Hsiang-Fu Yu, Cho-Jui Hsieh, Qi Lei, and Inderjit Dhillon. A greedy approach for budgeted maximum inner product search. In NIPS, 2017.

Minjia Zhang, Xiaodong Liu, Wenhan Wang, Jianfeng Gao, and Yuxiong He. Navigating with graph representations for fast and scalable decoding of neural language models. In NIPS, 2018.
| [] |
[
"Dependent Gated Reading for Cloze-Style Question Answering",
"Dependent Gated Reading for Cloze-Style Question Answering"
] | [
"Reza Ghaeini ghaeinim@eecs.oregonstate.edu \nOregon State University\nCorvallisORUSA\n",
"Xiaoli Z Fern xfern@eecs.oregonstate.edu \nOregon State University\nCorvallisORUSA\n",
"Hamed Shahbazi shahbazh@eecs.oregonstate.edu \nOregon State University\nCorvallisORUSA\n",
"Prasad Tadepalli tadepall@eecs.oregonstate.edu \nOregon State University\nCorvallisORUSA\n"
] | [
"Oregon State University\nCorvallisORUSA",
"Oregon State University\nCorvallisORUSA",
"Oregon State University\nCorvallisORUSA",
"Oregon State University\nCorvallisORUSA"
] | [
"Proceedings of the 27th International Conference on Computational Linguistics"
] | We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who Did What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies. | null | [
"https://www.aclweb.org/anthology/C18-1282.pdf"
] | 44,123,113 | 1805.10528 | f108b9bf4601a9898821a8cd06007e0682d5bf74 |
Dependent Gated Reading for Cloze-Style Question Answering

Reza Ghaeini ghaeinim@eecs.oregonstate.edu
Oregon State University, Corvallis, OR, USA

Xiaoli Z. Fern xfern@eecs.oregonstate.edu
Oregon State University, Corvallis, OR, USA

Hamed Shahbazi shahbazh@eecs.oregonstate.edu
Oregon State University, Corvallis, OR, USA

Prasad Tadepalli tadepall@eecs.oregonstate.edu
Oregon State University, Corvallis, OR, USA

Proceedings of the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA, August 20-26, 2018, page 3330.
We present a novel deep learning architecture to address the cloze-style question answering task. Existing approaches employ reading mechanisms that do not fully exploit the interdependency between the document and the query. In this paper, we propose a novel dependent gated reading bidirectional GRU network (DGR) to efficiently model the relationship between the document and the query during encoding and decision making. Our evaluation shows that DGR obtains highly competitive performance on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE and CBT-CN) and Who Did What (WDW, Strict and Relaxed). Finally, we extensively analyze and validate our model by ablation and attention studies.
Introduction
Human language comprehension is an important and challenging task for machines that requires semantic understanding and reasoning over clues. The goal of this general task is to read and comprehend the given document and answer queries.
Recently, the cloze-style reading comprehension problem has received increasing attention from the NLP community. A cloze-style query (Taylor, 1953) is a short passage of text containing a blank part, which we must fill with an appropriate token based on the reading and understanding of a related document. The recent introduction of several large-scale datasets of cloze-style question answering made it feasible to train deep learning systems for such tasks (Onishi et al., 2016; Hill et al., 2015; Hermann et al., 2015). Various deep learning models have been proposed and achieved reasonable results for this task (Yang et al., 2017; Dhingra et al., 2017; Munkhdalai and Yu, 2017; Cui et al., 2017; Trischler et al., 2016; Kadlec et al., 2016; Cui et al., 2016; Sordoni et al., 2016). The success of recent models is mostly due to two factors: 1) Attention mechanisms, which allow the model to sharpen its understanding and focus on important and appropriate subparts of the given context; 2) Multi-hop architectures, which read the document and/or the query in multiple passes, allowing the model to re-consider and refocus its understanding in later iterations. Intuitively, both attention mechanisms and multi-hop reading fulfill the necessity of considering the dependency aspects of the given document and the query. Such a consideration enables the model to pay attention to the relevant information and ignore the irrelevant details. Human language comprehension is often performed by jointly reading the document and query to leverage their dependencies, stay focused while reading, and avoid losing relevant contextual information. Current state-of-the-art models also attempt to capture this by using the reading of the query to guide the reading of the document (Yang et al., 2017; Dhingra et al., 2017), or using the memory of the document to help interpret the query (Munkhdalai and Yu, 2017). However, these systems only consider uni-directional dependencies. Our primary hypothesis is that we can gain further improvements by considering bidirectional dependencies.
In this paper, we present a novel multi-hop neural network architecture, called Dependent Gated Reading (DGR), which addresses the aforementioned gap and performs dependent reading in both directions. Our model begins with an initial reading step that encodes the given query and document, followed by an iterative reading module (multi-hop) that employs soft attention to extract the most relevant information from the document and query encodings to augment each other's representation, which is then passed on to the next iteration of reading. Finally, the model performs a final round of attention allocation and aggregation to rank all possible candidates and make a prediction. We evaluate our model on well-known machine comprehension benchmarks such as the Children's Book Test (CBT-NE & CBT-CN), and Who Did What (WDW, Strict & Relaxed). Our experimental results indicate the effectiveness of DGR, which achieves state-of-the-art results on CBT-NE, WDW-Strict, and WDW-Relaxed. In summary, our contributions are as follows: 1) we propose a new deep learning architecture to address the existing gap of reading dependencies between the document and the query. The proposed model outperforms the state of the art on CBT-NE, WDW-Strict, and WDW-Relaxed by 0.5%, 0.8%, and 0.3% respectively; 2) we perform an ablation study and analysis to clarify the strengths and weaknesses of our model while enriching our understanding of the language comprehension task.
Related Work
The availability of large-scale datasets (Onishi et al., 2016; Hill et al., 2015; Hermann et al., 2015) has enabled researchers to develop various deep learning-based architectures for language comprehension tasks such as cloze-style question answering. Sordoni et al. (2016) propose an Iterative Alternating Attention (IAA) reader. IAA is a multi-hop comprehension model which uses a GRU network to search for correct answers in the given document. IAA is the first model that does not collapse the query into a single vector. It deploys an iterative alternating attention mechanism that collects evidence from both the document and the query. Kadlec et al. (2016) introduce a single-hop model called Attention Sum Reader (AS Reader) that uses two bi-directional GRUs (BiGRUs) to independently encode the query and the document. It then computes a probability distribution over all document tokens by taking the softmax of the dot product between the query and the token representations. Finally, it introduces a pointer-sum attention aggregation mechanism to aggregate the probability of multiple appearances of the same candidate. The candidate with the highest probability is considered the answer. Cui et al. (2017) introduce a similar single-hop model called attention-over-attention (AOA) reader, which uses a two-way attention mechanism to allow the query and document to mutually attend to one another. Trischler et al. (2016) introduce EpiReader, which uses the AS Reader to first narrow down the candidates, then replaces the query placeholder with each candidate to yield a different query statement, and estimates the entailment between the document and the different query statements to predict the answer. Munkhdalai and Yu (2017) (NSE) propose a computational hypothesis testing framework based on memory augmented neural networks. They encode the document and query independently at the beginning and then re-encode the query (but not the document) over multiple iterations (hops). At the end of each iteration, they predict an answer. The final answer is the candidate that obtains the highest probability over all iterations. Dhingra et al. (2017) extend the AS Reader by proposing the Gated Attention Reader (GA Reader). The GA Reader uses a multi-hop architecture to compute the representations of the document and query. In each iteration the query is encoded independently of the document and of previous iterations, but the document is encoded iteratively, considering the previous iteration as well as an attention mechanism with multiplicative gating to generate query-specific document representations. The GA Reader uses the same mechanism for making the final predictions as the AS Reader. Yang et al. (2017) further extend the GA Reader with a fine-grained gating approach that uses external semantic and syntactic features (i.e. NER, POS, etc.) of the tokens to combine the word- and character-level embeddings and produce a final representation of the words.
Among the aforementioned models, the GA Reader is the closest to our model in that we use a similar architecture that is multi-hop and performs iterative reading. The main distinction between our model and the GA Reader is the reading and encoding of the query. Instead of performing independent reading of the query in each iteration, our reading and encoding of the query depends not only on the document but also on the reading of previous iterations.
Although the cloze-style question answering task is well studied in the literature, the potential of dependent reading and interaction between the document and the query has not been rigorously explored. In this paper, we address this gap by proposing a novel deep learning model (DGR). Experimental results demonstrate the effectiveness of our model. Figure 1 depicts a high-level view of our proposed Dependent Gated Reading (DGR) model, which follows a fairly standard multi-hop architecture, simulating the multi-step reading and comprehension process of humans. In the figure, the data (document d and query q, depicted with red and cyan tensors respectively) flows from left to right. At the first (input) layer, the word representations are shown with black solid borders while the character representations are shown with colored dashed borders. The figure is color coded; relevant tensors and elements are shown with the same color. Note that none of the elements share parameters. The purple matrices extract relevant information between document and query representations. The black arrows between the query BiGRUs (the yellow ones) pass the final hidden state of a BiGRU to another one as the initialization value for its hidden state.
Dependent Gated Reading
The input to our model at the training stage can be represented as a tuple $(D, Q, C, a)$, where $D = [d_1, \cdots, d_n]$ is the document of length $n$, $Q = [q_1, \cdots, q_m]$ is the query of length $m$ with a placeholder, $C = [c_1, \cdots, c_g]$ is a set of $g$ candidates, and $a \in C$ is the ground truth answer. Here we assume $d_i$, $q_j$ are some form of embedding of the individual tokens of the document and query. At the testing stage, given the input document $D$, query $Q$ and candidate set $C$, the goal is to choose the correct candidate $a$ among $C$ for the placeholder in $Q$.
DGR can be divided into two major parts: Multi-hop Reading, and Ranking & Prediction.
Multi-hop Reading of Document and Query
Recurrent networks provide a natural solution for modeling variable length sequences. Consequently, we use bi-directional Gated Recurrent Units (BiGRUs) as the main building blocks for encoding the given document and query. For the initial step of our multi-hop reading, the document $D$ and the query $Q$ are read with two separate BiGRUs (Equations 1 and 2), where $\bar{d}^0 \in \mathbb{R}^{n \times r}$ and $\bar{q}^0 \in \mathbb{R}^{m \times r}$ are the first BiGRU reading sequences of $D$ and $Q$ respectively. $h^0$ consists of two parts, $h^0_f$ and $h^0_b$, which record the final outputs of the forward and backward GRU readings of $Q$ respectively. Note that "$-$" in the equations means that we do not care about the associated variable and its value.
$$\bar{d}^0, - = \mathrm{BiGRU}_{d0}(D, 0) \quad (1)$$
$$\bar{q}^0, h^0 = \mathrm{BiGRU}_{q0}(Q, 0) \quad (2)$$
We use $s \in [0, S]$ to denote the reading iteration, with $S + 1$ total iterations. For the initial iteration ($s = 0$), both BiGRUs are fed with a zero vector for the initial hidden state, as shown in Equations 1 and 2. Once the document and query encodings ($\bar{d}^s$ and $\bar{q}^s$ respectively) are computed, we employ a soft alignment method to associate the relevant sub-components between the given document and query. In deep learning models, this is often achieved with a soft attention mechanism. We follow the same soft attention mechanism as used in the GA reader (Dhingra et al., 2017), which is described below for completeness.
Given $\bar{d}^s$ and $\bar{q}^s$, we first compute the unnormalized attention weights between the $i$-th token of the document and the $j$-th token of the query as the similarity between the corresponding hidden states with Equation 3 (energy function).
$$e^s_{ij} = (\bar{d}^s_i)^T \bar{q}^s_j, \quad \forall i \in [1, n], \forall j \in [1, m], \forall s \in [0, S-1] \quad (3)$$
For each document token and query token, the most relevant semantics from the other context are extracted and composed based on $e^s \in \mathbb{R}^{n \times m}$. Equations 4 and 5 provide the specific details of this procedure, where $\tilde{d}^s_i \in \mathbb{R}^r$ represents the information extracted from the current reading of the query, $\bar{q}^s$, that is most relevant to the $i$-th document token, obtained by attending with $\bar{d}^s_i$. Similarly, $\tilde{q}^s_j \in \mathbb{R}^r$ represents, for the $j$-th query token, the relevant document information extracted from $\bar{d}^s$ by attending with $\bar{q}^s_j$.
$$\tilde{d}^s_i = \sum_{j=1}^{m} \frac{\exp(e^s_{ij})}{\sum_{k=1}^{m} \exp(e^s_{ik})} \bar{q}^s_j, \quad \forall i \in [1, n], \forall s \in [0, S-1] \quad (4)$$
$$\tilde{q}^s_j = \sum_{i=1}^{n} \frac{\exp(e^s_{ij})}{\sum_{k=1}^{n} \exp(e^s_{kj})} \bar{d}^s_i, \quad \forall j \in [1, m], \forall s \in [0, S-1] \quad (5)$$
To incorporate the context information, we use the element-wise product of the tuples $(\bar{d}^s_i, \tilde{d}^s_i)$ or $(\bar{q}^s_j, \tilde{q}^s_j)$ to produce a new representation of the hidden states for the document and the query, as described in Equations 6 and 7.
$$u^s_i = \bar{d}^s_i \odot \tilde{d}^s_i, \quad \forall s \in [0, S-1] \quad (6)$$
$$v^s_j = \bar{q}^s_j \odot \tilde{q}^s_j, \quad \forall s \in [0, S-1] \quad (7)$$
Here $\odot$ stands for the element-wise product, and $u^s_i \in \mathbb{R}^r$ and $v^s_j \in \mathbb{R}^r$ are the new encodings of the document and query respectively. Note that the GA reader uses the same mechanism to update the document encoding but does not change the query representation according to the document.
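To make Equations 3-7 concrete, the following is a minimal NumPy sketch of one attention-and-gating step. The function name, shapes, and the use of raw (unmasked) softmaxes are our own illustrative assumptions, not the authors' released code.

```python
import numpy as np

def attend_and_gate(d_bar, q_bar):
    """One soft-attention + gating step (Equations 3-7).

    d_bar: document encodings, shape (n, r)
    q_bar: query encodings,    shape (m, r)
    Returns the gated representations u (n, r) and v (m, r).
    """
    # Equation 3: pairwise energies between document and query tokens.
    e = d_bar @ q_bar.T                                        # (n, m)

    # Equation 4: for each document token, softmax over query tokens.
    alpha = np.exp(e) / np.exp(e).sum(axis=1, keepdims=True)   # (n, m)
    d_tilde = alpha @ q_bar                                    # (n, r)

    # Equation 5: for each query token, softmax over document tokens.
    beta = np.exp(e) / np.exp(e).sum(axis=0, keepdims=True)    # (n, m)
    q_tilde = beta.T @ d_bar                                   # (m, r)

    # Equations 6-7: element-wise (multiplicative) gating.
    return d_bar * d_tilde, q_bar * q_tilde

# Toy usage with random encodings (n=5 document tokens, m=3 query tokens).
u, v = attend_and_gate(np.random.randn(5, 4), np.random.randn(3, 4))
```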
We then pass the new document ($u^s$) and query ($v^s$) embeddings to the BiGRUs for the next iteration $s + 1$. Note that for query reading, we feed $h^s$, the final hidden state of the previous reading (without document based updates), to the BiGRU of the next iteration as the initial hidden state. Intuitively, $h^s$ provides a summary understanding of the query from the previous iteration, without the document modulated updates. By considering both $h^s$ and $v^s$, this encoding mechanism provides a richer representation of the query. This is formally described by Equations 8 and 9.
d s+1 , − = BiGRU ds (u s , 0), ∀s ∈ [0, S − 1] (8) q s+1 , h s+1 = BiGRU qs (v s , h s ), ∀s ∈ [0, S − 1](9)
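Putting Equations 1, 2, 8, and 9 together, a rough PyTorch sketch of the multi-hop reading loop could look as follows. The class name, layer sizes, and the inlined attention/gating step are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiHopReader(nn.Module):
    def __init__(self, emb_dim=100, hidden=128, hops=3):
        super().__init__()
        # One BiGRU pair per reading iteration; inputs after the first
        # iteration are the gated 2*hidden-dimensional representations.
        self.doc_grus = nn.ModuleList(
            [nn.GRU(emb_dim if s == 0 else 2 * hidden, hidden,
                    batch_first=True, bidirectional=True) for s in range(hops)])
        self.qry_grus = nn.ModuleList(
            [nn.GRU(emb_dim if s == 0 else 2 * hidden, hidden,
                    batch_first=True, bidirectional=True) for s in range(hops)])

    def forward(self, D, Q):                     # (B, n, emb), (B, m, emb)
        d, q, h = D, Q, None
        for s, (gru_d, gru_q) in enumerate(zip(self.doc_grus, self.qry_grus)):
            d, _ = gru_d(d)                      # Equations 1 / 8
            q, h = gru_q(q, h)                   # Equations 2 / 9: h carries over
            if s < len(self.doc_grus) - 1:
                # Equations 3-7: mutual attention and element-wise gating.
                e = torch.bmm(d, q.transpose(1, 2))               # (B, n, m)
                d_t = torch.bmm(torch.softmax(e, dim=2), q)       # Eq. 4
                q_t = torch.bmm(torch.softmax(e, dim=1).transpose(1, 2), d)  # Eq. 5
                d, q = d * d_t, q * q_t                           # Eqs. 6-7
        return d, q                              # final encodings for ranking
```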
We should note that the following configuration variations did not yield any improvement for our model: 1) other choices of gating aggregation strategy (Equations 6 and 7), such as addition, concatenation, or applying a transformation function to different subsets of {element-wise product, concatenation, difference}; 2) residual connections.
Ranking & Prediction
Given the final document and query encodings, $\bar{d}^S$ and $\bar{q}^S$, the final stage of our model computes a score for each candidate $c \in C$. This part of our model uses the same pointer-sum attention aggregation operation as introduced by the Attention Sum (AS) reader (Kadlec et al., 2016), which is also used by the GA reader (Dhingra et al., 2017).
Let $idx$ be the position of the placeholder in $Q$, and $\bar{q}^S_{idx}$ be the associated hidden embedding of the placeholder in the given query. We first compute the probability of each token in the document being the desired answer by computing the dot product between $\bar{q}^S_{idx}$ and $\bar{d}^S_j$ for $j = 1, \ldots, n$, and then normalize with the softmax function:
$$y = \mathrm{softmax}\big((\bar{q}^S_{idx})^T \bar{d}^S\big) \quad (10)$$
where $y \in \mathbb{R}^n$ gives us a normalized attention/probability over all tokens of the document. Next, the probability of each particular candidate $c \in C$ being the answer is computed by aggregating the document-level attentions of all positions in which $c$ appears:
$$p(c|D, Q) \propto \sum_{i \in I(c, D)} y_i, \quad \forall c \in C \quad (11)$$
where $I(c, D)$ indicates the positions at which candidate $c$ appears in the document $D$ (Candidate Occurrences in Figure 1). Finally, the prediction is given by $a^* = \mathrm{argmax}_{c \in C}\, p(c|D, Q)$.
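The following is a minimal NumPy sketch of this pointer-sum prediction step (Equations 10-11); the function name and input layout are our own assumptions.

```python
import numpy as np
from collections import defaultdict

def pointer_sum_predict(d_S, q_S, idx, doc_tokens, candidates):
    """Score candidates by summing attention mass over their occurrences."""
    # Equation 10: attention over document tokens w.r.t. the placeholder.
    scores = d_S @ q_S[idx]                 # (n,)
    y = np.exp(scores - scores.max())
    y /= y.sum()

    # Equation 11: aggregate probability over each candidate's positions.
    p = defaultdict(float)
    for i, tok in enumerate(doc_tokens):
        if tok in candidates:
            p[tok] += y[i]
    return max(p, key=p.get)                # a* = argmax_c p(c|D, Q)

# Toy usage: 6 document tokens, 3 query tokens, placeholder at position 1.
answer = pointer_sum_predict(np.random.randn(6, 4), np.random.randn(3, 4),
                             idx=1, doc_tokens=list("abcabc"),
                             candidates={"a", "b"})
```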
Key differences from the GA reader. Given the strong similarity between our model and the GA reader, it is worth highlighting the three key differences between the two models: (a) Document gated query reading: we compute a document-specific query representation to pass to the next query reading step; (b) Dependent query reading: in each iteration, the input to the query BiGRU comes from the document gated encoding of the query from the last iteration, whereas the GA Reader reads the queries independently in all iterations; (c) Dependent query BiGRU initialization: the query BiGRU is initialized with the final hidden states of the query BiGRU from the previous iteration. These key differences in query encoding are designed to better capture the interdependencies between query and document, produce richer and more relevant representations of the query, and enhance the comprehension and query answering performance.
Further Enhancements
Following the practice of the GA reader, we include several enhancements that have been shown to be helpful in previous work.
Question Evidence Common Word Feature. To generate the final document encoding $\bar{d}^S$, an additional modification of $u^{S-1}$ is introduced before applying Equation 8. Specifically, an additional Question Evidence Common Word Feature (qe-comm) (Li et al., 2016) is introduced for each document token, indicating whether the token is present in the query. Assume $f_i$ stands for the qe-comm feature of the $i$-th document token; therefore,
$$u^{S-1}_i = [u^{S-1}_i, f_i].$$
Character-level embeddings. Word-level embeddings are good at representing the semantics of the tokens but suffer from out-of-vocabulary (OOV) words and are incapable of representing sub-word morphologies. Character-level embeddings effectively address such limitations (Ling et al., 2015; Dhingra et al., 2016). In this work, we represent a token by concatenating its word embedding and character embedding. To compute the character embedding of a token $w = [x_1, \cdots, x_l]$, we pass $w$ to two GRUs in the forward and backward directions respectively. Their outputs are then concatenated and passed through a linear transformation to form the character embedding of the token.
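A small PyTorch sketch of such a character encoder is shown below; the vocabulary size and dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class CharEncoder(nn.Module):
    """Character-level token embedding: run GRUs over the characters in both
    directions, concatenate their final states, then apply a linear layer."""
    def __init__(self, n_chars=128, char_dim=25, hidden=50, out_dim=50):
        super().__init__()
        self.emb = nn.Embedding(n_chars, char_dim)
        self.gru = nn.GRU(char_dim, hidden, batch_first=True,
                          bidirectional=True)
        self.proj = nn.Linear(2 * hidden, out_dim)

    def forward(self, char_ids):              # (batch, max_token_len)
        x = self.emb(char_ids)
        _, h = self.gru(x)                    # h: (2, batch, hidden)
        return self.proj(torch.cat([h[0], h[1]], dim=-1))

# A token's final representation would then concatenate its (fixed) word
# embedding with this (trainable) character embedding.
```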
Experiments and Evaluation
Datasets
We evaluate the DGR model on three large-scale language comprehension datasets: Children's Book Test Named Entity (CBT-NE), Common Noun (CBT-CN), and Who Did What (WDW) Strict and Relaxed. The first two datasets are formed from two subsets of the Children's Book Test (CBT) (Hill et al., 2015). Documents in CBT consist of 20 contiguous sentences from the body of a popular children's book, and queries are formed by replacing a token from the 21st sentence with a placeholder. We experiment on subsets where the replaced token is either a named entity (CBT-NE) or a common noun (CBT-CN). Other subsets of CBT have also been studied previously, but because simple language models have been able to achieve human-level performance on them, we ignore such subsets (Hill et al., 2015).
The Who Did What (WDW) dataset (Onishi et al., 2016) is constructed from the LDC English Gigaword newswire corpus. Each sample in WDW is formed from two independent articles: one article is considered as the passage to be read, and the other article on the same subject is used to form the query. Missing tokens are always person named entities. For this dataset, samples that are easily answered by simple systems are filtered out, which makes the task more challenging. There are two versions of the training set (Strict and Relaxed) that use the same development and test sets. Strict is a small but focused/clean training set, while Relaxed is a larger but noisier training set. We experiment on both of these training sets and report corresponding results for both settings. Statistics of all the aforementioned datasets are summarized in Table 3 of the appendix.
Other datasets for this task include CNN and Daily Mail News (Hermann et al., 2015). Because previous models already achieved human-level performance on these datasets, following Munkhdalai and Yu (2017), we do not include them in our study.
Training Details & Experimental Setup
We use pre-trained 100-D GloVe 6B vectors (Pennington et al., 2014) to initialize our word embeddings while randomly initializing the character embeddings. All hidden states of the BiGRUs have 128 dimensions (o = 100 and r = 128). The weights are learned by minimizing the negative log-loss (Equation 12) on the training data via the Adam optimizer (Kingma and Ba, 2014). The learning rate is 0.0005. To avoid overfitting, we use dropout (Srivastava et al., 2014) with rates of 0.4 and 0.3 for CBT and WDW respectively as regularization, which is applied to all feedforward connections. While we fix the word embeddings, character embeddings are updated during training to learn effective representations for this task. We use a fairly small batch size of 32 to provide more exploration power to the model.

$$L = \sum_i -\log(p(a|D, Q)) \quad (12)$$

Results

Table 1 shows the test accuracy of the models on CBT-NE, CBT-CN, WDW-Strict, and WDW-Relaxed. We divide the previous models into four categories: 1) single models (rows 1-5), 2) ensemble models (rows 6-9), 3) NSE models (rows 10-14), and 4) the FG model (row 15). Table 1 primarily focuses on comparing models that do not rely on any NLP toolkit features (e.g., POS, NER), with the exception of the FG model, which uses additional information about document tokens, including POS, NER, and word frequency information, to produce the embedding of the token. From Table 1, we can see that DGR achieves state-of-the-art results on all aforementioned datasets except for CBT-CN. The targets of CBT-NE, WDW-Strict, and WDW-Relaxed are all named entities, while CBT-CN focuses on common nouns. We believe that our architecture is more suitable for named-entity-targeted comprehension tasks. This phenomenon warrants a closer look in future work. Comparing GA Reader, FG, and DGR (the three models with similar architectures), we see that FG outperforms the GA Reader on the CBT-CN and WDW-Strict datasets, while DGR outperforms both FG and GA Reader on the CBT-NE, WDW-Strict, and WDW-Relaxed datasets with noticeable margins. This suggests that while NLP toolkit features such as POS and NER could help the performance of comprehension models (especially on CBT-CN), capturing richer dependency interaction between document and query appears to play a more important role for comprehension tasks focusing on named entities. Finally, for each of the three datasets on which our model achieves state-of-the-art performance, we conducted the one-sided McNemar's test to verify the statistical significance of the performance improvement over the main competitor (GA reader). The obtained p-values are 0.03, 0.003, and 0.011 for CBT-NE, WDW-Strict, and WDW-Relaxed respectively, indicating that the performance gain by DGR is statistically significant.
Ablation Study
We conducted an ablation study on our model to examine the importance and the effect of the proposed strategies. We investigate all settings on the development sets of the CBT-NE, CBT-CN, WDW-Strict, and WDW-Relaxed datasets. Consider the three key differences of our method from the GA Reader: (a) Document gated query reading - here we compute a document-specific query representation to pass to the next reading layer; (b) Dependent query reading - the query readings are dependent from one layer to the next, as the input to the next reading layer comes from the output of the previous layer; (c) Dependent BiGRU initialization - query BiGRUs of a later layer are initialized with the final hidden states of the previous layer's query BiGRU. Table 2 shows the ablation study results on the development sets of CBT-NE, CBT-CN, WDW-Strict, and WDW-Relaxed for a variety of DGR configurations obtained by removing one or more of the key differences with the GA reader. Note that by removing all three difference elements, configuration 6 reduces to the GA reader.
According to Table 2, DGR achieves the best development accuracy on all datasets, which indicates that, collectively, the three elements lead to improved effectiveness.
Effect of document dependent reading. Configuration 2 removes the document dependent reading and retains the other two elements. Interestingly, this configuration achieved the worst performance among all variations. Without proper guidance from the document side, iteratively reading the query actually leads to worse performance than independent query reading. This suggests that document dependent reading is a critical element that helps achieve better reading of the query.
Effect of dependent query BiGRU initialization. In Configuration 3, we remove the dependent query BiGRU initialization, which results in a performance loss ranging from 0.33% (WDW-Relaxed) to 1.35% (CBT-CN), suggesting that this connection provides important information that helps the reading of the query. Note that simply adding dependent query BiGRU initialization to the GA reader (configuration 4) leads to a slight improvement over the GA reader, which again confirms the usefulness of this channel of information.
Effect of dependent query reading. Unfortunately, we cannot remove only (b) from our model because doing so would cause a dimension mismatch between the document and query representations, preventing the gating operation for computing the document gated query representation. Instead, we compare the GA reader (configuration 6) with configuration 5, which adds dependent query reading to the GA reader. We can see that adding the dependent query reading to the GA reader actually leads to a slight performance loss. Note that further including document gated reading (configuration 3) improves the performance on CBT-NE, but still fails to outperform the GA reader. This points to a potential direction to further improve our model by designing a new mechanism that is capable of document dependent gating without the dependent query reading.
Analysis
In this section, we first investigate the performance of DGR and its variations along two attributes: document length and query length. Then we show a layer-wise visualization of the energy function (Equation 3) for an instance from the CBT-NE dataset.
Length Study
Among the four datasets that we use in this paper, WDW-Relaxed is the largest and noisiest one, which makes it a good candidate for analyzing the trend and behavior of our models. Figure 2 depicts the performance of DGR and its variations against the length of the document (left) and the length of the query (right); a bar on top of each diagram indicates the frequency of samples in each interval, and each data sample is added to the closest interval. Overall, Figure 2 suggests that DGR achieves highly competitive performance across different document and query lengths in comparison to the other variations, including the GA reader. In particular, DGR performs better than or similarly to the GA reader ("DGR - (a) & (b) & (c)") in all categories except when the query length is between 30 and 40, where the GA reader wins with a small margin. Furthermore, we see that "DGR - (a) & (b)" wins over "DGR - (a) & (b) & (c)" in most document length categories. This suggests the positive effect of the connection offered by (c), especially for longer documents.
Attention Study
To gain insights into the influence of the proposed strategies on the internal behavior of the model, we analyze the attention distribution at intermediate layers. Figure 3 shows a visualization of layer-wise normalized aggregated attention weights (energy function, Equation 3) for the candidate set over the query (for more examples, see Section C of the appendix). In each figure, the top plots show the layer-wise attention of DGR and the bottom plots show the layer-wise attention of the GA reader, i.e., "DGR - (a) & (b) & (c)". Moreover, the left and middle plots show the aggregated attention of candidates over the whole query, while the right plot depicts the aggregated attention of the candidates for the placeholder in the query in the final layer. A generic pattern observed in our study is that the GA reader tends to generate more uniform attention distributions, while DGR produces more focused attention. In other words, each layer of DGR tends to focus on different sub-parts and examine different hypotheses, illustrating the significant impact of the proposed strategies on the attention mechanism.
Conclusion
We proposed a novel cloze-style question answering model (DGR) that efficiently models the relationship between the document and the query. Our model achieves state-of-the-art results on several large-scale benchmark datasets such as CBT-NE, WDW-Strict, and WDW-Relaxed. Our extensive analysis and ablation studies confirm our hypothesis that using a more sophisticated method for modeling the interaction between document and query could yield further improvements.
B Rule-based Disambiguation Study
In this section, we present a simple rule-based detection strategy for the CBT-NE dataset, which disambiguates about 30% and 18% of the samples in the CBT-NE development and test sets respectively. For each query q, assume w is the previous/next word of the placeholder that starts with an upper-case character. If such a w exists, we look for w in the document d and collect all words that appear after/before w. After removing all collected words that are not in the candidate list C, the sample is disambiguated and solved if we end up with a single word (the answer). We refer to the set of such samples as the disambiguated set. Table 4 shows the statistics of this rule-based strategy on the rule-based disambiguated test set of CBT-NE. Furthermore, Table 5 shows a data sample in CBT-NE that is correctly disambiguated with our rule-based approach.

Table 4: Statistics and performance of the proposed rule-based strategy on the CBT-NE dataset.

Figure 4 shows the performance of DGR and its variations on the set of data samples in the CBT-NE test set that could be disambiguated with the proposed rule-based strategy. Although we use lower-case words in the training process, all models perform substantially well on disambiguating such samples. This observation demonstrates the effectiveness of the general architecture.
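For illustration, a minimal Python sketch of this heuristic (shown here only for the "word following an upper-case left neighbor" direction; the function name and simplifications are ours) could be:

```python
def rule_based_answer(query, doc, candidates, placeholder="@placeholder"):
    """Simplified sketch of the rule-based strategy: take an upper-case
    neighbor w of the placeholder, collect the words that follow w in the
    document, and keep only candidates; solved if exactly one remains."""
    q = query.split()
    pos = q.index(placeholder)
    # An upper-case neighbor of the placeholder (here: the previous word).
    w = q[pos - 1] if pos > 0 and q[pos - 1][0].isupper() else None
    if w is None:
        return None
    d = doc.split()
    followers = {d[i + 1] for i, tok in enumerate(d[:-1]) if tok == w}
    remaining = followers & set(candidates)
    return remaining.pop() if len(remaining) == 1 else None

print(rule_based_answer("If Jimmy @placeholder had n't ambled",
                        "Instead of answering , Jimmy Skunk began to laugh",
                        ["Skunk", "Toad"]))  # -> Skunk
```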
C Attention Study
In this section, we show visualizations of 8 samples of layer-wise normalized attention (energy function, see Equation 3 in the main paper). Each column in Figures 5-12 represents the same layer in "DGR" and "DGR - (a) & (b) & (c)". Also, each row is allocated to a specific model (top: DGR; bottom: DGR - (a) & (b) & (c)).

Table 5: Example of a disambiguated sample in the CBT-NE dataset with the proposed rule-based approach.
doc: 1 Instead of answering , Jimmy Skunk began to laugh . 2 " Who 's a bug ? " 3 demanded Old Mr. Toad , more crossly than before . 4 " There is n't any bug , Mr. Toad , and I beg your pardon , " replied Jimmy , remembering his politeness . 5 " I just thought there was . 6 You see , I did n't know you were under that piece of bark . 7 I hope you will excuse me , Mr. Toad . 8 Have you seen any fat beetles this morning ? " 9 " No , " said Old Mr. Toad grumpily , and yawned and rubbed his eyes . 10 " Why , " exclaimed Jimmy Skunk , " I believe you have just waked up ! " 11 " What if I have ? " 12 demanded Old Mr. Toad . 13 " Oh , nothing , nothing at all , Mr. Toad , " replied Jimmy Skunk , " only you are the second one I 've met this morning who had just waked up . " 14 " Who was the other ? " 15 asked Old Mr. Toad . 16 " Mr. Blacksnake , " replied Jimmy . 17 " He inquired for you . " 18 Old Mr. Toad turned quite pale . 19 " I -- I think I 'll be moving along , " said he . 20 XVII OLD MR. TOAD 'S MISTAKE If is a very little word to look at , but the biggest word you have ever seen does n't begin to have so much meaning as little " if . "
query: 21 If Jimmy @placeholder had n't ambled down the Crooked Little Path just when he did ; if he had n't been looking for fat beetles ; if he had n't seen that big piece of bark at one side and decided to pull it over ; if it had n't been for all these " ifs , " why Old Mr. Toad would n't have made the mistake he did , and you would n't have had this story .
cands: Blacksnake, Jimmy, Mr., Skunk, Toad, XVII, bug, morning, pardon, second
ans: Skunk
pred: Skunk
Figure 1: A high-level view of the dependent gated reading model (DGR).

Figure 2: Test accuracy of DGR and its variations against the length of the document (A) and the length of the query (B) on the WDW-Relaxed dataset. The bar on top of each figure indicates the number of samples in each interval; darker color in the bars illustrates more samples.

Figure 3: Layer-wise normalized attention visualization of "DGR" (top) and "DGR - (a) & (b) & (c)" (bottom) for a sample from the CBT-NE test set. Darker color illustrates higher attention. Figures only show the aggregated attention of candidates. The gold answer is "sahib".

Figure 4: Performance of DGR and its variations on the rule-based disambiguated test set of CBT-NE.

Figures 5, 6, and 10: Layer-wise normalized attention visualizations of "DGR" (top) and "DGR - (a) & (b) & (c)" (bottom) for samples from the CBT-NE test set. Darker color illustrates higher attention; only the aggregated attention of candidates is shown. The gold answers are "butler", "prince", and "first" respectively.
Table 1: Performance of the proposed model (DGR) on the test set of the CBT-NE, CBT-CN, WDW-Strict, and WDW-Relaxed datasets.

Table 2: Ablation study results. Performance of different configurations of the proposed model on the development set of the CBT-NE, CBT-CN, WDW-Strict, and WDW-Relaxed datasets (development accuracy, %).

Method                    | CBT-NE | CBT-CN | WDW-Strict | WDW-Relaxed
1) DGR                    | 77.90  | 73.80  | 71.78      | 72.26
2) DGR - (a)              | 75.60  | 72.25  | 71.04      | 71.82
3) DGR - (c)              | 77.50  | 72.45  | 71.29      | 71.93
4) DGR - (a) & (b)        | 77.85  | 73.05  | 71.67      | 72.20
5) DGR - (a) & (c)        | 76.00  | 72.85  | 71.37      | 72.13
6) DGR - (a) & (b) & (c)  | 77.65  | 73.00  | 71.61      | 72.16

Table 3: Dataset statistics.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473.

Danqi Chen, Jason Bolton, and Christopher D. Manning. 2016. A thorough examination of the CNN/Daily Mail reading comprehension task. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.

Kyunghyun Cho, Bart van Merrienboer, Çaglar Gülçehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. In Empirical Methods in Natural Language Processing, pages 1724-1734.

Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, and Guoping Hu. 2016. Consensus attention-based neural networks for Chinese reading comprehension. In COLING 2016, 26th International Conference on Computational Linguistics, December 11-16, 2016, Osaka, Japan, pages 1777-1786.

Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, and Guoping Hu. 2017. Attention-over-attention neural networks for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 593-602.

Bhuwan Dhingra, Zhong Zhou, Dylan Fitzpatrick, Michael Muehl, and William W. Cohen. 2016. Tweet2vec: Character-based distributed representations for social media. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 2: Short Papers.

Bhuwan Dhingra, Hanxiao Liu, Zhilin Yang, William W. Cohen, and Ruslan Salakhutdinov. 2017. Gated-attention readers for text comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, July 30 - August 4, Volume 1: Long Papers, pages 1832-1846.

Karl Moritz Hermann, Tomás Kociský, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems 28: Annual Conference on Neural Information Processing Systems 2015, December 7-12, 2015, Montreal, Quebec, Canada, pages 1693-1701.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. 2015. The goldilocks principle: Reading children's books with explicit memory representations. CoRR, abs/1511.02301.

Rudolf Kadlec, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. 2016. Text understanding with the attention sum reader network. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7-12, 2016, Berlin, Germany, Volume 1: Long Papers.

Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Peng Li, Wei Li, Zhengyan He, Xuguang Wang, Ying Cao, Jie Zhou, and Wei Xu. 2016. Dataset and neural recurrent sequence labeling model for open-domain factoid question answering. arXiv preprint arXiv:1607.06275.

Wang Ling, Chris Dyer, Alan W. Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luís Marujo, and Tiago Luís. 2015. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP 2015, Lisbon, Portugal, September 17-21, 2015, pages 1520-1530.

Tsendsuren Munkhdalai and Hong Yu. 2017. Reasoning with memory augmented neural networks for language comprehension. ICLR, abs/1610.06454.

Takeshi Onishi, Hai Wang, Mohit Bansal, Kevin Gimpel, and David A. McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, EMNLP 2016, Austin, Texas, USA, November 1-4, 2016, pages 2230-2235.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.

Alessandro Sordoni, Phillip Bachman, and Yoshua Bengio. 2016. Iterative alternating neural attention for machine reading. CoRR, abs/1606.02245.
| [] |
[
"CONATION: English Command Input/Output System for Computers",
"CONATION: English Command Input/Output System for Computers"
] | [
"Kamlesh Sharma kamlesh0581@gmail.com \nResearch Scholar\n\n",
"DrT V Prasad tvprasad2002@yahoo.com \nProfessor & Head Dept. of Comp. Sc. & Engg\nLingaya's University\nFaridabadIndia\n"
] | [
"Research Scholar\n",
"Professor & Head Dept. of Comp. Sc. & Engg\nLingaya's University\nFaridabadIndia"
] | [] | In this information technology age, a convenient and user friendly interface is required to operate the computer system on very fast rate. In human being, speech being a natural mode of communication has potential to being a fast and convenient mode of interaction with computer. Speech recognition will play an important role in taking technology to them. It is the need of this era to access the information with in seconds. This paper describes the design and development of speaker independent and English command interpreted system for computer. HMM model is used to represent the phoneme like speech commands. Experiments have been done on real world data and system has been trained in normal condition for real world subject. | null | [
"https://arxiv.org/pdf/1305.0625v1.pdf"
] | 16,868,460 | 1305.0625 | 59cebea941591f8bc0a6bb87d1d42d0cca137ba4 |
CONATION: English Command Input/Output System for Computers
Kamlesh Sharma kamlesh0581@gmail.com
Research Scholar
DrT V Prasad tvprasad2002@yahoo.com
Professor & Head Dept. of Comp. Sc. & Engg
Lingaya's University
Faridabad, India
CONATION: English Command Input/Output System for Computers
Speech Recognition, Forward Variable, Occurrence Probability, HMM
In this information technology age, a convenient and user friendly interface is required to operate the computer system on very fast rate. In human being, speech being a natural mode of communication has potential to being a fast and convenient mode of interaction with computer. Speech recognition will play an important role in taking technology to them. It is the need of this era to access the information with in seconds. This paper describes the design and development of speaker independent and English command interpreted system for computer. HMM model is used to represent the phoneme like speech commands. Experiments have been done on real world data and system has been trained in normal condition for real world subject.
INTRODUCTION
Primary human-computer interaction is carried out through the keyboard and the mouse as pointing device for input, with the monitor and printer as output. The keyboard, although a popular medium, is not very convenient as it requires a certain amount of skill for effective usage. A mouse, on the other hand, requires good hand-eye coordination. It is also cumbersome for entering non-trivial amounts of text data and hence requires the use of an additional medium such as the keyboard. Physically challenged people find computers difficult to use. Partially blind people find reading from a monitor difficult.
With the integration of computers and telecommunications, the mode of information access becomes an important issue. The designs of the prevalent human-machine interfaces are more suitable for easier interpretation of information by computers than by human beings. The concept of a machine being able to interact with people in a mode that is natural as well as convenient for human beings is very appealing. Issuing spoken commands to a machine to get useful work done and to get a response is no longer a dream. This has motivated research in speech recognition as well as speech synthesis. Considerable progress has been made, and a few commercial speech products of varying capabilities are available for use in quite a few languages.
An English command input/output system for computers is a fascinating field spanning several areas of computer science and mathematics. Reliable speech recognition is a hard problem, requiring a combination of many techniques; however, modern methods have been able to achieve an impressive degree of accuracy. This project attempts to examine those techniques and to apply them to build a simple system. These are exciting technologies that change the way we interact with computers: we talk to the computer using a set of predefined commands and instructions, and the computer responds in the same way. For example, one says "file open" and the computer opens a new file, or "select the file" or "Edit find", and the computer does all this work according to the words spoken to the system. The intent in developing this project is the ability to command and control the computer through voice. Speech recognition is a technology that allows the computer to identify and understand words spoken by a person using a microphone. [4] [5]
RELATED WORK
Dragon Dictate is the only discrete speech system still available commercially. Over the past few years most systems have used continuous speech, allowing the user to speak in a more natural way. The main continuous speech systems currently available for the PC are Dragon Naturally Speaking and IBM Via Voice. Microsoft has included their own speech recognition system within recent versions of Windows. There is now a version of IBM Via Voice for recent Apple MAC Computer. The aim of speaker recognition is to recognize the speaker while speech recognition is related to the detection of speech. Speech recognition has been a goal of research for more than four decades. [6][7]
COMMANDS RECOGNITION USING HMM
We need to recognise a word using the existing models of words that we have. Sound recorder need to record the sound when it detects the presence of a word. This recorded sound is then passed through feature vector extractor model. The output of the above module is a list of features taken every 10 msec. These features are then passed to the recognition module for recognition. The feature vectors generated by the feature vector generator module act as the list of observation for the recognition module. Probability of generation of the observation given a model, P(O| λ), is calculated for each of the model using find probability function. The word corresponding to the HMM, that gives the probability that is highest and is above the threshold, is considered to be spoken.
Forward Variable
Forward variable was used to find the probability of list of occurrence given a HMM. For a model ¸ with N states, P(O/ λ ) probability of observation, in terms of forward variable α, given the model is define as
The forward variable is used to find the probability of a list of observations given an HMM. For a model $\lambda$ with $N$ states, the probability of the observations given the model is defined in terms of the forward variable $\alpha$ as

$$P(O|\lambda) = \sum_{i=1}^{N} \alpha_T(i)$$

where $\alpha_{t+1}$ is recursively defined as

$$\alpha_{t+1}(j) = \Big[\sum_{i=1}^{N} \alpha_t(i)\, a_{ij}\Big] b_j(O_{t+1})$$

with $\alpha_1(i) = \pi_i\, b_i(O_1)$.
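As an illustration, a minimal NumPy sketch of this forward recursion is given below; the matrix layouts (transitions as a row-stochastic matrix A, per-step observation probabilities as rows of B) are our own assumptions.

```python
import numpy as np

def forward_probability(pi, A, B):
    """P(O|lambda) via the forward variable.
    pi: (N,) initial probabilities; A: (N, N) transition matrix;
    B: (T, N) observation probabilities b_i(O_t) per time step."""
    alpha = pi * B[0]                       # alpha_1(i) = pi_i * b_i(O_1)
    for t in range(1, len(B)):
        alpha = (alpha @ A) * B[t]          # recursion over states
    return alpha.sum()                      # sum_i alpha_T(i)

# Toy usage with a two-state model and three observations.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.2], [0.1, 0.8], [0.9, 0.2]])
print(forward_probability(pi, A, B))
```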
Occurrence Probability
For the forward variable to work we need to find $b_i(O_t)$, the probability of a given observation for a particular state. This value can be calculated with the multivariate normal distribution formula. The probability of observation $O_t$ occurring in state $i$ is given as:
$$b_i(O_t) = \frac{1}{(2\pi)^{D/2}\, |V_i|^{1/2}} \exp\Big(-\frac{1}{2} (O_t - \mu_i)^T V_i^{-1} (O_t - \mu_i)\Big)$$
where $D$ is the dimension of the vector, $\mu_i$ is the mean vector, $V_i$ is the covariance matrix, $|V_i|$ is the determinant of matrix $V_i$, and $V_i^{-1}$ is the inverse of matrix $V_i$.
The mean vector $\mu_i$ is obtained by:

$$\mu_i = \frac{1}{N} \sum_{O_t \in i} O_t$$
The covariance matrix $V_i$ can be obtained by:

$$V_i = \frac{1}{N} \sum_{O_t \in i} (O_t - \mu_i)^T (O_t - \mu_i)$$
The variance is calculated by finding the distance vector between an observation and the mean. The transpose of the distance vector is taken and multiplied with the distance vector. This operation gives a $D \times D$ matrix, where $D$ is the dimension of the system. [2]
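A small NumPy sketch of this emission probability (ours, not the paper's code) follows; note the square root on the determinant in the normalizing constant.

```python
import numpy as np

def gaussian_emission(O_t, mu, V):
    """b_i(O_t) for a state with mean mu (D,) and covariance V (D, D)."""
    D = len(mu)
    diff = O_t - mu
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(V))
    return np.exp(-0.5 * diff @ np.linalg.inv(V) @ diff) / norm

# Toy usage with a 2-dimensional observation and identity covariance.
print(gaussian_emission(np.array([0.1, -0.2]), np.zeros(2), np.eye(2)))
```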
TRAINING THE SYSTEM
To train the system we require three parameters:
• The number of states N that the HMM model should have.
• The size of the feature vector D.
• One or more filenames each containing a training set.
For generating an initial HMM, we take N equally spaced observations (feature vectors) from the first training set. Each one is used to train a separate state. After training, each state has a mean vector of size D and a variance matrix of size D × D containing all zeros. Then, for each of the remaining observations, we find the Euclidean distances between it and the mean vectors of the states. We assign an observation to the closest state for training. The states assigned to consecutive observations are tracked to find the transitional probabilities.
The segmental K-means algorithm tries to modify the initial model so as to maximise $P(O, I|\lambda)$, where $O$ are the training sets used for training and $I$ is a state sequence in the given HMM. The maximised (optimal) path for a training set is denoted by $I^*$. Those observations that were assigned to a different state than the one in which they should be present according to the optimal path are then moved to that state. This improves $P(O, I^*|\lambda)$. The model is then re-estimated with these changed assignments of observations. The above process is repeated iteratively until no more reassignments are needed. The calculation of means, variances, and transitional probabilities is done as shown before. [8] The Viterbi algorithm is useful for identifying the best path that a signal can take in an HMM. Finding the best path is a search problem; Viterbi uses dynamic programming to reduce the search space. For the first observation, the probability of a state being the start state is computed by taking the product of the initial probability and the observation probability for the state. For every other observation, each state tries to find a predecessor such that the probability of the predecessor multiplied by the transition probability from the predecessor to itself is maximised.
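For concreteness, here is a compact NumPy sketch of the Viterbi recursion in log space (variable names and layout are our own):

```python
import numpy as np

def viterbi(pi, A, B):
    """Most likely state path I* given B (T, N) observation probabilities."""
    T, N = B.shape
    delta = np.log(pi) + np.log(B[0])        # best log-score ending in each state
    back = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A)  # predecessor score + transition
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + np.log(B[t])
    path = [int(delta.argmax())]             # backtrack from the best end state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy usage with the matrices from the forward-probability sketch.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.2], [0.1, 0.8], [0.9, 0.2]])
print(viterbi(pi, A, B))  # -> [0, 1, 0]
```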
IMPLEMENTATION OF COMMAND INPUT/OUTPUT SYSTEM
The implementation of the command input/output system is done with CHMM, a continuous HMM library that supports vectors as observations, which has been implemented in the project. The library uses the probability distribution functions mentioned in Section 3. The system has a model for each word that the system can recognize; the list of words can be considered as the language model. While recognizing, the system needs to know where to locate the model for each word and what word the model corresponds to. This information is stored in a flat file called models in a directory called HMMs. The difference in the case of an HMM is that the symbol does not uniquely identify a state: the new state is determined by the symbol and the transition probabilities from the current state to a candidate state. The system is trained before a word is recognized, as mentioned in Section 4. When a sound is given to the system to recognise, it compares each model with the word and finds the model that most closely matches it. The word corresponding to that HMM model is given as the output. [1][2][3]
Training the system for a new word requires the sound files for that word. Features for a sound file can be extracted using the extract-feature command. The Train command can be used to train the system. The command needs information such as the number of states that the model should have, the size of the feature vector, and the files to be used for training. The first argument to the command should be a number indicating the number of states; the second argument is the size of the vector; after this come one or more files containing the training data. The output of the Train command is the trained HMM in XML format, which should be written to a file and put in the hmms directory. An entry needs to be made in the models file present in the same directory. For recognition, sound is recorded using the Raw Recorder program. Features are then extracted to get an MFCC file. The Recognise command takes one or more filenames as arguments and tries to recognise the word for each file. We propose to implement this system using the .NET API technology. The use of APIs limits the user's prerequisite of .NET knowledge required to develop a working project in .NET.
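Conceptually, the recognition step amounts to scoring the observation sequence against every word HMM and taking the best one above a threshold. The following Python sketch reuses forward_probability from the earlier forward-variable sketch; the model layout and default threshold are our own assumptions, not the CHMM library's API.

```python
import numpy as np

def recognise(obs, models, threshold=1e-30):
    """Return the word whose HMM gives the highest P(O|lambda) above a
    rejection threshold. obs: (T, D) feature vectors (e.g., MFCCs);
    models: dict mapping word -> (pi, A, emit), where emit(o, i) returns
    b_i(o), e.g. the Gaussian emission sketched above."""
    best_word, best_p = None, threshold
    for word, (pi, A, emit) in models.items():
        # Evaluate per-state observation probabilities for this word's HMM.
        B = np.array([[emit(o, i) for i in range(len(pi))] for o in obs])
        p = forward_probability(pi, A, B)    # from the forward-variable sketch
        if p > best_p:
            best_word, best_p = word, p
    return best_word
```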
EXPERIMENTAL RESULTS
Training
To train the system we used 20 users. Each user trained the system. Training of the system was done in a calm and peaceful environment so that recognition accuracy would be higher. When a user trains the system, a separate profile is created for that user.
Recognition
Recognition was tried on two kinds of sounds:
• Known user: a user whose voice was used for training.
• Unknown user: a user whose voice was not used for training.
The results of the experiment are shown in Table 1. Table 2 shows the command recognition probability, and its graphical representation is shown in Graph 2 on the last page. The system was also tested with 10 users, with the corresponding recognition percentages shown in a graph; this experiment tried to find out how many users are recognized correctly. The outcomes are exhibited in Table 3 and Graph 3.

Figure 1: Structural Chart.
Graph 3: Recognition Percentage per user.

Observations:
1. Accuracy of the experiments depends on the training time: if the training time of the system is increased, then accuracy automatically increases. Time is directly proportional to accuracy.
2. Accuracy of the system increases when the system is trained in a very peaceful environment and the same environment is provided at the time the system is used.
3. Accuracy of the system increases when good-quality input hardware, such as a microphone, is used.
4. A graphical representation shows how time plays an important role in accuracy. A table containing the number of users and the time each user spent training the system is given in Table 4 and Graph 4.
Table 1: Recognition Result
Type of user  | No. of sounds | Correct recognition | Incorrect recognition
Known user    | 20            | 18                  | 2
Unknown user  | 10            | 6                   | 0
Table 2: Command Recognition Probability per Number of Experiments
Commands                | Number of Testing | Recognition Probability
Activate                | 20                | 100%
Deactivate              | 20                | 100%
Welcome                 | 20                | 100%
Word                    | 20                | 100%
Excel                   | 20                | 100%
Save                    | 20                | 90%
Close                   | 20                | 100%
Notepad                 | 20                | 100%
Menu                    | 20                | 98%
Exit                    | 20                | 100%
Escape                  | 20                | 100%
Left                    | 20                | 100%
Right                   | 20                | 100%
Up                      | 20                | 100%
Down                    | 20                | 100%
EnterTheNumricState     | 20                | 90%
ExitTheNumricState      | 20                | 90%
EnterAlphabeticState    | 20                | 90%
ExitTheAlphabeticState  | 20                | 90%
Plus                    | 20                | 100%
Multiply                | 20                | 98%
Divide                  | 20                | 95%
Minus                   | 20                | 100%
Star                    | 20                | 100%
Shut down               | 20                | 100%
PenDriveFormat          | 20                | 95%
Scanning                | 20                | 98%
Ok                      | 20                | 100%
Enter                   | 20                | 100%
Run                     | 20                | 100%
3Experimental ResultsUser
Recognition Percentage
A
85
B
80
C
90
D
95
E
75
F
50
G
65
H
77
I
83
J
98
CONCLUSIONS AND FUTURE WORK

This paper presents a scheme proposed to control computer systems through the voice of different users. The key factor in designing such a system is the target audience. For example, physically handicapped people should be able to wear a headset and have their hands and eyes free in order to operate the system. Today, this question is considered together with the uses where these technologies will be needed and desired, which would warrant R&D expenditures. There are a number of scenarios where speech recognition is either being delivered, developed for, researched, or seriously discussed, such as computer and video games, precision surgery, domestic applications, and wearable computers. There are several challenges the system needs to deal with in the future. First, the overall robustness of the system must be improved to facilitate implementation in real life applications involving telephone and computer systems. Second, the system must be able to reject irrelevant speech that does not contain valid words or commands. Third, the recognition process must be developed so that commands can be detected in continuous speech. And finally, the voice systems must be able to become viable on low-cost processors. Thus, this will enable the technology to be applied in almost any product. As with many contemporary technologies, such as the Internet, online payment systems and mobile phone functionality, development is at least partially driven by the trio of often perceived evils that are "games, gambling and girls (pornography)". Though these applications are outside the educational sphere, it is important to remember that many ICT
K. Samudravijaya, Hindi Speech Recognition, School of Technology, Tata Institute of Fundamental Research, 2004.

Speech recognition for Hindi language, C-DAC India, available at http://www.cdacmumbai.in/design/corporate_site/override/pdf-doc/speech_recognition_for_hindi.pdf

A Novel Approach of Speaker Verification, School of Technology, Tata Institute of Fundamental Research, 2004.

Kamlesh Sharma, Dr. T. V. Prasad, Swar: The Voice Operated PC, Proc. of National Conference on Soft Computing & Artificial Intelligence, 15-16 January 2008.

Kamlesh Sharma, Dr. T. V. Prasad, Voice Operated Computer Application, Proc. of 2nd National Conference on Emerging Trends in Computer Science & Information Technology, 18 April 2008.

Research & Development Centre C-DAC: Speech Research, Text to Speech, 2001-2010, http://www.kolkatacdac.in/html/txttospeeh/tts.htm

Samudravijaya K, Ahuja R, Bondale N, Jose T, Krishnan S, Poddar P, Rao P V S, Raveendran R, A feature-based hierarchical speech recognition system for Hindi, Sādhanā 23: 313-340, 1998.
| [] |
[
"Cross-Domain Neural Entity Linking Declaration of Authorship Eidesstattliche Erklärung",
"Cross-Domain Neural Entity Linking Declaration of Authorship Eidesstattliche Erklärung"
] | [
"Hassan Soliman \nSaarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n\n",
"DrHeike Adel \nSaarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n\n",
"Mohamed Gad-Elrab \nSaarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n\n",
"Msc Dragan \nSaarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n\n"
] | [
"Saarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n",
"Saarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n",
"Saarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n",
"Saarlandes Bosch Center for Artificial Intelligence\nComputer Science Department\nUniversität\n"
] | [] | Entity Linking is the task of matching a mention to an entity in a given knowledge base (KB). It contributes to annotating a massive amount of documents existing on the Web to harness new facts about their matched entities. However, existing Entity Linking systems focus on developing models that are typically domain-dependent and robust only to a particular knowledge base on which they have been trained. The performance is not as adequate when being evaluated on documents and knowledge bases from different domains. Approaches based on pre-trained language models, such as Wu et al. (2020) [1], attempt to solve the problem using a zero-shot setup, illustrating some potential when evaluated on a general-domain KB. Nevertheless, the performance is not equivalent when evaluated on a domain-specific KB. To allow for more accurate Entity Linking across different domains, we propose our framework: Cross-Domain Neural Entity Linking (CDNEL). Our objective is to have a single system that enables simultaneous linking to both the general-domain KB and the domain-specific KB. CDNEL works by learning a joint representation space for these knowledge bases from different domains. It is evaluated using the external Entity Linking dataset (Zeshel) constructed by Logeswaran et al. (2019) [2] and the Reddit dataset collected by Botzer et al. (2021) [3], to compare our proposed method with the state-of-the-art results. The proposed framework uses different types of datasets for fine-tuning, resulting in different model variants of CDNEL. When evaluated on four domains included in the Zeshel dataset, these variants achieve an average precision gain of 9%. v This work would not have been possible without the support of many people. Many thanks to my advisors, Dr. | 10.48550/arxiv.2210.15616 | [
"https://export.arxiv.org/pdf/2210.15616v1.pdf"
] | 253,157,904 | 2210.15616 | 717635db9565e057cb6af75b609326848948db6f |
Cross-Domain Neural Entity Linking

Master's Thesis in Computer Science

Hassan Soliman
Universität des Saarlandes, Computer Science Department
Bosch Center for Artificial Intelligence

Supervisor: Prof. Dr. Dietrich Klakow
Advisors: Dr. Heike Adel, Dr. Mohamed Gad-Elrab, MSc. Dragan Milchevski
Reviewers: Prof. Dr. Dietrich Klakow, Dr. Volha Petukhova

Date: February 1, 2022
Abstract

Entity Linking is the task of matching a mention to an entity in a given knowledge base (KB). It contributes to annotating the massive number of documents on the Web in order to harness new facts about the matched entities. However, existing Entity Linking systems focus on developing models that are typically domain-dependent and robust only for the particular knowledge base on which they have been trained; their performance is not as adequate when evaluated on documents and knowledge bases from different domains. Approaches based on pre-trained language models, such as Wu et al. (2020) [1], attempt to solve the problem using a zero-shot setup, showing some potential when evaluated on a general-domain KB. Nevertheless, the performance is not equivalent when evaluated on a domain-specific KB. To allow for more accurate Entity Linking across different domains, we propose our framework: Cross-Domain Neural Entity Linking (CDNEL). Our objective is to have a single system that enables simultaneous linking to both a general-domain KB and a domain-specific KB. CDNEL works by learning a joint representation space for these knowledge bases from different domains. It is evaluated using the external Entity Linking dataset (Zeshel) constructed by Logeswaran et al. (2019) [2] and the Reddit dataset collected by Botzer et al. (2021) [3], in order to compare our proposed method with state-of-the-art results. The proposed framework uses different types of datasets for fine-tuning, resulting in different model variants of CDNEL. When evaluated on four domains included in the Zeshel dataset, these variants achieve an average precision gain of 9%.

Acknowledgements

This work would not have been possible without the support of many people. Many thanks to my advisors, Dr. Heike Adel, Dr. Mohamed Gad-Elrab, and MSc. Dragan Milchevski.
Motivation
Entity Linking can be defined as the process of matching a mention, e.g., "Paris", in a textual context, e.g., "Paris is a famous American singer and actress, who was born in New York City", with a record (i.e., entity) in a knowledge base (KB), e.g., "Paris Hilton", that fits the surrounding context [4].
Neural Entity Linking models often consist of two main components: 1) a Candidate Generation module, which acts as a staging step that generates candidate entities similar to a given mention in context, and 2) a Candidate Ranking module, which acts as a ranking system that orders these candidates according to their similarity to the input mention.
Neural Entity Linking systems need to be trained on documents annotated with entities from a given KB. In most cases, these systems are trained on a single general-domain KB such as Wikipedia, and it is assumed that they can generalize to entities from a domain-specific KB, as explained by Wu et al. (2020) [1].
The problem is that mentions in domain-specific documents cannot always be linked to entities from a general-domain KB. More specifically, the general-domain KB is insufficient for understanding and analyzing documents with domain-specific entities, because such documents may contain entities that do not yet exist in the general-domain KB. This problem makes it necessary to train Entity Linking models on domain-specific KBs. In the same regard, it is not sufficient to use the domain-specific KB alone.
Therefore, it is essential to develop approaches that can simultaneously link to entities from multiple KBs of different domains.

Limitations of the State of the Art

Since 2016, various neural systems have addressed the problem of Entity Linking, driven by the Deep Learning revolution in Natural Language Processing (NLP). The task has recently gained popularity due to the extensive capabilities of neural models in information extraction and semantic text understanding, as noted by Sevgili et al. (2021) [4]. These approaches work well on general-domain documents such as general news, as shown by Logeswaran et al. (2019) [2], Wu et al. (2020) [1], and Vyas and Ballesteros (2020) [5]. However, their performance in terms of end-to-end accuracy is not as adequate when evaluated on domain-specific documents (unseen entities from a new KB). It is a challenge for them to be enriched with domain-specific knowledge.
Other previous entity linking systems such as AIDA by Hoffart et al. (2011) [6] and Wikify by Mihalcea and Csomai (2007) [7] could not handle out-of-KB entities. The lack of domain-specific entities in their KB caused these models to perform poorly, as noted by Hamaguchi et al. (2018) [8].
Goals of the Thesis

The main goal of this work is to provide a domain-agnostic Entity Linking system (which can easily be extended to new domains by adding a new KB) using neural models. It leverages the power of fine-tuning pre-trained language models and context-aware embeddings for all relevant text-processing parts. Our goal is to develop an approach trained to link to two (or more) KBs simultaneously. The proposed method should overcome the following challenges:
• Merging of KBs: We aim at learning a new representation space that can represent entities from two or more KBs, allowing them to be directly compared to a given input.
• Alignment of Entities: When combining two or more knowledge bases, some entities may be identical (overlapping). These overlapping entities should therefore be given similar representations.
• Overfitting to domain-specific KB: Fine-tuned language models are likely to overfit to the domain-specific KB, especially if these new domains have a small dataset of annotated mentions.
Research Questions
Building systems that link to multiple knowledge bases presents many scientific and technical challenges. This thesis aims to address and answer the following research questions:
• RQ1: How can we use pre-trained language models to merge entities from multiple KBs?
• RQ2: How can we extract and exploit overlapping entities in the alignment of entities?
• RQ3: To what extent does fine-tuning on domain-specific datasets affect the results of simultaneous Entity Linking?
• RQ4: Can we use data augmentation to reduce the overfitting of the fine-tuned models to the domain-specific KB?
Structure of the Thesis
The Master's thesis is organized as follows: Chapter 2 briefly reviews the history of Entity Linking and shows how it has evolved in recent years, with examples of modern techniques. In Chapter 3, we take a closer look at neural approaches that combine multiple KBs. In Chapter 4, we describe the datasets used in the fine-tuning process and their statistics. We also introduce a dataset that we plan to use for data augmentation.
In Chapter 5, we propose different approaches to extend the base model with a domain-specific KB. In Chapter 6, we present a description of the experimental setup as well as the results of the conducted experiments, together with a detailed analysis of the outcome. The experiments aim to evaluate the extensibility of the base model to a new domain-specific KB through fine-tuning. In Chapter 7, we summarize the approaches and contributions proposed in this Master's thesis and discuss possible areas for future work.
Chapter 2 Background
This chapter presents the history of Entity Linking and its effective approaches. In addition, Neural Entity Linking is explained with an example of a state-of-the-art method.
Entity Linking
The Entity Linking task has been addressed by numerous approaches before Deep Learning took on the problem. Graph-based techniques were predominant, such as AIDA by Hoffart et al. (2011) [6], which depends on a weighted graph of mentions and candidate entities to obtain the best mention-entity mapping, as shown in Figure 2.1. DBpedia Spotlight is another work, by Mendes et al. (2011) [9], that depends on the prior probabilities between mentions and entities, usually represented by the number of times a mention is used to link an entity. WAT by Piccinno et al. (2014) [10] uses voting algorithms in addition to building the mention-entity graph. Babelfy by Moro et al. (2014) [11] uses a unified graph-based approach combining entity linking and word sense disambiguation. PBOH by Ganea et al. (2016) [12] jointly and collectively links input mentions across an entire document using an effective graphical model. Various publicly available datasets and annotated corpora are typically used to evaluate Entity Linking systems, such as AIDA by Hoffart et al. (2011) [6], TAC KBP 2010 by Ji et al. (2010) [13], KORE-50 by Hoffart et al. (2011) [14], ACE2004 by Ratinov et al. (2011) [15], and MSNBC by Cucerzan (2010) [13].
Neural Entity Linking
Generic Example of Neural Entity Linking
A general example of an NEL system is shown in Figure 2.2. It assumes that the input text contains mentions that have already been recognized. The role of this system is to link the recognized mentions to the correct and relevant entities from a given knowledge base, using the power of neural networks. Figure 2.3 shows the general components of the complete NEL system. The input text example "Paris is a famous American singer and actress, who was born in New York City" is encoded in a vector space, along with each entity in a given knowledge base, e.g., Paris Hilton. This vector space is used in the Candidate Generation module: the similarity between the input and each of the entities is computed, and based on this similarity, relevant entities are retrieved and serve as candidate entities further explored by the following module. The main objective of the Candidate Ranking module is to learn the joint context between the input mention and each of the candidate entities. It assigns a score indicating how similar the input is to each candidate entity, and based on these scores, the entities are ranked. The highest score is given to the correct (gold) entity "Paris Hilton".
Sequence Modeling based on Transformers
An essential aspect of NEL is vector representation and sequence modeling. Although this vectorization can be achieved in several ways, we focus on transformer-based language models, since they provide rich representations in the form of embeddings.
Practical approaches rely on the Transformer architecture by Vaswani et al. (2017) [21] due to its power in sequence modeling using the Multi-head Attention Mechanism, as shown in Figure 2.4. Accordingly, large-scale language models such as BERT by Devlin et al. (2019) [19] use the encoder architecture of the Transformer model to implement a robust semantic text understanding system that depends on the provided context. The BERT architecture provides a transfer learning mechanism by pre-training on massive general-domain corpora using masked language modeling and next sentence prediction objectives, as shown in Figure 2.5. This system can understand and encode text into a rich representation space; semantic knowledge about the tokens in the input text is incorporated into these encodings (embeddings). This encoder-based architecture can be used to obtain the similarity between a given input mention and the various entities in the knowledge base in order to generate candidate entities, as shown in Figure 2.6.
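As a brief illustration of this encoding step, the sketch below obtains a fixed-size sentence embedding from a BERT encoder via the HuggingFace transformers library. The checkpoint name and the choice of the [CLS] output are illustrative assumptions, not the exact setup of the systems discussed here.

```python
# Minimal sketch: encode a sentence with BERT and take the [CLS] embedding.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")

def cls_embedding(text: str) -> torch.Tensor:
    """Return the [CLS] token embedding as a fixed-size sentence vector."""
    inputs = tokenizer(text, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        outputs = encoder(**inputs)
    # last_hidden_state: (batch, seq_len, hidden); the [CLS] token is at position 0.
    return outputs.last_hidden_state[:, 0, :].squeeze(0)

v = cls_embedding("Paris is a famous American singer and actress.")
print(v.shape)  # torch.Size([768])
```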
Bi-Encoder and Cross-Encoder Architecture
A Bi-Encoder is based on two model parts, e.g., two BERT models, each of which can encode sentences in textual format. One model part encodes an input mention in its textual context; the other encodes entities in a KB with a textual description. Semantic knowledge about a particular sentence (input mention or entity description) is incorporated into these encodings (embeddings). This architecture proves robust in semantic search, as it can recognize synonyms and spelling variations, as stated by Reimers and Gurevych (2019) [22]. As shown in Figure 2.7, the similarity between any two sentences can be measured between their encodings using a similarity metric such as Dot Product or Cosine Similarity.
On the other hand, a Cross-Encoder is based on a single model part, e.g., one BERT model. It takes two sentences simultaneously as input (input mention and entity description) and produces a single score indicating how similar they are. It is more powerful than the Bi-Encoder because it performs attention across the two concatenated sentences; however, it takes more time to train. It gives an output score between 0 and 1 representing the similarity of the input mention to each of the generated candidate entities. The Cross-Encoder is used in the Candidate Ranking module by concatenating the input mention with the description of each candidate entity. It generates an embedding for the concatenated sentences, and a classification layer on top takes this embedding as input and outputs a score for each candidate entity. Based on these scores, a final ranking of the similarity of these entities to the given input mention is performed, with the highest score given to the correct (gold) entity.
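To make the contrast concrete, the following is a minimal sketch of both architectures using the sentence-transformers library; the checkpoints, the example mention, and the entity descriptions are illustrative assumptions rather than the models used in this thesis.

```python
# Minimal sketch: Bi-Encoder vs. Cross-Encoder scoring of mention-entity pairs.
from sentence_transformers import CrossEncoder, SentenceTransformer, util

mention = "Paris is a famous American singer and actress."
entity_descriptions = [
    "Paris Hilton is an American media personality and singer.",
    "Paris is the capital and largest city of France.",
]

# Bi-Encoder: encode mention and entities independently, then compare.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
mention_emb = bi_encoder.encode(mention, convert_to_tensor=True)
entity_embs = bi_encoder.encode(entity_descriptions, convert_to_tensor=True)
bi_scores = util.cos_sim(mention_emb, entity_embs)  # shape (1, num_entities)

# Cross-Encoder: score each (mention, description) pair jointly.
cross_encoder = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")  # assumed
cross_scores = cross_encoder.predict([(mention, d) for d in entity_descriptions])

print(bi_scores, cross_scores)
```

The design trade-off follows directly: entity embeddings can be pre-computed once for the Bi-Encoder, which makes it practical for retrieval over large KBs, while the Cross-Encoder must re-run for every mention-entity pair.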
Chapter 3
Related Work
This chapter looks at the neural entity linking approaches that inspired our work, focusing on neural methods in the literature that address generalization to different domains within the same system.
It is important to point out that for each approach, we state its limitations and constraints that do not restrict our methodology.
Entity Linking with Neural Attention
One of the early works that used the attention mechanism is Deep Joint Entity Disambiguation with Local Neural Attention by Ganea and Hofmann (2017) [23], shown in Figure 3.1; it inspired the other models in this section. The approach takes as input the context word embeddings for the mention, in addition to the candidate entity embeddings and the mention-entity priors. The output is a score for each candidate entity and context pair. At the time, they obtained state-of-the-art results with moderate computational cost.
A limitation of this work is that they use information from mention-entity priors in their final computed scores.
Other work comes from Onoe and Durrett (2020) [24], who use information about fine-grained entity types when linking mentions. Their approach models fine-grained entity properties, as shown in Figure 3.2. The entity typing model computes a binary probability representing the membership of an input mention in each type, and the final decision is based on summing these probabilities for each candidate entity. Thanks to this technique, they state that their Entity Linking model can be effectively generalized to other domains. However, their work is limited because they need to include entity type information in the Candidate Generation step. The work by Peters et al. (2019) [25] also falls into this category and is discussed in more detail in the next section.
Large-scale Language Models Pre-training for Entity Linking

This section looks in depth at approaches that attempt to incorporate large-scale language models into the task of Entity Linking. One example is the work by Peters et al. (2019) [25]. They propose a general method for embedding multiple knowledge bases into the BERT model to improve its contextual word representation; the result is the knowledge-enhanced BERT (KnowBERT), as shown in Figure 3.3. For an input mention, its representation is modeled by pooling over the word tokens in its span. In addition, a self-attention block is deployed over all mention representations so that it can encode the interactions between several entities in a sentence.
It is constrained by using a pre-built index of entities (alias table) to match the mention span for the Candidate Generation step.
Another method based on a fine-tuned BERT architecture is presented by Humeau et al. (2019) [26]. It generates candidates using a Poly-Encoder that learns global self-attention features. As shown in Figure 3.4, they conducted additional experiments using the Bi-Encoder and Cross-Encoder architectures. The Poly-Encoder not only allows caching of candidate representations, like the Bi-Encoder, but also adds an attention mechanism between global input features and a given candidate entity to allow richer interactions, as in the Cross-Encoder. The base model we use, by Wu et al. (2020) [1] and explained in detail in Section 3.4, shows that the Bi-Encoder can be a solid model for retrieval, and that using the Bi-Encoder in combination with the Cross-Encoder is sufficient for the Entity Linking task.
Another work that fits into this category is the work by Logeswaran et al. (2019) [2]. It is presented in more detail in Section 3.4.
Linking to KBs with Arbitrary Schemas
Various methods have been studied in the literature based on learning a single representation space for the available types of entity information. In this context, Gupta et al. (2017) [27] proposed a neural system for Entity Linking. The system learns a dense and unified representation for each entity by using multiple sources of information, such as its description, the contexts around its mentions, and its fine-grained types, as shown in Figure 3.5. Encoders for the different sources of information about an entity are introduced and optimized so that the final embedding of the entity is similar to all encoded representations. However, this work is limited by the quality of the LSTM cells: they used bidirectional LSTMs in their encoders, which are less effective at sequence modeling than the BERT model. Another work, by Gillick et al. (2019) [28], is one of the early methods to use dense embeddings and Cosine Similarity to compare entity representations in the Candidate Generation step. It included an encoder for fine-grained entity types (Wikipedia category labels), as shown in Figure 3.6. The method uses average unigram and bigram embeddings followed by dense layers to obtain representations of mentions and descriptions. They state that their model generalizes to a new dataset derived from Wikinews. Their work is limited by including information from category labels, and they do not use an attention mechanism between mentions and entities.
Another work, by Vyas and Ballesteros (2020) [5], builds on the work by Wu et al. (2020) [1] and Logeswaran et al. (2019) [2], but generalizes these models to handle arbitrary KBs containing entities represented by an arbitrary set of attribute-value pairs. The generalization is based on converting entities from an arbitrary KB with several attribute-value pairs (relational information) into a string representation that the previously mentioned models can use. They find that these models, based on large-scale pre-trained language models such as BERT, can generalize to linking mentions to unseen entities, including those for which no textual descriptions are available. One way to convert attribute-value pairs into a string is to add special tokens, called attribute separators, to these string representations, as shown in Figure 3.7. These tokens represent frequently occurring attributes in the KB. They also generate more flexible string representations by shuffling entity attributes before converting them to strings, and they randomly remove attribute separators to improve generalization to unseen attributes.
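As a concrete illustration, here is a minimal sketch of such a conversion; the attribute names and the [ATTRIBUTE] separator format are hypothetical, chosen only to mirror the idea described above.

```python
# Minimal sketch: flatten attribute-value pairs into a string representation.
import random

def entity_to_string(attributes: dict, drop_prob: float = 0.1,
                     shuffle: bool = True, seed: int = 0) -> str:
    rng = random.Random(seed)
    items = list(attributes.items())
    if shuffle:
        rng.shuffle(items)  # shuffling yields more flexible representations
    parts = []
    for name, value in items:
        if rng.random() < drop_prob:
            parts.append(str(value))  # randomly drop the attribute separator
        else:
            parts.append(f"[{name.upper()}] {value}")
    return " ".join(parts)

print(entity_to_string({"title": "Paris Hilton", "occupation": "singer",
                        "birthplace": "New York City"}))
```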
However, the results obtained for unseen entities of a KB with attribute-value pairs are not of the same quality as those obtained when the unseen entities have textual descriptions instead. In addition, their work does not learn to encode entities with textual descriptions from multiple domains in the same representation space.

Zero-Shot Entity Linking

Based on the survey by Sevgili et al. (2021) [4], there is a surge of models that address the problem of extending knowledge bases to new domains in a zero-shot fashion. Zero-shot learning in the context of Entity Linking means that entities from a new KB do not appear in the training set. The basic idea is to train the system on a domain with rich labeled data (a general-domain KB). At inference time, the entities of the domain-specific KB are encoded without re-training the model. In this regard, the work by Logeswaran et al. (2019) [2] presents an effective domain-adaptive pre-training (DAP) strategy to solve the domain shift problem associated with linking to unseen entities of a new domain. Their experiments show that their model is robust when linking mentions to entities from specialized domains such as legal cases or company project descriptions. The authors propose a new zero-shot Entity Linking task and construct a new dataset, as shown in Figure 3.8. They construct the dataset using multiple subdomains of Fandom 1 and automatically extract labeled mentions using hyperlinks. In domain-adaptive pre-training, the model undergoes unsupervised pre-training on the target-domain data only (U_tgt), followed by a final fine-tuning phase on the labeled source-domain data (F_src). The intuition behind this is that the representation capacity is limited, so models should prioritize the quality of representation of the target domain. Any series of pre-training stages can be chained, for example

U_{WB} \rightarrow U_{src+tgt} \rightarrow U_{tgt} \rightarrow F_{src},

which means that the model is first pre-trained on the open corpus (a combination of the Wikipedia and BookCorpus datasets used in BERT), then pre-trained on the combined source (src) and target (tgt) domains, then pre-trained on the target-domain data only, and finally fine-tuned on the labeled source-domain data. Their methodology is limited by the use of BM25, a variation of TF-IDF, in their Candidate Generation step.

Wu et al. (2020) [1] propose another approach (BLINK) based on the previous work by Logeswaran et al. (2019) [2], but without domain-adaptive pre-training. It is a completely domain-independent approach trained only on labeled data from a general domain, e.g., Wikipedia. They show that models based on BERT reach new peaks in large-scale Entity Linking when used in a zero-shot setup. They present a two-stage zero-shot algorithm for Entity Linking based on fine-tuning the BERT architecture, where a short textual description defines each entity. The model is trained on annotated data containing mentions corresponding to entities from the general-domain KB (Wikipedia). They use the Bi-Encoder and Cross-Encoder architecture, as shown in Figure 3.9. In the first stage, candidates are generated in a dense space defined by a Bi-Encoder (two encoders) that embeds the mention context and the entity descriptions independently. Each candidate entity is then re-ranked with a Cross-Encoder that takes as input a concatenation of the textual context of a mention and the textual description of each candidate entity.
To extend their model to include entities from a new domain-specific KB, they do the following:

• They assume a single vector space trained on a general-domain KB, which is used directly on a domain-specific KB without re-training.
• At inference time, they encode only the new entities of the domain-specific KB in this single vector space.
• Although the Bi-Encoder has not seen or been trained on these entities before, it models them in the vector space.
• They rely on the performance of the Bi-Encoder to include the domain-specific entities in the single vector space of the general domain without additional training of the model. However, they do not incorporate the information from the overlapping entities between the general-domain KB (Wikipedia) and the domain-specific KB to align them in the single vector space.

Our methodology is based on the work by Wu et al. (2020) [1]. We build on their model with several variants, as described later in Chapter 5.
Chapter 4 Datasets
The proposed approach assumes a general-domain dataset, with mentions extracted from Wikipedia articles and annotated on Wikipedia entities. We aim to search for domain-specific datasets, including mentions extracted from domain-specific articles and annotated on domain-specific entities.
This chapter shows the datasets used in the fine-tuning process and their different statistics. We also introduce a dataset that we propose to use for data augmentation.
Dataset Gathering
This section presents several datasets that serve as domain-specific datasets, which will be used to fine-tune and evaluate the proposed method.
Zeshel

The zero-shot entity linking dataset (Zeshel) was constructed by Logeswaran et al. (2019) [2] from Fandom 1. It contains multiple worlds (domains). For each domain, there are entities with textual descriptions and labeled mentions extracted from articles about that domain. The number of entities and the division of mentions into training, validation, and evaluation sets are shown in Table 4.1.

Overlapping Entities
The overlapping entities between each domain and the general-domain KB (Wikipedia) are needed to align them explicitly through similar representations. Since Zeshel does not contain overlapping entities, we propose, as a first step, to extract pairs of entities with the same title from both domains. We will refer to them as "fuzzy overlapping entities". This title-based extraction is done because it is impossible to compare all available pairs manually: the general-domain KB (Wikipedia) has 5,903,538 entities, and each of the domain-specific KBs has on average around 20,000 entities, so the number of possible pairwise comparisons is enormous. As a second step, to check whether the extracted fuzzy overlapping entities are semantically similar, we propose to use a Sentence-Transformer model (Roberta-large) by Reimers and Gurevych (2019) [22], which is the best-performing model according to the Semantic Textual Similarity (STS) benchmark by Cer et al. (2017) [29]. For each entity pair among the fuzzy overlapping entities, the textual description of each entity in the pair is given to the Sentence-Transformer model, which provides an output vector representation. The two representations of the pair are compared using Cosine Similarity. This comparison is performed for all pairs, and the average similarity scores for each domain can be viewed in Table 4.2.

A threshold is set and tested by sampling pairs whose similarity value is above this threshold. These pairs are manually checked to verify their semantic similarity. The number of these filtered similar overlapping entities can be found in Table 4.3.
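A minimal sketch of this filtering step, assuming the sentence-transformers library; the checkpoint name and the threshold value are illustrative, not the exact configuration used here.

```python
# Minimal sketch: keep fuzzy overlapping pairs whose description embeddings
# are close in cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-roberta-large-v1")  # assumed checkpoint
THRESHOLD = 0.8  # assumed value, tuned by manually checking sampled pairs

def filter_overlapping(pairs):
    """pairs: list of (wikipedia_description, domain_kb_description)."""
    kept = []
    for desc_a, desc_b in pairs:
        emb_a, emb_b = model.encode([desc_a, desc_b], convert_to_tensor=True)
        score = util.cos_sim(emb_a, emb_b).item()
        if score >= THRESHOLD:
            kept.append((desc_a, desc_b, score))
    return kept
```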
Cybersecurity

The dataset for this domain consists of ten Cybersecurity documents annotated by a modified version of AIDA by Hoffart et al. (2011) [6], extended with the Cybersecurity domain. From these documents, 1755 mentions are extracted. They are annotated on both Cybersecurity (a domain-specific KB based on Mitre Attack 2) and Wikipedia (the general-domain KB). Mitre Attack is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. To prepare a training dataset for fine-tuning, we divided the documents into nine documents for fine-tuning and one for evaluation. To obtain silver ground truth annotations for the mentions in the nine documents, we annotated them on Cybersecurity (domain-specific KB) and Wikipedia (general-domain KB) using the BLINK model by Wu et al. (2020) [1]. We then extracted the mentions on which BLINK and the modified AIDA agree in their annotations. There are 710 mentions agreed on by both systems, which we use as silver ground truth annotations for fine-tuning. The number of entities in the Cybersecurity domain and the breakdown of mentions into training, validation, and evaluation sets are shown in Table 4.4. We created a gold ground-truth standard for the left-out document through manual annotation; the resulting evaluation set includes 145 mentions. Manual annotation is performed using the Inception annotation interface by Klie et al. (2018) [30], as shown in Figure 4.1.

Overlapping Entities

To extract the overlapping entities between Cybersecurity (domain-specific KB) and Wikipedia (general-domain KB), we proceed as described above for the Zeshel domains.
Dataset Augmentation
In this section, we present the use of data augmentation to provide additional mentions for fine-tuning to reduce overfitting to the domain-specific KBs from the Zeshel dataset.
Reddit

The Reddit mentions dataset was constructed by Botzer et al. (2021) [3]. It contains mentions extracted from Reddit posts and comments by Reddit users and annotated on Wikipedia entities. The breakdown of these mentions into training, validation, and evaluation sets is shown in Table 4.5. We use Reddit mentions because they are annotated on the general-domain KB (Wikipedia) and do not contain very domain-specific terms and mentions. These general-domain mentions are used to provide more data for domain-specific datasets. This augmentation helps if these domain-specific datasets have only a small set of annotated mentions; thus, it can serve as a method to increase the size of the dataset. The augmented data for each domain-specific dataset is used to fine-tune the proposed approach. Moreover, these mentions can reduce the overfitting of the fine-tuned models to the domain-specific KB, since they are annotated on the general-domain KB (Wikipedia). The Reddit mentions evaluation set is used to test the overfitting of these models and to ensure that they can properly link mentions, whether they should be linked to the domain-specific KB or the general-domain KB.

Chapter 5

This chapter explains the inner workings of the base model and presents the various proposed approaches to extend it to include entities of domain-specific KBs. A Neural Entity Linking model should serve as the base model and should be able to handle general-domain documents. We select BLINK by Wu et al. (2020) [1] as the base model for our experiments. It achieved state-of-the-art results on general-domain datasets such as TAC KBP 2010 by Ji et al. (2010) [13]. Wu et al. (2020) [1] state that, to allow BLINK to add a domain-specific KB, its entities must have descriptions in a textual format. The model encodes these new entities to obtain their embeddings. For a new input mention with textual context, its embedding is compared to all embeddings of entities from the domain-specific and general-domain KBs using Cosine Similarity.

Our framework (CDNEL) builds on BLINK to improve its results, specifically when linking to entities from domain-specific KBs. The key idea is to fine-tune BLINK using the various modifications proposed in this chapter. The goal of these modifications is to better represent entities from multiple KBs to help in the downstream task of Entity Linking.
BLINK
The proposed methodology assumes a base model (BLINK). Initially, it is trained on annotated data from Wikipedia containing mentions corresponding to entities from the general-domain KB (Wikipedia).
Candidate Generation Phase
This section explains BLINK's dense space-based retrieval method for generating candidate entities.
As shown in Figure 5.1, the model architecture consists of two main components: the context encoder (top left), which encodes the input mention with its context, and the candidate encoder (bottom right), which encodes the entities with their textual descriptions. Both encoders consist of the same components. The first component, BERT, takes as input the mention with context in the context encoder and the entity textual description in the candidate encoder. The output of the [CLS] token from BERT in each encoder serves as a fixed-size embedding: v_m for the input mention and v_r for the entity. The model learns to assign the highest score to the correct (gold) candidate entity to which the input mention should be linked and to reduce the scores of other negative candidate entities; that is, it brings v_m and v_r together in the vector space by maximizing their similarity. The BERT weights of the context and candidate encoders are \theta_m and \theta_r, respectively. Both encoders are trained to learn the parameters \theta_m and \theta_r that maximize the Dot Product between v_m and v_r and minimize it for negative samples. The loss function used to achieve this is shown in Equation 5.1, where C_e is a randomly sampled batch of negative entities, v_m is the input mention representation, v_r the gold entity description representation, and v_e a negative entity description representation:

L(v_m, v_r)_{\theta_m, \theta_r} = -v_m^\top v_r + \log \sum_{e \in C_e} \exp(v_m^\top v_e) \qquad (5.1)

The model is trained with mentions from Wikipedia articles annotated on Wikipedia entities extracted in May 2019, and serves as the base model that can be extended to domain-specific KBs. After training is completed, embeddings for all Wikipedia entities are computed and stored in an index table for inference. During inference, the input mention text is encoded using the context encoder of the Bi-Encoder, and its Cosine Similarity with all embeddings in the stored index table is calculated. The most similar k (hyperparameter) entities are extracted and serve as input to the Cross-Encoder in the next phase.

Candidate Ranking Phase

Figure 5.2 simplifies the Candidate Generation components and adds the new components of the Candidate Ranking module on the right. The generated candidate entities are formatted into a string representation that the Cross-Encoder can process. These string representations consist of the entity title concatenated with its textual description, separated by the special token [ENT], as shown on the right of Figure 5.2. Each of these string representations is concatenated with the context of the input mention and serves as input to the Cross-Encoder, so that it can learn the joint context between the textual context of the input mention and the textual description of each candidate entity. Based on the output, it assigns a score indicating how similar they are. The loss function of the Cross-Encoder is shown in Equation 5.2, where v_{m,e} is the embedding generated by BERT for the concatenated text of the input mention and the gold candidate entity description, v_{m,k} is the corresponding embedding for a negative candidate entity, W are the weights of the classification layer (a fully connected neural network) on top of the Cross-Encoder, and C_e is the set of negative candidate entities:

L(v_{m,e})_W = -v_{m,e} W + \log \sum_{k \in C_e} \exp(v_{m,k} W) \qquad (5.2)
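Before moving on to CDNEL, the following is a minimal sketch of the dense retrieval step of the Candidate Generation phase described above; the random vectors stand in for encoder outputs, and the sizes are illustrative assumptions.

```python
# Minimal sketch: top-k candidate retrieval over a pre-computed entity index.
import numpy as np

rng = np.random.default_rng(0)
num_entities, dim, k = 10_000, 768, 10

# Stand-in for candidate-encoder outputs over all KB entities (index table).
entity_index = rng.standard_normal((num_entities, dim)).astype(np.float32)
entity_index /= np.linalg.norm(entity_index, axis=1, keepdims=True)

# Stand-in for the context-encoder output of one input mention.
mention_emb = rng.standard_normal(dim).astype(np.float32)
mention_emb /= np.linalg.norm(mention_emb)

# With unit-normalized vectors, the dot product equals cosine similarity.
scores = entity_index @ mention_emb
top_k = np.argsort(-scores)[:k]  # candidate entity ids for the Cross-Encoder
print(top_k, scores[top_k])
```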
CDNEL
In this section, we explain our proposed modifications and improvements to BLINK. In addition, we discuss in detail several variants (configurations) of our framework (CDNEL) and how they build on BLINK to include entities from a domain-specific KB with better representations.
These modifications are only applied to the Candidate Generation phase. For the Candidate Ranking phase, CDNEL uses the same architecture and components as BLINK.
Proposed Modifications
Fine-tuning the Context and Candidate Encoders on Mentions with Context Annotated on the Domain-specific KB (C)

We do not modify the Candidate Generation architecture of BLINK in this modification. Instead, we fine-tune both the context and candidate encoders on mentions with context annotated on the domain-specific KB. The aim of fine-tuning is to bring the representation of the input mention with context and the representation of the correct entity from the domain-specific KB close together, while pushing the representation of the input mention and the other (wrong) entities from the set C_e further apart. The loss function to achieve this is shown in Equation 5.3, where the training data T consists of mentions m (in context) and their corresponding entities r in the domain-specific KB, and C_e is a randomly sampled batch of wrong entities:

L(v_m, v_r)_{\theta_m, \theta_r} = -v_m^\top v_r + \log \sum_{e \in C_e} \exp(v_m^\top v_e) \qquad (5.3)

Fine-tuning on the Overlapping Entities between the General-domain KB and the Domain-specific KB (O)

In this modification, the Candidate Generation module is further fine-tuned on the overlapping entities between the general-domain KB and the domain-specific KB to make them explicitly closer and more similar in the representation space. This is done by maximizing the Dot Product between the representations v_{o_1} and v_{o_2} of the two entities in each overlapping pair, while pushing each of v_{o_1} and v_{o_2} away from non-overlapping entities. The loss function to achieve this is shown in Equation 5.4, where O is the set of overlapping entity pairs and C_p and C_q are randomly sampled batches of non-overlapping entities (v_p does not overlap with o_1, and v_q does not overlap with o_2):

L_\theta = \sum_{(o_1, o_2) \in O} \Big[ -v_{o_1}^\top v_{o_2} + \log \sum_{p \in C_p} \exp(v_{o_1}^\top v_p) + \log \sum_{q \in C_q} \exp(v_{o_2}^\top v_q) \Big] \qquad (5.4)

In addition, we performed a preliminary experiment in which we added the Mean Square Error (MSE) between v_{o_1} and v_{o_2} to the loss function, as shown in the last term of Equation 5.5, where \lambda is a hyperparameter between 0 and 1 that controls how much weight this term adds to the loss:

L_\theta = \sum_{(o_1, o_2) \in O} \Big[ -v_{o_1}^\top v_{o_2} + \log \sum_{p \in C_p} \exp(v_{o_1}^\top v_p) + \log \sum_{q \in C_q} \exp(v_{o_2}^\top v_q) + \lambda \cdot \| v_{o_1} - v_{o_2} \|^2 \Big] \qquad (5.5)

However, no gains were observed in the downstream task of Entity Linking, so we continued our experiments using Equation 5.4.

Fine-tuning the Context and Candidate Encoders on Mentions with Context from Reddit Annotated on the General-domain KB (Wikipedia) for Data Augmentation (A)

In this modification, we fine-tune the Candidate Generation module on mentions with context annotated on the domain-specific KB as well as on mentions with context from Reddit annotated on the general-domain KB (Wikipedia). These general-domain mentions act as augmented data to reduce overfitting to the domain-specific KB. The fine-tuning is performed in the same way as in (C). The loss function is shown in Equation 5.6, where the training data T consists of mentions m (in context) and their corresponding entities r in the general-domain KB or in the domain-specific KB, and C_e is a randomly sampled batch of wrong entities:

L(v_m, v_r)_{\theta_m, \theta_r} = -v_m^\top v_r + \log \sum_{e \in C_e} \exp(v_m^\top v_e) \qquad (5.6)

Fine-tuning only the Candidate Encoder on Mentions with Context Annotated on the Domain-specific KB (D)

In this modification, we experiment with fine-tuning only the candidate encoder, as shown in Figure 5.3. The weights of the context encoder \theta_m are frozen, while the weights of the candidate encoder \theta_r are learned such that the Dot Product between v_m and v_r is maximized and minimized for negative samples. The loss function is shown in Equation 5.7, where C_e is a randomly sampled batch of negative entities:

L(v_m, v_r)_{\theta_r} = -v_m^\top v_r + \log \sum_{e \in C_e} \exp(v_m^\top v_e) \qquad (5.7)

The idea behind modifying only the weights of the candidate encoder \theta_r and leaving the weights of the context encoder \theta_m unchanged is to focus the training on learning how to encode the entities of the domain-specific and general-domain KBs into the same vector space. This modification can help reduce overfitting to the domain-specific KB, since the parameters of the context encoder learned from the general-domain dataset are not modified. Moreover, it reduces the number of parameters used for fine-tuning, which speeds up training and reduces the memory they occupy, allowing larger batch sizes on the same cluster machine.
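A minimal sketch, assuming PyTorch, of how the in-batch negatives loss shared by Equations 5.1, 5.3, 5.6, and 5.7 and the overlapping-entity alignment loss of Equation 5.4 could be implemented; using the other batch elements as the negative sets, and including the gold score in the log-sum-exp, are simplifying assumptions of this sketch.

```python
# Minimal sketch: the two fine-tuning losses over a batch of embeddings.
import torch

def linking_loss(v_m: torch.Tensor, v_r: torch.Tensor) -> torch.Tensor:
    """v_m, v_r: (batch, dim) mention / gold-entity embeddings. The other
    entities in the batch act as the negative set C_e."""
    scores = v_m @ v_r.T                                    # pairwise dot products
    labels = torch.arange(v_m.size(0), device=v_m.device)   # gold on the diagonal
    # cross_entropy = -score_gold + log-sum-exp over the row, matching the
    # form -v_m^T v_r + log sum_e exp(v_m^T v_e).
    return torch.nn.functional.cross_entropy(scores, labels)

def overlap_loss(v_o1, v_o2, v_p, v_q):
    """v_o1, v_o2: (n, dim) paired overlapping entities; v_p, v_q: (k, dim)
    randomly sampled non-overlapping entities (Equation 5.4)."""
    pos = (v_o1 * v_o2).sum(dim=1)                  # v_o1^T v_o2 for each pair
    neg1 = torch.logsumexp(v_o1 @ v_p.T, dim=1)     # log sum_p exp(v_o1^T v_p)
    neg2 = torch.logsumexp(v_o2 @ v_q.T, dim=1)     # log sum_q exp(v_o2^T v_q)
    return (-pos + neg1 + neg2).sum()
```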
Model Variants
The modifications C, O, and A proposed in the previous section are combined to form different variants of our framework (CDNEL). The main idea is to provide different dataset components for fine-tuning the Candidate Generation module of the base model (BLINK), as shown in Figure 5.4. Mentions with context annotated on the domain-specific KB must be obtained for fine-tuning (C). Extracting the overlapping entities between the general-domain KB and the domain-specific KB allows further fine-tuning (O). In addition, mentions from Reddit annotated on the general-domain KB (Wikipedia) are used for data augmentation to reduce overfitting to the domain-specific KB (A). These different types of data lead to different variants of CDNEL. The variants used in the main experiments of Chapter 6 are shown in Figure 5.5:

• C: The context and candidate encoders of the Candidate Generation module are fine-tuned on mentions with context annotated on the domain-specific KB.
• CO: As in C, with further fine-tuning on the overlapping entities between the general-domain KB and the domain-specific KB.
• CA: As in C, with additional mentions with context from Reddit annotated on the general-domain KB (Wikipedia) for data augmentation.
• COA: As in CA, with further fine-tuning on the overlapping entities between the general-domain KB and the domain-specific KB.

In addition to these four variants, there are two more variants involving modification (D). They are not used in the main experiments, but appear in the analysis section of Chapter 6:

• D: The candidate encoder of the Candidate Generation module is fine-tuned on mentions with context annotated on the domain-specific KB.
• DA: As in D, with additional mentions with context from Reddit annotated on the general-domain KB (Wikipedia) for data augmentation.

The main experiments are performed without the D variants, since the C variants prove more accurate in the downstream task of Entity Linking.
Summary
In our framework (CDNEL), several modifications are proposed with the goal of fine-tuning BLINK to provide a better representation space for entities from multiple KBs, which should help in Entity Linking. There are different variants of CDNEL. The main experiments are based on the C variants, which share the fine-tuning of the context and candidate encoders of the Candidate Generation module on mentions with context annotated on the domain-specific KB. The other model variants add knowledge from other types of datasets: further fine-tuning on the overlapping entities between the general-domain KB and the domain-specific KB (O), and data augmentation with additional mentions extracted from Reddit and annotated on the general-domain KB (Wikipedia) (A).

Chapter 6

This chapter presents a description of the experimental setup along with the results of the conducted experiments and further analysis. The experiments aim to evaluate the extensibility of the base model (BLINK) to domain-specific KBs. This extensibility is achieved by fine-tuning BLINK on the domain-specific datasets. The performance of the fine-tuned model is tested on the Entity Linking task with mentions that should be annotated on the domain-specific KB. In addition, the model's performance is tested on mentions that should be annotated on the general-domain KB (Wikipedia), to check the overfitting of the fine-tuned model to the domain-specific KB.

Experimental Setup
This section explains the configurations used for the fine-tuning and evaluation processes.
In addition, the evaluation metrics and their relevance for assessing the proposed approach are described.
Model Configurations
As mentioned in Chapter 5, there are several variants of CDNEL. They are summarized in Table 6.1, and the hyperparameter configurations used in the fine-tuning and evaluation of BLINK and the CDNEL variants are shown in Table 6.2. The size of the training and validation batches plays a vital role in fine-tuning: the larger, the better, since the other samples in the batch act as negative samples for each input sample. The larger the number of negative samples, the greater the loss; thus, the model learns to push more negative samples further away from each input sample. However, larger batches require more memory. We experimented with batch sizes of 64, 32, and 16; given our cluster specifications, we could only fine-tune with a batch size of 16.
The infrastructure used for fine-tuning and evaluation is a Tesla V100 GPU cluster, available in 16 GB and 32 GB configurations. We used the 32 GB configuration with 2 CPUs and a total main memory of 64 GB.
We did not experiment with fine-tuning with different values for the learning rate and the number of epochs. Instead, we used the proposed values from BLINK for these hyperparameters. In addition, we used early stopping to store the best model checkpoint based on the model performance on the validation set.
The number of candidate entities k is typically 10, 64, or 100; larger values come at the cost of fine-tuning time and evaluation runtime. We limited k to 10, since we conducted several experiments across different domains with multiple CDNEL variants.
The maximum number of tokens for the input mention with context and entities with descriptions is 128. The larger, the better, as more context and information is kept for both mentions and entities with more tokens. However, this requires a larger memory and a longer runtime.
Evaluation Metrics
There is only one gold ground truth for each input mention in the evaluation process.
In the final step of the Candidate Ranking phase, the 10 most similar entities to an input mention are ranked from top to bottom. This ranking is evaluated using AP@10.
This metric is the average precision of the gold entity if it is present among the 10 most similar entities. Otherwise, it would be zero, as shown in Equation 6.1.
AP@10 = \frac{1}{GTE} \sum_{k=1}^{10} p@k \times rel@k \qquad (6.1)
Here p@k is defined as the number of gold entities among the top k entities divided by k, and GTE represents the number of ground truth entities; in our case it is always one, since there is only one gold entity per mention. rel@k is a relevance (indicator) function that equals one if the entity at rank k is relevant and zero otherwise.
The metric AP@10 is calculated for all the mentions N in the evaluation set, and its mean MAP@10 is calculated as shown in Equation 6.2.
MAP@10 = \frac{1}{N} \sum_{i=1}^{N} AP@10_i \qquad (6.2)
MAP@10 is chosen to provide a quantitative assessment of the quality of the ranking.
However, AP@1 is used to evaluate the model's overall quality, as shown in Equation 6.3:

AP@1 = \frac{1}{N} \sum_{i=1}^{N} rel@1_i \qquad (6.3)
It represents the end-to-end evaluation of the system and a specific assessment for the highest scoring candidate entity among the 10 most similar entities to a mention. Its average is calculated for N mentions in the evaluation set.
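A minimal sketch of these metrics for the single-gold-entity setting described above (GTE = 1 per mention); the entity ids in the usage example are illustrative.

```python
# Minimal sketch: AP@10, AP@1, and their means over the evaluation set.
def ap_at_10(ranked_ids: list, gold_id) -> float:
    """Average precision of the single gold entity within the top-10 ranking."""
    total = 0.0
    for k, entity_id in enumerate(ranked_ids[:10], start=1):
        rel = 1.0 if entity_id == gold_id else 0.0          # rel@k
        p_at_k = sum(e == gold_id for e in ranked_ids[:k]) / k
        total += p_at_k * rel
    return total  # GTE = 1, so no further normalization is needed

def ap_at_1(ranked_ids: list, gold_id) -> float:
    return 1.0 if ranked_ids and ranked_ids[0] == gold_id else 0.0

def mean_over_mentions(metric, rankings, golds) -> float:
    return sum(metric(r, g) for r, g in zip(rankings, golds)) / len(rankings)

# Gold entity ranked third: AP@10 = 1/3, AP@1 = 0.
print(ap_at_10([7, 2, 5], 5), ap_at_1([7, 2, 5], 5))
```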
Experiments and Results
This section presents the different experiments conducted in this Master's thesis. The main experiments are performed using the C variants of our framework (CDNEL), demonstrated in Section 5.2.2, leveraging fine-tuned pre-trained language models to merge entities from multiple KBs. In addition, we perform a qualitative assessment of a sample of mentions and investigate how the fine-tuned models link them. Moreover, we conduct an intrinsic evaluation of the joint representation space of the entities from multiple KBs.

Fine-tuning on Mentions Annotated on the Domain-specific KB

In this experiment, we address the research question, "To what extent does fine-tuning on domain-specific datasets affect the results of simultaneous Entity Linking?". The main experiment is conducted using the Zeshel dataset by Logeswaran et al. (2019) [2]. We experimented with four domains: American Football, Doctor Who, Fallout, and Final Fantasy. For each domain, we fine-tuned the C variants of CDNEL (C, CO, CA, and COA) and evaluated them on the respective domain evaluation set, consisting of held-out mentions annotated on that domain. The results are shown in Figures 6.1 and 6.2 and in Table 6.3.

The results in Table 6.3 show that CA is the best-performing model. This observation suggests that data augmentation with Reddit mentions annotated on the general-domain KB (Wikipedia) leads to better results when used for fine-tuning. The improvement is due to the general-domain nature of the Reddit mentions, which helps the fine-tuned model learn a better representation of the textual context around mentions, even when they should be annotated on the domain-specific KB. This augmentation is also helpful when the fine-tuning dataset for the domain-specific KB is tiny.
We perform the statistical significance testing described by Dror et al. (2019) [31], who provide a code implementation of the statistical significance tests used in NLP. According to Smucker et al. (2007) [32], Fisher's randomization test (permutation test) is recommended to test the statistical significance of the difference in Mean Average Precision between two algorithms (systems). We use the randomization test to check for statistical significance between BLINK and the best-performing CDNEL variant (CA) at a significance level (alpha) of 0.05. The p-value results are shown in Table 6.4.
The observed difference between BLINK and CA is statistically significant for all domains. This significance test illustrates the strength of CA in Entity Linking for mentions that should be annotated on the domain-specific KB.
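A minimal sketch of such a paired randomization test; the number of trials and the toy per-mention AP values are illustrative.

```python
# Minimal sketch: Fisher's randomization (permutation) test for the
# difference in mean AP between two systems on the same mentions.
import random

def randomization_test(scores_a, scores_b, trials=10_000, seed=0) -> float:
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    extreme = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            # Under the null hypothesis the systems are interchangeable,
            # so each paired score difference keeps or flips its sign.
            diff += (a - b) if rng.random() < 0.5 else (b - a)
        if abs(diff) / n >= observed:
            extreme += 1
    return extreme / trials  # two-sided p-value

p = randomization_test([1, 0, 1, 1, 0, 1], [0, 0, 1, 0, 0, 1])
print("p-value:", p)  # compare against alpha = 0.05
```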
Overfitting of the Fine-tuned Models to the Domain-specific KB
In this experiment, we aim to answer the research question, "Can we use data augmentation to reduce the overfitting of fine-tuned models to the domain-specific KB?". This is interesting because fine-tuned models have a high probability of overfitting to the domain-specific KBs. To test this, the models are evaluated on the Reddit evaluation set, whose mentions are annotated on the general-domain KB (Wikipedia). The results in Table 6.5 show that COA generally has the highest scores compared to BLINK and the other CDNEL variants; however, it performs worse on American Football.
On the other hand, CA consistently either resembles or outperforms BLINK for all domains. This observation suggests that the use of data augmentation of general-domain mentions helps reduce the overfitting of the fine-tuned models to the domain-specific KB.
This suggestion is supported by the fact that the CDNEL variant C, which contains no data augmentation, performs worse than BLINK for two domains, Doctor Who and Fallout.
We perform the same statistical significance test as for the evaluation on mentions annotated on the domain-specific KB, since the evaluation metrics are the same: Fisher's randomization test between BLINK and the CDNEL variant CA at a significance level (alpha) of 0.05. The p-value results are shown in Table 6.6. The observed difference between BLINK and CA is not statistically significant for almost all domains. This test illustrates the comparability of CA with BLINK in Entity Linking for mentions that should be annotated on the general-domain KB. These results support the claim that CA performs well for mentions that should be annotated on the domain-specific KB as well as for mentions that should be annotated on the general-domain KB (Wikipedia).
Mentions Qualitative Assessment
In this part, we take a closer look at the mentions annotated by BLINK and the fine-tuned models. We aim to assess how the fine-tuned models learned to link mentions to the general-domain and domain-specific KBs, and how the results of these annotations differ from those of BLINK. This assessment is shown in Table 6.7.
Table 6.7 (excerpt): example mentions with their textual context and the domain they come from.

Mention "Putin" (domain: Reddit), context: "Putin can't do much. Russia has no leverage over us and are already feeling huge pressure from American and EU sanctions (one big reason Putin threw his hat in with Trump and the GOP, to try and lift those sanctions)." Here the fine-tuned model (CA) succeeds in recognizing the mentions "Putin" and "Russia" and correctly links the mention "Putin" to the corresponding entity "Vladimir Putin".

Mention from the Final Fantasy domain, context: "The only real purpose of the item is to naturally observe the isogin smog ability used by acrophies, who will only use the ability when the entire player party has the darkness status."
Intrinsic Evaluation of Embeddings
In this experiment, we evaluate the joint representation space of the entities of each domain-specific KB from the Zeshel dataset and the general-domain KB (Wikipedia). These domain-specific KBs are American Football, Doctor Who, Fallout, and Final Fantasy; the numbers of overlapping entities between them and the Wikipedia KB are 22928, 3611, 752, and 413, respectively. In this setting, we do not consider the downstream task of Entity Linking; we only evaluate the embeddings generated by each variant of CDNEL. The intrinsic evaluation of American Football and Doctor Who could not be performed on all overlapping entities: their large number leads to an enormous number of pairwise embedding comparisons, exceeding the memory limit of our cluster. Instead, we draw a random sample of 1000 overlapping entity pairs for each of these two domains. The evaluation is based on the overlapping entities between each domain-specific KB and the general-domain KB (Wikipedia). We measure the Mean Reciprocal Rank (MRR) between the two entities of each overlapping pair, using the Cosine Similarity between their embeddings as the similarity metric. In addition, we measure the Cosine Similarity itself between the embeddings of each entity pair and average it over all overlapping pairs. The results of MRR and Average Cosine Similarity (ACS) are shown in Table 6.8.
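A minimal sketch of this intrinsic evaluation; the random embeddings stand in for encoder outputs, and the true counterpart of each Wikipedia entity is assumed to sit at the same index of the domain-KB matrix.

```python
# Minimal sketch: MRR and Average Cosine Similarity over overlapping pairs.
import numpy as np

rng = np.random.default_rng(0)
n_pairs, dim = 100, 32
wiki = rng.standard_normal((n_pairs, dim))
domain = wiki + 0.1 * rng.standard_normal((n_pairs, dim))  # roughly aligned

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

wiki, domain = normalize(wiki), normalize(domain)
sims = wiki @ domain.T  # cosine similarities between all pair members

# For pair i, the true counterpart is domain entity i; its rank is the number
# of domain entities at least as similar to wiki entity i as it is.
gold = sims[np.arange(n_pairs), np.arange(n_pairs)]
ranks = (sims >= gold[:, None]).sum(axis=1)
mrr = float(np.mean(1.0 / ranks))
acs = float(np.mean(gold))  # Average Cosine Similarity over the pairs
print(f"MRR={mrr:.3f}  ACS={acs:.3f}")
```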
We perform the statistical significance testing described by Dror et al. (2019) [31], who recommend Fisher's randomization (permutation) test for the statistical significance of the difference in MRR between two systems. We use the randomization test to check for statistical significance between BLINK and the best-performing CDNEL variant (COA) at a significance level (alpha) of 0.05. The p-value results are shown in Table 6.9.
The observed difference between BLINK and COA is statistically significant for all domains except American Football, which is expected given the results in Table 6.8.

Chapter 7

These results support our goal of having a single system, CA in our case, that performs well both for mentions that should be annotated on the domain-specific KB and for mentions that should be annotated on the general-domain KB (Wikipedia). Based on the findings of this thesis, when fine-tuning BLINK on a domain-specific dataset, we recommend including data augmentation in the form of general-domain mentions from Reddit when combining a general-domain KB with a domain-specific KB for simultaneous Entity Linking to both KBs within the same system. In addition, further fine-tuning on the overlapping entities helps to better represent their embeddings, bringing them as close as possible in the joint representation space.
Future Work
There is room for adapting the proposed framework to other domain-specific KBs. Possible extensions of this work include combining more than two KBs and enabling simultaneous linking to them within the same system. A deeper analysis could evaluate the Entity Linking task by measuring the Cosine Similarity between the embeddings of the input mention and the gold entity; this would provide a quantitative assessment of the system's Entity Linking quality compared to the base model. In addition, further studies could generalize the framework to domain-specific KBs that do not contain information in the form of textual descriptions. Further experiments could use larger datasets in the scenario where mentions annotated on both KBs come from the same document.
List of Figures
List of Tables
Since 2016 ,
2016various neural systems have addressed the problem of Entity Linking due to the Deep Learning revolution in Natural Language Processing (NLP). It has recently gained popularity due to the extensive capabilities of neural models in information extraction and semantic text understanding, as noted by Sevgili et al. (2021)
approaches work well on general-domain documents such as general news, as shown in Logeswaran et al. (2019) [2], Wu et al. (2020) [1] and Vyas and Ballesteros
Figure 2 . 1 .
21DBpedia Spotlight is another work by Mendes et al. (2011) [9] that depends on the prior probabilities between mentions and entities, usually represented by the number of times the mention is used to link the entity. WAT from Piccinno et al. (2014) [10] uses voting algorithms in addition to building the mention-entity graph. Babelfy by Moro et al. (2014) [11] uses a unified graph-based approach between entity linking and word sense disambiguation. PBOH is another work by Ganea et al. (2016) [12], in which input mentions are jointly and collectively linked across an entire document using an effective graphical model. Various publicly available datasets and annotated corpora are typically used to evaluate entity linking systems such as AIDA by Hoffart et al. (2011) [6], TAC KBP 2010 by Ji et al. (2010) [13], KORE-50 by Hoffart et al. (2011) [14], ACE2004 by Ratinov et al.
Figure 2 . 1 :
21Mention-entity graph example by Hoffart et al. (2011) [6].
Figure 2 .
23 shows the general components of the complete NEL system. The input text example "Paris is a famous American singer and actress, who is born in New York City"
Figure 2 . 2 :
22Model architecture adopted byKolitsas et al. (2018) [17].
Figure 2 . 3 :
23General architecture for a Neural Entity Linking system as stated bySevgili et al. (2021) [4].
. 4 .
4Accordingly, large-scale language models such as BERT by Devlin et al. (2019) [19] used the encoder architecture employed by the Transformer model to implement a robust semantic text understanding system that depends on the context provided.
Figure 2 . 4 :
24Encoder and decoder architectures constituting the Transformer model by Vaswani et al. (2017)[21].
Figure 2 . 5 :
25Pre-training and fine-tuning of BERT model by Devlin et al. (2019)[19].
Figure 2 . 6 :
26A simplified example of the representation space of an input mention and various entities in a KB, which can be used in the Candidate Generation phase.
Figure 2 . 7 :
27Difference between Bi-Encoder and Cross-Encoder architectures. The Bi-Encoder is used in the Candidate Generation module. The input mention is encoded by one of the two encoders of the Bi-Encoder, while the other encoder encodes the entity descriptions in a given KB. The 100 or k (hyperparameter) entity embeddings most similar to the input embedding are retrieved and serve as candidate entities.
Figure 3 .
31, and it inspired the other models in this section.
Figure 3 . 1 :
31Local neural attention mechanism deployed by Ganea and Hofmann (2017)[23].
Figure 3 . 2 :
32Entity type prediction model for Entity Linking by Onoe and Durett (2020)[24]. model can be effectively generalized to other domains. However, their work is limited because they need to include entity type information in the Candidate Generation step.The work byPeters et al. (2019) [25] also falls into this category and is discussed in more detail in the next section.
( 2019 )
2019[25]. They propose a general method for embedding multiple knowledge bases into the BERT model to improve its contextual word representation. The result is the knowledge-enhanced BERT (KnowBERT), as shown inFigure 3.3.
Figure 3 . 3 :
33Enhancing the mention-span representations with knowledge from the KB using word-to-entity-span attention by Peters et al. (2019)[25].
Figure 3 . 4 :
34Three encoder architectures investigated by Humeau et al. (2019)[26].
Figure 3 . 5 .
35Figure 3.5.
Figure 3 . 5 :
35Encoding different sources of information provided for an entity byGupta et al. (2017) [27].Encoders for different sources of information about an entity are introduced and optimized so that the final embedding of the entity is similar to all encoded representations.However, this work is limited by the quality of the LSTM cells. They used bidirectional LSTMs in their encoders, which are less effective at sequence modeling than the BERT model.Another work is byGillick et al. (2019) [28], which is one of the early methods to use dense embeddings and Cosine Similarity to compare entity representations for the Candidate Generation step. It included an encoder for fine-grained entity types (Wikipedia category labels), as shown inFigure 3.6.
Figure 3 . 6 :
36Dual encoder architecture deployed by Gillick et al. (2019)[28].
( 2020 )
2020[1] andLogeswaran et al. (2019) [2], but they generalize these models and allow them to handle arbitrary KBs containing entities represented by an arbitrary set of attribute-value pairs.
Figure 3 . 7 :
37Three different ways to represent an entity with arbitrary attribute-values by Vias and Ballesteros (2020)[5].
on the survey by Sevgili et al. (2021) [4], there is a surge of models that address the problem of extension of knowledge bases of new domains in a zero-shot fashion. Zero-shot learning in the context of Entity Linking means that entities from a new KB do not appear in the training set. The basic idea is to train the system on a domain with rich labeled data (general-domain KB). During the inference time, the entities of the domain-specific KB are encoded without re-training the model. In this regard, a work by Logeswaran et al. (2019) [2] presents an effective domainadaptive pre-training (DAP) to solve the domain shift problem associated with linking to unseen entities of a new domain. Their experiments show that their model is robust when linking mentions to entities from specialized domains such as legal cases or company project descriptions.
Figure 3 . 8 :
38Multiple training and test domains, and the extraction of labelled mentions using hyper-links by Logeswaran et al. (2019) [2].
which means that the model is first pre-trained on the open corpus, which is a combination of Wikipedia and the BookCorpus datasets used in BERT, then pre-trained on the combined source (src) and target (tgt) domains, then pre-trained on the target-domain data only, and finally fine-tuned on the source-domain labeled data. Their methodology is limited by the use of BM25, a variation of TF-IDF, in their Candidate Generation step. Wu et al. (2020) [1] proposes another approach (BLINK) based on the previous work by Logeswaran et al. (2019) [2], but they do not use domain-adaptive pre-training. It is a complete domain-independent approach trained only on labeled data from a general domain, e.g., Wikipedia. They show that models based on BERT reach new peaks in large-scale Entity Linking when used in a zero-shot setup. They present a two-stage zero-shot algorithm for Entity Linking based on fine-tuning BERT architecture, where a short textual description defines each entity. They trained the model on annotated data containing mentions corresponding to entities from the general-domain KB (Wikipedia).
Figure 3 . 9 :
39Full pipeline for the Bi-Encoder and Cross-Encoder architecture adopted byWu et al. (2020) [1].
•
They rely on the performance of the Bi-Encoder to include the domain-specific entities in the single vector space of the general domain without additional training of the model. However, they do not incorporate the information from the overlapping entities between the general-domain KB (Wikipedia) and domain-specific KB in their alignment in the single vector space.
zero-shot entity linking dataset (Zeshel) is constructed by Logeswaran et al. (2019) [2] from Fandom 1 . It contains multiple worlds (domains). For each domain, there are entities with textual descriptions and labeled mentions extracted from articles about that domain. The number of entities and the division of mentions into training, validation, and evaluation sets are shown in
extraction based on the title is done because it is impossible to compare all available pairs manually. The general-domain KB (Wikipedia) has 5903538 entities, and each of the domain-specific KBs has on average around 20000 entities. The number of possible pairwise comparisons is enormous. As a second step, to check whether the extracted fuzzy overlapping entities are semantically similar, we propose to use a Sentence-Transformer model (Roberta-large) by Reimers et al. (2019) [22], which is the best-performing model according to the Semantic Text Similarity (STS) benchmark by Cer et al. (2017) [29]. For each entity pair in the fuzzy overlapping entities, the textual description of each entity in the pair is given to the Sentence-Transformer model, which provides an output vector representation. The two representations of the pair of entities are compared based on Cosine Similarity. This comparison is performed for all pairs, and their average similarity scores for each domain can be viewed in
. 4 :
4modified version of AIDA by Hoffart et al. (2011) [6] extended with the Cybersecurity domain. From these documents, 1755 mentions are extracted. They are annotated on both Cybersecurity (domain-specific KB) based on Mitre Attack 2 and Wikipedia (general-domain KB). Mitre Attack is a globally accessible knowledge base of adversary tactics and techniques based on real-world observations. To prepare a training dataset for fine-tuning, we divided the documents into nine documents for fine-tuning and one for evaluation. To obtain silver ground truth annotations for the mentions in the nine documents, we annotated them on Cybersecurity (domain-specific KB) and Wikipedia (general-domain KB) using the BLINK model by Wu et al. (2020) [1]. We then extracted the mentions where the models BLINK and modified AIDA match in their annotations. There are 710 mentions agreed on by both systems. We consider these mentions as silver ground truth annotations used for fine-tuning. The number of entities in each domain and the Cybersecurity domain used in the fine-tuning and evaluation of the proposed approach breakdown of mentions into training, validation, and evaluation sets are shown inTable 4.4.We created a gold ground-truth standard for the left-out document for evaluation through manual annotation. The resulting evaluation set includes 145 mentions. Manual annotation is performed using the Inception annotation interface fromKlie et al. (2018) [30] as shown inFigure 4.1.
Figure 4 . 1 :
41Cybersecurity document manual annotation by Inception.
Reddit mentions dataset is constructed by Botzer et al. (2021) [3]. These are mentions extracted from Reddit posts and comments by Reddit users and annotated on Wikipedia entities. The breakdown of these mentions into training, validation, and evaluation sets is shown in Table 4.5. We used Reddit mentions because they are annotated on the general-domain KB (Wikipedia) and do not have very domain-specific terms and mentions. These general-domain mentions are used to provide more data for domain-specific datasets. This augmentation helps if these domain-specific datasets have only a small set of annotated mentions. Thus, it can serve as a method to increase the size of the dataset. The augmented data for each domain-specific dataset is used to fine-tune the proposed approach. Moreover, these mentions can reduce the overfitting of the fine-tuned models to the domain-specific KB. They are annotated on the general-domain KB (Wikipedia). The Reddit mentions evaluation set is used to test the overfitting of these models and ensure that they can properly link mentions, whether they should be linked to the domainspecific KB or the general-domain KB. explains the inner workings of the base model and presents the various proposed approaches to extend it to include entities of domain-specific KBs. A Neural Entity Linking model should serve as the base model. It should be able to handle general-domain documents. We select the model (BLINK) by Wu et al. (2020) [1] as the base model to run our experiments. It achieved state-of-the-art results on generaldomain datasets such as TAC KBP 2010 by Ji et al. (2010) [13]. Wu et al. (2020) [1] states that to allow BLINK to add a domain-specific KB, its entities must have descriptions in a textual format. The model encodes these new entities to obtain their embeddings. For a new input mention with textual context, its embedding is compared to all embeddings of entities from the domain-specific and general-domain KBs using Cosine Similarity.
Figure 5. 1 :
1BLINK: Candidate Generation. Both encoders consist of the same components. The first component, BERT, takes as input the mention with a context in the context encoder and the entity textual description in the candidate encoder. The output of the [CLS] token from BERT in both encoders serve as fixed-size v m and v r embeddings. This part aims to approximate these two embeddings in the vector space by maximizing their Cosine Similarity. BERT weights of the context and candidate encoders are θ m and θ r , respectively. Both encoders are trained to learn the best parameters θ m and θ r to maximize the Dot Product between v m and v r and minimize it for other negative samples. The loss function used to achieve this is shown in Equation 5.1, where C e is a randomly sampled batch of negative entities. L(v m , v r ) θm,θr = : Input mention representation v r : Entity description representation v e : Negative entity description representation The model is trained with mentions from Wikipedia articles annotated on Wikipedia entities extracted in May 2019. This model serves as the base model that can be extended to domain-specific KBs. After training is completed, embeddings for Wikipedia entities can be computed and stored in an index table for inference. During inference time, the input mention text is encoded using the context encoder of the Bi-Encoder, and the similarity in terms of Cosine Similarity is calculated with all the embeddings in the stored index table. The most similar k (hyperparameter) entities are extracted and serve as input to the Cross-Encoder in the next phase (Candidate Ranking phase).5.1.2 Candidate Ranking Phase The figure 5.1 is simplified for the Candidate Generation components while adding the new components for the Candidate Ranking module on the right, as shown in Figure 5.2.
Figure 5 . 2 :
52BLINK: Candidate Ranking.The generated candidate entities are formatted in a string representation that the Cross-Encoder can process. These string representations consist of the entity title concatenated with its textual description and separated by a special token[ENT], as shown on the right inFigure 5.2. Each of these string representations is concatenated with the context of the input mention and serves as input to the Cross-Encoder so that it can learn the joint context between the textual context of the input mention and the textual description of each of the candidate entities. Depending on the output, it assigns a score indicating how similar they are.The loss function of the Cross-Encoder can be represented in Equation 5.2, where v m,e is the embedding generated by BERT for the concatenated text of the input mention and the candidate entity description, and W are the classification layer weights (Fully Connected Neural Network) at the top of the Cross-Encoder. C e is the set of negative candidate entities.L(v m,e ) W = −v m,e W + log k∈Ce exp(v m,k W ) (5.2)v m,e : Representation for an input mention concatenated with the gold candidate entity v m,k : Representation for an input mention concatenated with a negative candidate entity
T
cation. We instead fine-tune both the context and candidate encoders on mentions with context annotated on the domain-specific KB. The aim of fine-tuning is to make the representation for the input mention with context and the representation of the correct entity from the domain-specific KB close together. In addition, the representation ofthe input mention and other (wrong) entities from set C e are further apart. The loss function to achieve this is shown in Equation 5.3, where C e is a randomly sampled batch of wrong entities. : Training data consisting of mentions m (in context) and their corresponding entities r in the domain-specific KB v m : Input mention representation v r : Entity description representation v e : Negative entity description representation Fine-tuning on the Overlapping Entities between the General-domain KB and the Domain-specific KB (O) In this modification, the Candidate Generation module is further fine-tuned on the overlapping entities between the general-domain KB and the domain-specific KB to make them explicitly closer and similar in the representation space. This learning is done by maximizing the Dot Product between v o1 and v o2 representing each entity in an overlapping entity pair. In addition, we make each of v o1 and v o2 far away from non-overlapping entities. The loss function to achieve this is shown in Equation 5.4,
Set of overlapping entities v o1 : First entity representation of an overlapping entity pair v o2 : Second entity representation of an overlapping entity pair v p : Representation of an entity that does not overlap with o 1 v q : Representation of an entity that does not overlap with o 2 In addition, we performed a preliminary experiment in which we added the Mean Square Error (MSE ) between v o1 and v o2 to the loss function, as shown in the last term in Equation 5.5, where λ is a hyperparameter between 0 and 1 that controls how much weight this term adds to the loss function. However, no gains are observed in the downstream task of Entity Linking. Instead, we continued our experiments using Equation 5.4.
Fine
-tuning the Context and Candidate Encoders on Mentions with Context from Reddit Annotated on the General-domain KB (Wikipedia) for Data Augmentation (A) In this modification, we fine-tune the Candidate Generation module on mentions with context annotated on the domain-specific KB as well as mentions with context from Reddit annotated on the general-domain KB (Wikipedia). These general-domain mentions act as augmented data to reduce overfitting to the domain-specific KB. The fine-tuning is performed in the same way as in (C ). The loss function used for the fine-tuning is shown in Equation 5.6, where C e is a randomly sampled batch of wrong entities.
T
: Training data consisting of mentions m (in context) and their corresponding entities r in the general-domain KB or in the domain-specific KB v m : Input mention representation v r : Entity description representation v e : Negative entity description representation Fine-tuning only the Candidate Encoder on Mentions with Context Annotated on the Domain-specific KB (D)
Figure 5 . 3 :
53CDNEL (D): Candidate Generation.The weights of the context encoder θ m are frozen without change. The weights of the candidate encoder θ r are modified and learned such that the Dot Product between v m and v r is maximized and minimized for other negative samples. The loss function used to achieve this is shown in Equation 5.7, where C e is a randomly sampled batch of negative entities.L(v m , v r ) θm = −v T m v r + log e∈Ce exp(v T m v e ) (5.7)v m : Input mention representation v r : Entity description representation v e : Negative entity description representation The idea behind modifying only the weights of the candidate encoder θ r and leaving the weights of the context encoder θ m unchanged is to focus the training on the candidate encoder weights to learn how to encode the entities of the domain-specific and the general-domain KBs to fit into the same vector space. This modification can help reduce the overfitting of the model to the domain-specific KB since the parameters of the context encoder learned from the general-domain dataset are not modified. Moreover, it reducesthe number of parameters used for fine-tuning, which increases the speed of training. In addition, the memory occupied by these parameters is reduced, allowing the model to use larger batch sizes when fine-tuning is done using the same cluster machine.
modifications: C, O and A proposed in the previous section are used to form different variants of our framework (CDNEL). The main idea is based on providing different dataset components for fine-tuning the Candidate Generation module of the base model (BLINK), as shown in Figure 5.4.
Figure 5 . 4 :
54Dataset components used in the fine-tuning. Mentions with context annotated on the domain-specific KB must be obtained for finetuning (C ). Extracting the overlapping entities between the general-domain KB and the domain-specific KB is used to perform further fine-tuning (O). In addition, mentions from Reddit annotated on the general-domain KB (Wikipedia) are used for data augmentation to reduce overfitting to the domain-specific KB (A). These different types of data lead to different variants of CDNEL. The variants used to perform the main experiments in Chapter 6 are shown in Figure 5.5.
Figure 5 . 5 :
55CDNEL model variants and datasets used in the fine-tuning. These four variants of CDNEL are as follows:-• C : The context and candidate encoders of the Candidate Generation module are fine-tuned on mentions with context annotated on the domain-specific KB. • CO: The context and candidate encoders of the Candidate Generation module are fine-tuned on mentions with context annotated on the domain-specific KB. In addition, the module is further fine-tuned on the overlapping entities between the general-domain KB and the domain-specific KB. • CA: The context and candidate encoders of the Candidate Generation module are fine-tuned on mentions with context annotated on the domain-specific KB as well as mentions with context from Reddit annotated on the general-domain KB (Wikipedia) for data augmentation. • COA: The context and candidate encoders of the Candidate Generation module are fine-tuned on mentions with context annotated on the domain-specific KB as well as mentions with context from Reddit annotated on the general-domain KB (Wikipedia) for data augmentation. In addition, the module is further fine-tuned on the overlapping entities between the general-domain KB and the domain-specific KB. In addition to these four variants, there are two more variants involving modification (D). These variants are not used to run the main experiments, but are used in the analysis section of Chapter 6. They are as follows:-• D: The candidate encoder of the Candidate Generation module is fine-tuned on mentions with context annotated on the domain-specific KB. • DA: The candidate encoder of the Candidate Generation module is fine-tuned on mentions with context annotated on the domain-specific KB as well as mentions with context from Reddit annotated on the general-domain KB (Wikipedia) for data augmentation.
KBs, which should help in Entity Linking. There are different variants of CDNEL. The main experiments are based on the C variants. They have in common the fine-tuning of the context and candidate encoders of the Candidate Generation module on mentions with context annotated on the domain-specific KB. For the other model variants, additional knowledge from other types of datasets is used for fine-tuning. For example, we perform further fine-tuning on the overlapping entities between the general-domain KB and the domain-specific KB, represented by (O). In addition, we used data augmentation by including additional mentions in the fine-tuning that are extracted from Reddit and annotated on the general-domain KB (Wikipedia). This data augmentation is represented by (A). presents a description of the experimental setup with the results of the conducted experiments. In addition, further analysis of the results is provided. The experiments aim to evaluate the extensibility of the base model (BLINK) to domainspecific KBs. This extensibility is performed by fine-tuning BLINK on the domainspecific datasets. The performance of the fine-tuned model is tested for the Entity Linking task with mentions that should be annotated on the domain-specific KB. In addition, the model's performance is tested on mentions that should be annotated on the general-domain KB (Wikipedia). This additional test is performed to check the overfitting of the fine-tuned model to the domain-specific KB.
6. 2 :
2Hyperparameters configurations used in the fine-tuning and evaluation of BLINK and CDNEL variants The size of the training and validation batches plays a vital role in fine-tuning. The larger, the better, since the rest of the samples in the batch for each input sample act as negative samples. The larger the number of negative samples, the greater the loss.
This section presents the different experiments conducted in this Master's thesis. The main experiments are performed using the C variants of our framework (CDNEL), demonstrated in Section 5.2.2 of Chapter 5, leveraging the fine-tuning of pre-trained language models to merge entities from multiple KBs. In addition, we performed a qualitative assessment of a sample of mentions and investigated how the fine-tuned models can correctly link them. Moreover, we conducted an intrinsic evaluation of the joint representation space of these entities from multiple KBs.

6.2.1 Fine-tuning on Mentions Annotated on the Domain-specific KB

In this experiment, we address the research question, "To what extent does fine-tuning on domain-specific datasets affect the results of simultaneous Entity Linking?". The main experiment is conducted using the Zeshel dataset by Logeswaran et al. (2019) [2]. We experimented with four different domains: American Football, Doctor Who, Fallout, and Final Fantasy. For each domain, we fine-tuned the C variants of CDNEL: C, CO, CA, and COA. They are then evaluated against their respective domain evaluation set, consisting of separate mentions annotated on that domain. The results are shown in Table 6.3.
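For reference, the two reported metrics, AP@1 and MAP@10, can be computed as in the sketch below, under the assumption (our reading of the setup) that each mention has exactly one gold entity:

```python
def ap_at_1(ranked_ids, gold_id):
    """1.0 if the gold entity is ranked first, else 0.0."""
    return float(ranked_ids[0] == gold_id)

def ap_at_10(ranked_ids, gold_id):
    """With a single gold entity, AP@10 reduces to the reciprocal rank
    of the gold entity within the top 10 (0.0 if it is not retrieved)."""
    for rank, entity_id in enumerate(ranked_ids[:10], start=1):
        if entity_id == gold_id:
            return 1.0 / rank
    return 0.0

def mean_metric(metric, predictions):
    """predictions: iterable of (ranked_candidate_ids, gold_id) pairs;
    MAP@10 is mean_metric(ap_at_10, ...); AP@1 is averaged the same way."""
    predictions = list(predictions)
    return sum(metric(r, g) for r, g in predictions) / len(predictions)
```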
Figure 6.1: AP@1 of BLINK vs. CDNEL variants evaluated on mentions annotated on each domain-specific KB.
Dror et al. (2018) [31] provide a code implementation of the statistical significance tests used in NLP. According to Smucker et al. (2007) [32], Fisher's randomization test (permutation test) is recommended to test the statistical significance of the difference in Mean Average Precision (MAP) scores.
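A minimal sketch of such a paired randomization test over per-mention scores (illustrative; the actual implementation referenced above may differ):

```python
import random

def permutation_test(scores_a, scores_b, trials=10000, seed=0):
    """Two-sided paired randomization test: under the null hypothesis the
    two systems are exchangeable, so each pair's labels may be swapped."""
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    extreme = 0
    for _ in range(trials):
        diff = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:
                a, b = b, a                  # randomly swap the pair
            diff += a - b
        if abs(diff) / n >= observed:
            extreme += 1
    return extreme / trials                  # p-value; compare against 0.05
```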
Figure 6.2: MAP@10 of BLINK vs. CDNEL variants evaluated on mentions annotated on each domain-specific KB.
Figure 6.3: AP@1 of BLINK vs. CDNEL variants evaluated on Reddit mentions annotated on the general-domain KB (Wikipedia).

Table 6.6: P-values at the significance level (alpha) of 0.05 for the evaluation results of each domain.
Figure 6.4: MAP@10 of BLINK vs. CDNEL variants evaluated on Reddit mentions annotated on the general-domain KB (Wikipedia).

(Excerpt from Table 6.7; columns: Mention, Domain, BLINK, CA. Example mention: "Putin can't do much. Russia has no leverage over us and are already feeling huge pressure from American and EU sanctions (one big reason Putin threw his hat in with Trump and the GOP, to try and lift those sanctions).")
In this experiment, we evaluate the joint representation space of the entities of each domain-specific KB from the Zeshel dataset and the general-domain KB (Wikipedia). These domain-specific KBs are from American Football, Doctor Who, Fallout, and Final Fantasy. The overlapping entities between them and Wikipedia KB are 22928, 3611, 752, and 413, respectively. In this setting, we do not consider the downstream task of Entity Linking. We only evaluate the embeddings generated by each variant of CDNEL.

The intrinsic evaluation of American Football and Doctor Who could not be performed for all overlapping entities between them and the general-domain KB (Wikipedia). The number of overlapping entities is large and leads to an enormous number of pairwise comparisons of their embeddings. This has the disadvantage of requiring more memory and exceeding the memory limit of our cluster. However, we draw a random sample of 1000 overlapping entity pairs and perform the evaluation using this sample.

The evaluation is based on the overlapping entities between each domain-specific KB and the general-domain KB (Wikipedia). We measure the Mean Reciprocal Rank (MRR) between the two entities of each overlapping entity pair, using the Cosine Similarity between their embeddings as the similarity metric. In addition, we measure the Cosine Similarity itself between the embeddings of this entity pair and then average it over all pairs of overlapping entities. The results of MRR and Average Cosine Similarity (ACS) are presented in Table 6.8.
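Our reading of this evaluation procedure can be sketched as follows, assuming spec_emb[i] and gen_emb[i] are the embeddings of the i-th overlapping entity pair (function names are illustrative):

```python
import numpy as np

def cosine_matrix(A, B):
    """Row-wise cosine similarities between two embedding matrices."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

def mrr_and_acs(spec_emb, gen_emb):
    """spec_emb[i] and gen_emb[i] form the i-th overlapping entity pair.
    For each domain-specific embedding, its paired general-domain embedding
    should ideally rank first among all general-domain embeddings."""
    sims = cosine_matrix(spec_emb, gen_emb)                  # (n, n)
    paired = np.diag(sims)                                   # similarity of true pairs
    ranks = (sims > paired[:, None]).sum(axis=1) + 1         # rank of the true pair
    mrr = float(np.mean(1.0 / ranks))
    acs = float(np.mean(paired))                             # Average Cosine Similarity
    return mrr, acs
```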
This may require more epochs when further fine-tuning on the overlapping entities of American Football. We use the same number of epochs for all domains regardless of the number of their overlapping entities. The evaluation indicates a better joint representation space for the entity embeddings generated by our framework (CDNEL). This observation indicates that further fine-tuning of the base model (BLINK) on the overlapping entities between the domain-specific KB and the general-domain KB helps obtain higher-quality entity embeddings, where the overlapping entities are as close as possible in the joint representation space.
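The further fine-tuning on overlapping entities can be thought of as explicitly maximizing the Dot Product similarity of each pair; a minimal illustrative objective is sketched below (the actual CDNEL objective may be formulated differently, e.g., combined with in-batch negatives):

```python
import torch

def overlap_alignment_loss(spec_emb: torch.Tensor,
                           gen_emb: torch.Tensor) -> torch.Tensor:
    """spec_emb[i] and gen_emb[i] are the two representations of the i-th
    overlapping entity; minimizing the negative mean dot product pulls
    them together in the joint representation space."""
    return -(spec_emb * gen_emb).sum(dim=-1).mean()
```

In practice, plain dot-product maximization can inflate embedding norms, so one would typically combine it with the contrastive Candidate Generation loss rather than use it alone.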
List of Figures

Figure 2.1 Mention-entity graph example by Hoffart et al. (2011) [6].
Figure 2.2 Model architecture adopted by Kolitsas et al. (2018) [17].
Figure 2.3 General architecture for a Neural Entity Linking system as stated by Sevgili et al. (2021) [4].
Figure 2.4 Encoder and decoder architectures constituting the Transformer model by Vaswani et al. (2017) [21].
Figure 2.5 Pre-training and fine-tuning of BERT model by Devlin et al. (2019) [19].
Figure 2.6 A simplified example of the representation space of an input mention and various entities in a KB, which can be used in the Candidate Generation phase.
Figure 2.7 Difference between Bi-Encoder and Cross-Encoder architectures.
Figure 3.1 Local neural attention mechanism deployed by Ganea and Hofmann (2017) [23].
Figure 3.2 Entity type prediction model for Entity Linking by Onoe and Durrett (2020) [24].
Figure 3.3 Enhancing the mention-span representations with knowledge from the KB using word-to-entity-span attention by Peters et al. (2019) [25].
Figure 3.4 Three encoder architectures investigated by Humeau et al. (2019) [26].
Figure 3.5 Encoding different sources of information provided for an entity by Gupta et al. (2017) [27].
Figure 3.6 Dual encoder architecture deployed by Gillick et al. (2019) [28].
Figure 3.7 Three different ways to represent an entity with arbitrary attribute-values by Vyas and Ballesteros (2020) [5].
Figure 3.8 Multiple training and test domains, and the extraction of labelled mentions using hyper-links by Logeswaran et al. (2019) [2].
Figure 3.9 Full pipeline for the Bi-Encoder and Cross-Encoder architecture adopted by Wu et al. (2020) [1].
Figure 4.1 Cybersecurity document manual annotation by Inception.
Figure 5.1 BLINK: Candidate Generation.
Figure 5.2 BLINK: Candidate Ranking.
Figure 5.3 CDNEL (D): Candidate Generation.
Figure 5.4 Dataset components used in the fine-tuning.
Figure 5.5 CDNEL model variants and datasets used in the fine-tuning.
Figure 6.1 AP@1 of BLINK vs. CDNEL variants evaluated on mentions annotated on each domain-specific KB.
Figure 6.2 MAP@10 of BLINK vs. CDNEL variants evaluated on mentions annotated on each domain-specific KB.
Figure 6.3 AP@1 of BLINK vs. CDNEL variants evaluated on Reddit mentions annotated on the general-domain KB (Wikipedia).
Figure 6.4 MAP@10 of BLINK vs. CDNEL variants evaluated on Reddit mentions annotated on the general-domain KB (Wikipedia).
Contents

1 Introduction
  1.1 Motivation
  1.2 Limitations of the State of the Art
  1.3 Goals of Thesis
  1.4 Research Questions
  1.5 Structure of the Thesis
2 Background
  2.1 Entity Linking
  2.2 Neural Entity Linking
  2.3 Generic Example of Neural Entity Linking
  2.4 Sequence Modeling based on Transformers
  2.5 Bi-Encoder and Cross-Encoder Architecture
3 Related Work
  3.1 Entity Linking with Neural Attention
  3.2 Large-scale Language Models Pre-training for Entity Linking
  3.3 Linking to KBs with Arbitrary Schemas
  3.4 Zero-Shot Entity Linking
4 Datasets
  4.1 Dataset Gathering
    4.1.1 Zeshel
    4.1.2 Cybersecurity
  4.2 Dataset Augmentation
    4.2.1 Reddit
  5.1 BLINK
    5.1.1 Candidate Generation Phase
    5.1.2 Candidate Ranking Phase
  5.2 CDNEL
    5.2.1 Proposed Modifications
    5.2.2 Model Variants
    5.2.3 Summary
6 Experimental Evaluation
  6.1 Experimental Setup
    6.1.1 Model Configurations
    6.1.2 Evaluation Metrics
  6.2 Experiments and Results
    6.2.1 Fine-tuning on Mentions Annotated on the Domain-specific KB
    6.2.2 Overfitting of the Fine-tuned Models to the Domain-specific KB
    6.2.3 Mentions Qualitative Assessment
    6.2.4 Intrinsic Evaluation of Embeddings
  6.3 Analysis
    6.3.1 Fine-tuning on Mentions Annotated on a Mixed-domain KB
    6.3.2 Fine-tuning on Mentions Annotated on both Domains
7 Conclusion
Introduction
A Neural Entity Linking (NEL) system solves the task of Entity Linking using neural methods. Such systems depend heavily on neural networks' robust representation learning capabilities through solid encoder architectures. An example is the work by Eshel et al. (2017) [16], which uses a GRU plus attention to encode the left and right context of a mention. Kolitsas et al. (2018) [17] use pre-trained Word2Vec vectors by Mikolov et al. (2013) [18] as word embeddings and apply a bidirectional LSTM on top of these embeddings to obtain context-aware word embeddings for the input mention with context, as shown in Figure 2.2. Since 2019, numerous works have used the BERT model by Devlin et al. (2019) [19] to obtain contextualized embeddings of mention context and entities, such as Broscheit et al. (2019) [20], Logeswaran et al. (2019) [2], and Wu et al. (2020) [1].
The fine-tuning and evaluation of the proposed approach are done separately for each domain. Training and validation sets are used for fine-tuning, while the approach is evaluated using the evaluation set.

Table 4.1: Different Fandom domains used in the fine-tuning and evaluation of the proposed approach.

Domain            | Entities | Training Mentions | Validation Mentions | Evaluation Mentions
American Football | 31929    | 3000              | 320                 | 578
Doctor Who        | 40281    | 6360              | 640                 | 1334
Fallout           | 16992    | 2500              | 320                 | 466
Final Fantasy     | 14044    | 4360              | 640                 | 1041

These domains are used separately to extend the general-domain KB (Wikipedia). This extension is consistent with our goal of having a single model that can simultaneously link to multiple domains.
Table 4.2: Average Cosine Similarity measures for the fuzzy overlapping entities for each domain.

Domain            | Fuzzy Overlapping Entities | Max Cosine Similarity | Average | Min
American Football | 24074                      | 0.9965                | 0.8313  | -0.1306
Doctor Who        | 10458                      | 0.9284                | 0.4135  | -0.2777
Fallout           | 2876                       | 0.9855                | 0.3913  | -0.1835
Final Fantasy     | 1495                       | 0.9934                | 0.4047  | -0.2060
Table 4.3: Filtered similar overlapping entities for each domain in Fandom.

Domain            | Fuzzy Overlapping Entities (Title Exact Matching) | Similar Overlapping Entities (Descriptions Semantic Matching)
American Football | 24074 | 22928
Doctor Who        | 10458 | 3611
Fallout           | 2876  | 752
Final Fantasy     | 1495  | 413

These filtered similar overlapping entities between each domain-specific KB and the general-domain KB (Wikipedia) are used in the fine-tuning process to explicitly modify their vector representations to be more similar in the single vector space.
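A hedged sketch of this two-stage filtering, using the sentence-transformers library for the description-level semantic matching; the model name and threshold are illustrative choices, not necessarily those used in the thesis:

```python
from sentence_transformers import SentenceTransformer, util

def filter_overlapping(domain_entities, wiki_entities, threshold=0.8):
    """domain_entities / wiki_entities: dicts mapping title -> description.
    Stage 1: exact title matching ("fuzzy overlapping entities").
    Stage 2: the two descriptions must also be semantically similar."""
    model = SentenceTransformer("all-MiniLM-L6-v2")   # illustrative model choice
    fuzzy = [t for t in domain_entities if t in wiki_entities]
    similar = []
    for title in fuzzy:
        embs = model.encode([domain_entities[title], wiki_entities[title]],
                            convert_to_tensor=True)
        if util.cos_sim(embs[0], embs[1]).item() >= threshold:
            similar.append(title)
    return fuzzy, similar
```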
To construct the overlapping entities between the Cybersecurity KB and Wikipedia (general-domain KB), Wikidata Query Service 3 is used, which has a mapping between Wikidata entity ID and Mitre Attack entity ID. Each ID is an identifier for an entity in Wikidata and Mitre Attack KBs, respectively. The entities that have an entry in this mapping are extracted, and their Wikidata ID is converted to the corresponding Wikipedia ID. Only 44 entities have an entry in this mapping, and they act as overlapping entities between the two domains.
Table 4.5: Reddit dataset used in the fine-tuning and evaluation of the proposed approach.

Domain | Entities | Training Mentions | Validation Mentions | Evaluation Mentions
Reddit | 5903538  | 7711              | 409                 | 1328
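The ID-mapping step described above (before Table 4.5) could be performed against the public SPARQL endpoint roughly as follows; note that the property ID P0000 is a placeholder (hypothetical), since we only sketch the mechanism of mapping Wikidata items to their English Wikipedia articles:

```python
import requests

# P0000 is a placeholder: we do not know the exact Wikidata property that
# stores the Mitre Attack entity ID, so this query is a hypothetical sketch.
QUERY = """
SELECT ?item ?attackId ?article WHERE {
  ?item wdt:P0000 ?attackId .
  ?article schema:about ?item ;
           schema:isPartOf <https://en.wikipedia.org/> .
}
"""

def fetch_mapping():
    resp = requests.get("https://query.wikidata.org/sparql",
                        params={"query": QUERY, "format": "json"},
                        headers={"User-Agent": "cdnel-mapping-sketch/0.1"})
    resp.raise_for_status()
    rows = resp.json()["results"]["bindings"]
    # Mitre Attack entity ID -> URL of the corresponding Wikipedia article.
    return {r["attackId"]["value"]: r["article"]["value"] for r in rows}
```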
The modifications used to form the different variants of our framework are listed in Table 6.1. These variants share the same hyperparameters used in fine-tuning and evaluation. Since BLINK is not fine-tuned on mentions from the domain-specific datasets, the corresponding hyperparameters for fine-tuning do not exist, as shown in Table 6.2.
Table 6.1: Modifications used to form different variants of our framework (CDNEL).

Table 6.2: Hyperparameters configurations used in the fine-tuning and evaluation of BLINK and CDNEL variants.

Hyperparameters                    | BLINK (Zero-Shot) | CDNEL Variants (C, CO, CA, COA, D, DA)
Training Batch Size                | -                 | 16
Validation Batch Size              | -                 | 16
Learning Rate                      | -                 | 0.00003
Number of Epochs                   | -                 | 5
Candidate Generation Top K         | 10                | 10
Mention with Context Max Length    | 128               | 128
Entity with Description Max Length | 128               | 128
For each domain, the CDNEL variants are compared to the performance of BLINK with respect to AP@1 and MAP@10 (Table 6.3). They almost consistently perform better than BLINK, as can be seen visually in Figures 6.1 and 6.2. These figures show the AP@1 and MAP@10 reported in Table 6.3. This visualization is done to have a clearer view of the difference in performance of BLINK compared to the CDNEL variants.
Table 6.4: P-values at the significance level (alpha) of 0.05 for the evaluation results of each domain.
BLINK is pre-trained on mentions from Wikipedia and performs well on mentions that should be annotated on Wikipedia KB. However, our goal is to have a model that performs well on mentions that should be annotated on the general-domain KB (Wikipedia) along with other mentions that should be annotated on a domain-specific KB. In this regard, we aim for CDNEL to achieve better or at least similar results to BLINK when evaluation is only done on mentions that should be annotated on the general-domain KB (Wikipedia). We performed the evaluation on mentions from Reddit, and the results are presented in Table 6.5.

Table 6.5: Evaluation on Reddit mentions annotated on the general-domain KB (Wikipedia).

Model | American Football AP@1 | MAP@10 | Doctor Who AP@1 | MAP@10 | Fallout AP@1 | MAP@10 | Final Fantasy AP@1 | MAP@10
BLINK | 0.8479 | 0.8973 | 0.8509 | 0.8985 | 0.8509 | 0.8987 | 0.8494 | 0.8987
C     | 0.8517 | 0.8974 | 0.8170 | 0.8529 | 0.8042 | 0.8452 | 0.8577 | 0.8970
CO    | 0.8517 | 0.8940 | 0.8524 | 0.8859 | 0.8321 | 0.8676 | 0.8592 | 0.8930
CA    | 0.8614 | 0.8964 | 0.8622 | 0.8992 | 0.8592 | 0.8959 | 0.8773 | 0.9076
COA   | 0.8163 | 0.8387 | 0.8773 | 0.9063 | 0.8637 | 0.8859 | 0.8810 | 0.9045

The results of the CDNEL variants are almost comparable to those of BLINK in terms of AP@1 and MAP@10, and the difference in performance is only slight compared to the previous experiment, where the evaluation was performed using separate mentions annotated on the domain-specific KBs. This slight discrepancy can be seen in Figures 6.3 and 6.4, which visualize the results from Table 6.5. They show the performance of the CDNEL variants compared to the BLINK model in terms of AP@1 and MAP@10.
Table 6.7: Mentions with context correctly linked by CDNEL variant (CA) and wrongly linked by BLINK.

This table shows examples of mentions extracted from a couple of domains and the result of the annotations by BLINK compared to the best-performing CDNEL variant (CA). The correct annotations are in bold.
The assessment shows that the fine-tuned model variant CA performs well on mentions that should be annotated on a domain-specific KB, e.g., Final Fantasy, as it was able to indicate to which exact version of Final Fantasy the mention "Acrophies" should belong. In addition, it demonstrates strong capabilities in linking mentions that should be annotated on the general-domain KB (Wikipedia), such as distinguishing between
Table 6.8 shows the results for the C variants of our framework (CDNEL) compared to BLINK. The results show that the fine-tuned model variants CO and COA, which are further fine-tuned on the overlapping entities between the domain-specific KB and the general-domain KB, perform best in terms of MRR and Average Cosine Similarity, respectively. However, the performance is comparable to that of BLINK only for the domain American Football. This comparability can be attributed to the very large number of overlapping entities of American Football compared to the other domains.

Table 6.9: P-values at the significance level (alpha) of 0.05 for the intrinsic evaluation results of each domain.

Significance | American Football MRR | ACS | Doctor Who MRR | ACS       | Fallout MRR | ACS       | Final Fantasy MRR | ACS
BLINK/COA    | 0.1922                | 1.0 | 0.3523         | 2.4999e-5 | 3.3244e-5   | 3.3244e-5 | 6.0529e-5         | 6.0529e-5
This test is performed on the Zeshel dataset by mixing eight domains into a single mixed-domain KB. Mentions of these domains are mixed and split into training, validation, and test sets. These domains and the number of their relative entities are listed in Table 6.10. Since the mentions of these domains are mixed, we only report the total number of mentions in each of the training, validation, and evaluation sets.

We did this mixing because it is time-consuming to run the experiment separately for each domain. For the same reason, and because of the complexity of pairwise comparisons, we did not experiment with extracting the overlapping entities between the total number of entities in the mixed-domain KB (159689) and Wikipedia KB (5903538). In addition, we aim only to test the D variants and whether fine-tuning only the candidate encoder can contribute to the Entity Linking task compared with the C variants.

Table 6.10: Different domains are mixed and used in fine-tuning and evaluating the D variants of CDNEL.

Table 6.11: Evaluation on mentions annotated on the mixed-domain KB and Reddit mentions annotated on the general-domain KB (Wikipedia).

The evaluation dataset contains mentions from the mixed domains annotated on the mixed-domain KB and other mentions from Reddit annotated on the general-domain KB (Wikipedia). The evaluation is done separately on them, as shown in Table 6.11. The results show that CA has almost the highest scores compared to BLINK and the other variants. This result indicates that the best setting is fine-tuning both the context and the candidate encoders. However, DA outperforms CA only with respect to MAP@10 when evaluated on Reddit mentions annotated on the general-domain KB. This observation may indicate that DA might contribute to reducing the overfitting of the fine-tuned model to the mixed-domain KB. However, we recommend continuing experiments using the C variants of CDNEL.

6.3.2 Fine-tuning on Mentions Annotated on both Domains

In this experiment, we test the scenario where mentions annotated on the domain-specific KB and the general-domain KB appear in the same document. In our case, these mentions are extracted from Cybersecurity documents and used in fine-tuning. Some of these mentions are annotated on the domain-specific KB (Cybersecurity), while others are annotated on the general-domain KB (Wikipedia). We did not use data augmentation in this setting. We tested CDNEL variants C and CO, which do not include data augmentation in their fine-tuning.

The evaluation dataset contains separate mentions from Cybersecurity documents annotated on both domains. The evaluation is performed separately for each domain (Cybersecurity Mentions, Wikipedia Mentions, Combined Mentions), as shown in Table 6.12.

Table 6.12: Evaluation on mentions from the same document annotated on the domain-specific KB (Cybersecurity) and the general-domain KB (Wikipedia).

The results show that the difference in the evaluation performance between BLINK and CDNEL is not consistent. We cannot state whether specifying mentions annotated on both domains for fine-tuning gives any gains. This inconsistency can be due to the small number of input samples used for fine-tuning and evaluation. The training set we use has only 646 annotated mentions on both KBs. We leave this for further experimentation using larger datasets of mentions from the same document that are annotated on both domains.

Conclusion

In this research project, the main goal of our framework (CDNEL) is to have a single system that allows simultaneous linking to more than one KB.
In our case, these are the general-domain KB (Wikipedia) and the domain-specific KB, such as Final Fantasy and Cybersecurity. This system is built by fine-tuning pre-trained language models, e.g., BERT. Knowledge learned from these models is obtained in the form of context-aware embeddings for the textual context of the mention and the textual descriptions of the entities. In addition, we bring in knowledge from the domain-specific datasets by fine-tuning these language models using four domains from the Zeshel dataset.

We introduce different model variants of our framework (CDNEL). The main experiments are performed using the C variants, which have in common fine-tuning the Candidate Generation module on mentions annotated on the domain-specific KB. The other model variants attempt to use additional knowledge from other types of datasets for fine-tuning. For example, we do further fine-tuning on the overlapping entities between the general-domain KB and the domain-specific KB, represented by (O). In addition, we use data augmentation by including additional mentions in the fine-tuning that are extracted from Reddit and annotated on the general-domain KB (Wikipedia), represented by (A).

We propose to construct overlapping entities between the general-domain KB and the domain-specific KB using specialized models in semantic text similarity. In addition, we propose to augment the data with mentions from Reddit annotated on the general-domain KB (Wikipedia) to reduce overfitting of the fine-tuned models to the domain-specific dataset.

Briefly, the contributions of this thesis work are summarized in two points as follows:-

• 1) Extending a New Domain: Merging general-domain and domain-specific KBs into a single representation space for simultaneous linking by fine-tuning on domain-specific mentions and using data augmentation to reduce overfitting.
• 2) Handling Overlapping Entities: Aligning them with similar representations in a single representation space by explicitly learning to maximize their Dot Product similarity.

Our framework (CDNEL) improves over the base model BLINK when evaluated against mentions annotated on the domain-specific KB. This evaluation is performed on four domain-specific KBs. In particular, CA is the best-performing model variant, which is fine-tuned on mentions with context annotated on the domain-specific KB as well as mentions with context from Reddit annotated on the general-domain KB (Wikipedia) for data augmentation. It improves upon BLINK for all domains and achieves an average gain of 9.5% with respect to AP@1. The difference in results between the two systems is statistically significant.

Another evaluation is performed using mentions from Reddit annotated on the general-domain KB. This experiment aims to test the overfitting of the CDNEL variants to the domain-specific KBs. The results show that CA is either similar to or performs better than BLINK for all domains, proving the comparability of its performance with BLINK, which performs well by default on general-domain mentions. On the other hand, the model variant COA, which is further fine-tuned on the overlapping entities between the general-domain KB and the domain-specific KB, has a representation space of the overlapping entities in which they are the most similar with respect to Cosine Similarity.
6.3 Analysis
This section presents experiments conducted to test the D variants of our framework
(CDNEL). In addition, we experiment with testing two of the C variants in a different
document setting.
6.3.1 Fine-tuning on Mentions Annotated on a Mixed-domain KB
This experiment tests fine-tuning only the candidate encoder versus fine-tuning both
the context and candidate encoders. This way of fine-tuning can be interesting because
preventing the parameters of the context encoder from changing can help reduce the
overfitting of the model to the domain-specific KB.
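In implementation terms, the D variants amount to freezing the context encoder before fine-tuning; a minimal PyTorch-style sketch follows (the attribute names are assumptions about how a bi-encoder might be organized, not BLINK's actual API):

```python
def freeze_context_encoder(biencoder):
    """D variants: only the candidate (entity) encoder receives gradient
    updates; the context encoder keeps its pre-trained behaviour."""
    for param in biencoder.context_encoder.parameters():
        param.requires_grad = False

# Then pass only trainable parameters to the optimizer, e.g.:
# optimizer = torch.optim.Adam(p for p in biencoder.parameters() if p.requires_grad)
```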
List of Tables

Table 4.1 Different Fandom domains used in the fine-tuning and evaluation of the proposed approach
Table 4.2 Average Cosine Similarity measures for the fuzzy overlapping entities for each domain
Table 4.3 Filtered similar overlapping entities for each domain in Fandom
Table 4.4 Cybersecurity domain used in the fine-tuning and evaluation of the proposed approach
Table 4.5 Reddit dataset used in the fine-tuning and evaluation of the proposed approach
Table 6.1 Modifications used to form different variants of our framework (CDNEL)
Table 6.2 Hyperparameters configurations used in the fine-tuning and evaluation of BLINK and CDNEL variants
Table 6.3 Evaluation on mentions annotated on each domain-specific KB.
Table 6.4 P-values at the significance level (alpha) of 0.05 for the evaluation results of each domain.
Table 6.5 Evaluation on Reddit mentions annotated on the general-domain KB (Wikipedia).
Table 6.6 P-values at the significance level (alpha) of 0.05 for the evaluation results of each domain.
Table 6.7 Mentions with context correctly linked by CDNEL variant (CA) and wrongly linked by BLINK.
Table 6.8 Intrinsic evaluation of overlapping entities between each domain-specific KB and Wikipedia KB.
Table 6.9 P-values at the significance level (alpha) of 0.05 for the intrinsic evaluation results of each domain.
Table 6.10 Different domains are mixed and used in fine-tuning and evaluating the D variants of CDNEL.
Table 6.11 Evaluation on mentions annotated on the mixed-domain KB and Reddit mentions annotated on the general-domain KB (Wikipedia).
Table 6.12 Evaluation on mentions from the same document annotated on the domain-specific KB (Cybersecurity) and the general-domain KB (Wikipedia).
1 Fandom, https://www.fandom.com.
2 Mitre Attack, https://attack.mitre.org/.
3 Wikidata Query Service, https://query.wikidata.org/.
Acknowledgements
References

[1] Ledell Wu, Fabio Petroni, Martin Josifoski, Sebastian Riedel, and Luke Zettlemoyer. Scalable zero-shot entity linking with dense entity retrieval. In Proceedings of EMNLP 2020, pages 6397-6407. Association for Computational Linguistics, 2020. doi: 10.18653/v1/2020.emnlp-main.519.
[2] Lajanugen Logeswaran, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, Jacob Devlin, and Honglak Lee. Zero-shot entity linking by reading entity descriptions. In Proceedings of ACL 2019, pages 3449-3460. Association for Computational Linguistics, 2019. doi: 10.18653/v1/p19-1335.
[3] Nicholas Botzer, Yifan Ding, and Tim Weninger. Reddit entity linking dataset. Information Processing & Management, 58(3):102479, 2021. doi: 10.1016/j.ipm.2020.102479.
[4] Özge Sevgili, Artem Shelmanov, Mikhail Y. Arkhipov, Alexander Panchenko, and Chris Biemann. Neural entity linking: A survey of models based on deep learning. CoRR, abs/2006.00575, 2020.
[5] Yogarshi Vyas and Miguel Ballesteros. Linking entities to unseen knowledge bases with arbitrary schemas. In Proceedings of NAACL-HLT 2021, pages 834-844. Association for Computational Linguistics, 2021. doi: 10.18653/v1/2021.naacl-main.65.
[6] Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. Robust disambiguation of named entities in text. In Proceedings of EMNLP 2011, pages 782-792. ACL, 2011.
[7] Rada Mihalcea and Andras Csomai. Wikify!: linking documents to encyclopedic knowledge. In Proceedings of CIKM 2007, pages 233-242. ACM, 2007. doi: 10.1145/1321440.1321475.
[8] Takuo Hamaguchi, Hidekazu Oiwa, Masashi Shimbo, and Yuji Matsumoto. Knowledge transfer for out-of-knowledge-base entities: A graph neural network approach. In Proceedings of IJCAI 2017, pages 1802-1808. ijcai.org, 2017. doi: 10.24963/ijcai.2017/250.
[9] Pablo N. Mendes, Max Jakob, Andrés García-Silva, and Christian Bizer. DBpedia Spotlight: shedding light on the web of documents. In Proceedings of I-SEMANTICS 2011, pages 1-8. ACM, 2011. doi: 10.1145/2063518.2063519.
[10] Francesco Piccinno and Paolo Ferragina. From TagMe to WAT: a new entity annotator. In Proceedings of ERD'14, pages 55-62. ACM, 2014. doi: 10.1145/2633211.2634350.
[11] Andrea Moro, Alessandro Raganato, and Roberto Navigli. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231-244, 2014.
[12] Octavian-Eugen Ganea, Marina Ganea, Aurélien Lucchi, Carsten Eickhoff, and Thomas Hofmann. Probabilistic bag-of-hyperlinks model for entity linking. In Proceedings of WWW 2016, pages 927-938. ACM, 2016. doi: 10.1145/2872427.2882988.
[13] Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. Overview of the TAC 2010 knowledge base population track. In Third Text Analysis Conference (TAC 2010), volume 3, 2010.
[14] Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. KORE: keyphrase overlap relatedness for entity disambiguation. In Proceedings of CIKM'12, pages 545-554. ACM, 2012. doi: 10.1145/2396761.2396832.
[15] Lev-Arie Ratinov, Dan Roth, Doug Downey, and Mike Anderson. Local and global algorithms for disambiguation to Wikipedia. In Proceedings of ACL 2011, pages 1375-1384. The Association for Computer Linguistics, 2011.
[16] Yotam Eshel, Noam Cohen, Kira Radinsky, Shaul Markovitch, Ikuya Yamada, and Omer Levy. Named entity disambiguation for noisy text. In Proceedings of CoNLL 2017, pages 58-68. Association for Computational Linguistics, 2017. doi: 10.18653/v1/K17-1008.
[17] Nikolaos Kolitsas, Octavian-Eugen Ganea, and Thomas Hofmann. End-to-end neural entity linking. In Proceedings of CoNLL 2018, pages 519-529. Association for Computational Linguistics, 2018. doi: 10.18653/v1/k18-1050.
[18] Tomás Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26 (NIPS 2013), pages 3111-3119, 2013.
[19] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: pre-training of deep bidirectional transformers for language understanding. In Proceedings of NAACL-HLT 2019, pages 4171-4186. Association for Computational Linguistics, 2019. doi: 10.18653/v1/n19-1423.
[20] Samuel Broscheit. Investigating entity knowledge in BERT with simple neural end-to-end entity linking. In Proceedings of CoNLL 2019, pages 677-685. Association for Computational Linguistics, 2019. doi: 10.18653/v1/K19-1063.
[21] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30 (NIPS 2017), pages 5998-6008, 2017.
[22] Nils Reimers and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of EMNLP-IJCNLP 2019, pages 3980-3990. Association for Computational Linguistics, 2019. doi: 10.18653/v1/D19-1410.
[23] Octavian-Eugen Ganea and Thomas Hofmann. Deep joint entity disambiguation with local neural attention. In Proceedings of EMNLP 2017, pages 2619-2629. Association for Computational Linguistics, 2017. doi: 10.18653/v1/d17-1277.
[24] Yasumasa Onoe and Greg Durrett. Fine-grained entity typing for domain independent entity linking. In Proceedings of AAAI 2020, pages 8576-8583. AAAI Press, 2020.
[25] Matthew E. Peters, Mark Neumann, Robert L. Logan IV, Roy Schwartz, Vidur Joshi, Sameer Singh, and Noah A. Smith. Knowledge enhanced contextual word representations. In Proceedings of EMNLP-IJCNLP 2019, pages 43-54. Association for Computational Linguistics, 2019. doi: 10.18653/v1/D19-1005.
[26] Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. Poly-encoders: Architectures and pre-training strategies for fast and accurate multi-sentence scoring. In Proceedings of ICLR 2020. OpenReview.net, 2020.
[27] Nitish Gupta, Sameer Singh, and Dan Roth. Entity linking via joint encoding of types, descriptions, and context. In Proceedings of EMNLP 2017, pages 2681-2690. Association for Computational Linguistics, 2017. doi: 10.18653/v1/d17-1284.
[28] Daniel Gillick, Sayali Kulkarni, Larry Lansing, Alessandro Presta, Jason Baldridge, Eugene Ie, and Diego García-Olano. Learning dense representations for entity retrieval. In Proceedings of CoNLL 2019, pages 528-537. Association for Computational Linguistics, 2019. doi: 10.18653/v1/K19-1049.
[29] Daniel M. Cer, Mona T. Diab, Eneko Agirre, Iñigo Lopez-Gazpio, and Lucia Specia. SemEval-2017 task 1: Semantic textual similarity - multilingual and cross-lingual focused evaluation. CoRR, abs/1708.00055, 2017.
[30] Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. The INCEpTION platform: Machine-assisted and knowledge-oriented interactive annotation. In COLING 2018 System Demonstrations, pages 5-9. Association for Computational Linguistics, 2018.
[31] Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of ACL 2018, pages 1383-1392. Association for Computational Linguistics, 2018. doi: 10.18653/v1/P18-1128.
[32] Mark D. Smucker, James Allan, and Ben Carterette. A comparison of statistical significance tests for information retrieval evaluation. In Proceedings of CIKM 2007, pages 623-632. ACM, 2007. doi: 10.1145/1321440.1321528.
arXiv:2204.07994 (doi: 10.48550/arxiv.2204.07994; PDF: https://arxiv.org/pdf/2204.07994v1.pdf)
On Effectively Learning of Knowledge in Continual Pre-training
Cunxiang Wang wangcunxiang@westlake.edu.cn
Fuli Luo
Damo Academy
Alibaba Group
Yanyang Li
The Chinese University of Hong Kong
Runxin Xu
Peking University
Fei Huang
Damo Academy
Alibaba Group
Yue Zhang zhangyue@westlake.edu.cn
Zhejiang University
School of Engineering
Westlake University
Institute of Advanced Technology
Westlake Institute for Advanced Study
China
On Effectively Learning of Knowledge in Continual Pre-training
Pre-trained language models (PLMs) like BERT have made significant progress in various downstream NLP tasks. However, by asking models to do cloze-style tests, recent work finds that PLMs fall short in acquiring knowledge from unstructured text. To understand the internal behaviour of PLMs in retrieving knowledge, we first define knowledge-baring (K-B) tokens and knowledge-free (K-F) tokens for unstructured text and ask professional annotators to label some samples manually. Then, we find that PLMs are more likely to give wrong predictions on K-B tokens and to pay less attention to those tokens inside the self-attention module. Based on these observations, we develop two solutions to help the model learn more knowledge from unstructured text in a fully self-supervised manner. Experiments on knowledge-intensive tasks show the effectiveness of the proposed methods. To the best of our knowledge, we are the first to explore fully self-supervised learning of knowledge in continual pre-training. 1
Introduction
Pre-trained language models (PLMs), such as BERT (Devlin et al., 2019) and GPT (Radford et al., 2018), have greatly improved the performance of many NLP tasks in the past few years. Pre-training has been regarded as a promising way of acquiring common knowledge from unstructured plain text. However, how to make PLMs learn more knowledge is still an unsolved problem (Petroni et al., 2019), especially for tasks which need explicit usage of knowledge. There are two common ways to enhance PLMs with more knowledge: one introduces structured knowledge bases (Zhang et al., 2019; Wang et al., 2021b), while the other uses unstructured text. Compared with structured knowledge bases, unstructured text is easier to acquire and construct. In addition, with its freer format, unstructured text can better express complex knowledge. We focus on enhancing the ability of PLMs to acquire knowledge from unstructured text.

* Equal contribution. † The corresponding author. 1 The codes and data will be released upon acceptance.

Figure 1 (example): "The chemist Gay-Lussac discovered that in water hydrogen was present in twice the amount of oxygen." (knowledge-baring tokens vs. knowledge-free tokens)

First of all, we explore which tokens in a text embody factual knowledge in a more fine-grained (i.e., token-level) manner. This not only helps us better understand the model's behaviour in memorizing and utilizing knowledge, but also motivates us to design methods for better acquiring knowledge. In particular, for a piece of text, the tokens which are essential for humans to understand the text's factual knowledge are considered knowledge-baring; otherwise, they are knowledge-free. One example is presented in Figure 1.
We analyze PLMs' behaviour regarding knowledge by manually annotating whether each token in a set of samples is knowledge-baring. As shown in Figure 2 (a), we find that PLMs perform worse on knowledge-baring tokens in the cloze-style test. In addition, as shown in Figure 2 (b), the transformer-based model tends to pay less attention to knowledge-baring tokens.
Intuitively, to better acquire knowledge from unstructured text, the model should mask and recover more knowledge-baring words during training and be less influenced by knowledge-free words. To this end, based on our observations, we propose two solutions, at the mask-policy and attention levels of the PLM: (1) At the mask-policy level, we have two methods. The first performs random masking on the training corpus before each training iteration and finds out which masks the model fails to predict correctly; these incorrectly predicted tokens are regarded as knowledge-baring tokens for masking in that training iteration. The second feeds the training data forward before each training iteration and uses the attention scores to determine which tokens are more likely to be knowledge-baring, and masks those. (2) At the attention level, we adopt a visibility matrix to prevent knowledge-free tokens from affecting other tokens during self-attention.
Extensive experiments are conducted on three tasks. Specifically, to check whether the model has learned the knowledge from unstructured text, we let the model perform the LAMA Probing task, a standard cloze-style test. To test whether the model can utilize the learned knowledge, we also introduce two probing tasks, namely Closed-book QA and Knowledge Graph Reasoning. Note that there is no labelled data for fine-tuning in the three tasks; they are only used to probe how much knowledge the model has learned from unstructured text. Besides, the training corpus contains all the knowledge needed for evaluation and testing. Test examples of the three tasks are presented in Table 4. Experiments on the three tasks show the effectiveness of the proposed methods, achieving up to 6.1 and 5.5 points of absolute improvement on the two datasets of the LAMA Probing task, up to 6.7 points of absolute improvement on the Closed-book QA task and 2.6 points of absolute improvement on the KG Reasoning task.
To our knowledge, we are the first to explore the relationship between PLMs' behaviour and knowledge at the token level, and the first to research fully self-supervised learning of knowledge in continual pre-training.
Probing the Behaviour of PLMs in Retrieving Knowledge
To better probe how PLMs learn knowledge from unstructured text, we start by identifying the type and role of each word. Inspired by knowledge graphs as well as our observations, we find that the knowledge in a sentence is largely embodied by a few keywords. Even if the remaining words are deleted, we can still recover the factual knowledge the sentence conveys.
• knowledge-baring: for a given text, if the deletion of a token makes it relatively hard for humans to correctly obtain the factual knowledge contained in the text, we take the token as knowledge-baring;

• knowledge-free: for a given text, if the deletion of a token still allows humans to obtain the factual knowledge contained in the text correctly and relatively easily, we take the token as knowledge-free.
One example is shown in Figure 1. Note that knowledge-free tokens are not totally free of knowledge; they certainly carry some kinds of knowledge, such as linguistic and semantic knowledge. They are simply less important for the factual knowledge that we emphasize in this work.
We randomly sample 100 cases each from the LAMA SQuAD dataset and the LAMA Google RE dataset (Petroni et al., 2019), and then use the tokenizer of RoBERTa to tokenize each sentence. We ask three annotators, all Ph.D. students, to manually label each token as knowledge-baring or knowledge-free. The inter-annotator agreement for the samples of LAMA SQuAD/LAMA Google RE is 0.920/0.938, respectively. The statistics of the labelled tokens are shown in Table 1.
We also use the Stanford CoreNLP toolkit (Manning et al., 2014) to conduct part-of-speech tagging analysis on those samples. We find that most knowledge-baring tokens are nouns (64.2%), verbs (11.6%), numbers (9.2%) and adjectives (6.5%), while most knowledge-free tokens are prepositions or subordinating conjunctions (25.1%), commas and punctuation (23.6%), determiners (15.2%) and verbs (11.7%), over the two sets of samples. Detailed results are given in Appendix Table 10. From these results, we can see that we do not limit the scope of knowledge to entities or nouns; we expand it to nouns, verbs, numbers, adjectives, etc. To better understand the model's behaviour in comprehending knowledge, we mainly explore two questions: (1) Does the model perform better on knowledge-baring content or on knowledge-free content? (2) Can the model's attention scores reveal its association with knowledge?
Accuracy on Knowledge-Baring and Knowledge-Free Tokens
To investigate the first question, we mask each token of the sentences in both datasets in turn. For example, if a sentence contains 10 separate tokens, we derive 10 sentences, each with "<mask>" on a different token. If one word is tokenized into several tokens, we mask those tokens together. Details are shown in Table 8 (a) in the Appendix. Then, we ask the model to predict the mask(s) in the processed sentences.
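A minimal sketch of this probe construction, assuming a HuggingFace-style tokenizer; for brevity it masks one sub-token at a time, whereas the paper masks all sub-tokens of a word together. The helper name and model choice are ours, not the authors' released code.

```python
from transformers import RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")

def make_cloze_probes(sentence):
    """Return one probe per token, with that token replaced by <mask>."""
    token_ids = tokenizer(sentence, add_special_tokens=False)["input_ids"]
    probes = []
    for i, gold_id in enumerate(token_ids):
        masked = list(token_ids)
        masked[i] = tokenizer.mask_token_id
        probes.append((tokenizer.decode(masked), gold_id))
    return probes

for probe, gold_id in make_cloze_probes("The capital of the Ottoman empire was Istanbul."):
    print(probe, "->", tokenizer.decode([gold_id]))
```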
To better understand the influence of pre-training on the model's learning of knowledge, we use both the original PLM and a continued pre-trained model to predict on the processed sentences. For continual pre-training, we first find the Wikipedia snippets from which the sentences come and then train the model with the pre-training objective on those snippets for 100 iterations.
The performance of RoBERTa and of continued pre-trained RoBERTa on the two types of tokens in the two datasets is presented in Table 2. From the results, we find that the model performs much worse on knowledge-baring tokens than on knowledge-free tokens: 14.9% versus 55.1% on SQuAD and 38.6% versus 83.4% on Google RE. Even when the model is continually pre-trained, the accuracy on knowledge-baring tokens is still lower than that on K-F tokens: 39.2% versus 82.8% on SQuAD and 67.2% versus 93.5% on Google RE. The results show that it is more difficult for models to learn factual knowledge from unstructured text than non-knowledge content.
Attention on Knowledge-Baring and Knowledge-Free Tokens
For the second question, we feed the sentences forward through the model without masking them. For each token, we calculate the sum of the attention weights it receives from all tokens, summed over all layers and heads. The received attention (RcAtt) weight of token t in the model is
$$\mathrm{RcAtt}_t = \sum_{i=1}^{L}\sum_{j=1}^{H}\sum_{k=1}^{N} att_{ijkt} \quad (1)$$
where L is the number of layers, H the number of heads and N the number of tokens; $att_{ijkt}$ is the attention score from token k to token t in head j of layer i. We sort all the tokens of each sentence by their RcAtt scores and divide them into ten 10-percent segments. Next, we calculate the proportion of knowledge-baring tokens in each segment. As for the previous question, we test not only the original PLM but also the continued pre-trained model.
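As a rough illustration, the received-attention score of Eq. (1) can be computed from the attention maps a HuggingFace-style encoder returns when called with output_attentions=True; the model name, example sentence and decile bucketing below are illustrative, not the authors' released code.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("roberta-large")

inputs = tokenizer("Kenya ranks low on the CPI scale.", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_attentions=True)

# out.attentions is a tuple of L tensors of shape (batch, H, N, N), where
# entry [b, h, k, t] is the attention that query token k pays to key token t.
att = torch.stack(out.attentions)            # (L, batch, H, N, N)
rc_att = att.sum(dim=(0, 2, 3)).squeeze(0)   # sum over layers, heads, queries -> (N,)

# Rank tokens by received attention and bucket them into ten segments.
order = torch.argsort(rc_att)                # least-attended tokens first
deciles = torch.chunk(order, 10)
```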
The results are presented in Table 3. We can see that the attention scores strongly correlate with whether the tokens are knowledge-baring: K-B tokens are more likely to receive less attention, while K-F tokens are more likely to receive more attention. When the model is continually pre-trained, this phenomenon still exists, but at a slightly reduced level.
Conclusions. Based on the above two probing experiments, we can conclude that: (1) PLMs perform worse on knowledge-baring words (i.e., with higher prediction error); (2) knowledge-baring words are more likely to receive less attention than knowledge-free ones.

Table 3: The relationship between the knowledge-baring proportion and the level of received attention. The header X-Y% indicates the tokens ranking in the bottom X-Y% of received attention; for example, 0-10% means the tokens that receive the least attention. Each cell gives the K-B proportion of those tokens. RoBERTa-Cont is the continued pre-trained RoBERTa. The last column is the Spearman's rank correlation coefficient between the level of received attention and the K-B proportion. We can see that tokens receiving more attention are less likely to be K-B.
Methods
In this section, we propose two methods, based on the conclusions of the above probing experiments, to make PLMs learn more knowledge from unstructured text.
Backbone Model
We choose the RoBERTa model as our baseline model, and the original pre-training objective of RoBERTa as our baseline objective. RoBERTa is built on the encoder of the Transformer model (Vaswani et al., 2017). Each layer of RoBERTa consists of a multi-head self-attention layer and a position-wise feed-forward network. For the i-th layer, the self-attention output of the j-th head is
$$A_j = \mathrm{softmax}\!\left(\frac{Q_j K_j^{T}}{\sqrt{d_k}}\right)V_j \quad (2)$$

where $d_k$ is the dimension of the Q, K and V vectors.
Mask Policy
Originally, RoBERTa randomly chooses tokens from the input text to mask. However, recent work (Wang et al., 2021a) shows that this is inefficient for memorizing knowledge. We therefore aim to let the model focus on learning knowledge-baring content. Because we do not provide any label information to the model during training, the model needs to find the K-B tokens in the input text without any supervision. From Section 2, we know that whether a token is K-B is related to whether the model can accurately predict it and to the attention weight it receives. Hence we provide two corresponding selective mask policies for the model to find and mask K-B tokens. Note that the two selective mask policies are mutually exclusive, so we compare their performance rather than combine them.
RoBERTa-Sel-I. Since the model performs much worse on knowledge-baring tokens than on knowledge-free tokens, we can use this property to find K-B tokens in unstructured text. Before each training iteration, we randomly mask some tokens of the training text and predict the masks; we then Select the tokens that are Inaccurately predicted and treat them as K-B tokens. Besides finding K-B tokens, this policy also helps the model avoid re-learning tokens it has already learned.
RoBERTa-Sel-A. As knowledge-baring tokens are more likely to receive less attention, we can make use of the attention score each token receives. Before each training iteration, we run a forward pass of the model on the unmasked training text and calculate each token's received attention weight, as in Eq 1. We then Select the tokens that receive the least Attention and treat them as K-B tokens.
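The following sketch summarizes the two selective policies side by side; `predict_masked` and `received_attention` are assumed helper callables (e.g., built from the earlier sketches), and the sampling and selection ratios are illustrative, since the paper does not fix them at this point.

```python
import random

def select_kb_candidates(token_ids, policy, predict_masked, received_attention,
                         probe_ratio=0.3, bottom_ratio=0.2):
    """Return token positions treated as knowledge-baring under Sel-I or Sel-A."""
    n = len(token_ids)
    if policy == "Sel-I":
        # Randomly mask some positions, predict them, and keep the misses.
        probe = random.sample(range(n), max(1, int(n * probe_ratio)))
        return [i for i in probe if predict_masked(token_ids, i) != token_ids[i]]
    # "Sel-A": keep the tokens that receive the least attention.
    scores = received_attention(token_ids)
    k = max(1, int(n * bottom_ratio))
    return sorted(range(n), key=lambda i: scores[i])[:k]
```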
After finding the knowledge-baring tokens, we first randomly mask some of them and then randomly mask some of the remaining tokens. For example, with a first-phase masked language modelling (MLM) probability of 15% and a second-phase MLM probability of 10%, if the text has 100 tokens and we find 20 K-B tokens using one of our methods, we first mask 100×15% = 15 tokens from the K-B tokens and then mask 100×10% = 10 tokens from the remaining 85 tokens. The masks of the two phases are combined for pre-training. Salient Span Masking (SSM) (Guu et al., 2020) uses a trained NER tagger and a regular expression to identify named entities and dates in the raw corpus; these salient spans are selected and masked within a sentence for pre-training. We also conduct SSM experiments on our dataset as a comparison. Note, however, that the SSM policy is not fully self-supervised, because it requires external labelled data to train a NER tagger and prior knowledge to design the expression, while our methods are free of any external information and rely only on the models themselves.
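A small sketch of the two-phase masking arithmetic just described (15 masks from the K-B tokens, then 10 from the remaining 85, for a 100-token text). Uniform random sampling within each phase, and the clipping for short K-B lists, are our assumptions.

```python
import random

def two_phase_mask(token_ids, kb_indices, mask_id, p1=0.15, p2=0.10):
    """With 100 tokens and 20 K-B tokens: phase one masks 15 tokens from the
    K-B set, phase two masks 10 tokens from the remaining 85."""
    n = len(token_ids)
    n1 = min(int(n * p1), len(kb_indices))
    phase1 = set(random.sample(list(kb_indices), n1))
    rest = [i for i in range(n) if i not in phase1]
    phase2 = set(random.sample(rest, min(int(n * p2), len(rest))))
    chosen = phase1 | phase2
    return [mask_id if i in chosen else t for i, t in enumerate(token_ids)], chosen
```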
Visibility Matrix
In addition to making the model pay more attention to K-B tokens during continual pre-training, we also consider making the model pay less attention to knowledge-free tokens. To achieve this goal, we adopt the concept of a visibility matrix from Dong et al. (2019) and Bao et al. (2020). Using the visibility matrix, we prevent tokens that harm knowledge memorization from influencing other tokens. Figure 3 illustrates the visibility matrix. During the self-attention process, if token q can attend to token p, in other words, if the hidden state of token q can be influenced by the hidden state of token p, we consider token p visible to token q; otherwise, it is invisible. After adding the visibility-matrix mechanism to the self-attention module, the self-attention output of head j in layer i from Eq. (2) becomes

$$A_j = \mathrm{softmax}\!\left(\frac{Q_j K_j^{T}}{\sqrt{d_k}} + M^{*}\right)V_j \quad (3)$$

where $M^{*} \in \mathbb{R}^{n \times n}$, with $M^{*}_{qp} = 0$ if token p is visible to token q and $M^{*}_{qp} = -\infty$ if it is invisible, so that invisible tokens receive zero attention weight after the softmax.
In pilot experiments where we made manually chosen irrelevant tokens invisible to other tokens, we found this effective in boosting performance on the three tasks. We therefore designed an algorithm to detect the tokens that hurt the model's performance. Since the training data carries no labels, we construct a special dataset from the training data to find the "harmful" tokens. The algorithm is presented in Algorithm 1: each time, we make one token invisible and check whether doing so improves the evaluation performance on the special dataset.

Algorithm 1: Detecting "harmful" tokens.
Special dataset construction:
(1) Forward RoBERTa on the training data.
(2) Select the tokens which receive the least 10% of attention.
(3) Mask the whole words which contain those tokens in the training corpus.
(4) The masked training set serves as the special dataset.
Initialization:
(1) Set a positive real-number threshold τ.
(2) Tokenize the special validation data and collect all tokens that appear more than τ times into a set T.
(3) Add the special tokens "<s>", "</s>", "<pad>" and "<mask>" to the set T.
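An illustrative sketch of the search loop behind Algorithm 1: candidate tokens from T are tried one at a time and kept invisible if the score on the special dataset improves. The greedy accumulation and the `evaluate` helper are our assumptions; the paper only states that one token is tested at a time.

```python
def find_harmful_tokens(candidates, evaluate):
    """Greedy search: keep a token invisible only if it improves the score."""
    invisible, best = set(), evaluate(set())
    for tok in candidates:  # tokens from T, appearing more than tau times
        score = evaluate(invisible | {tok})
        if score > best:
            invisible.add(tok)
            best = score
    return invisible
```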
Note that there are three main differences between the proposed visibility matrix and the mask matrix used in recent works (Dong et al., 2019; Bao et al., 2020): 1) the visibility matrix is independent of the input masks, while the mask matrix only makes the masked tokens invisible; 2) we design an automated algorithm to search for invisible tokens rather than relying on random masking; 3) the invisible tokens can still see themselves, while the tokens in the mask matrix cannot.

Tasks

We adopt three tasks to evaluate the usage of knowledge from unstructured text in this work: LAMA Probing, Closed-book QA, and Knowledge Graph (KG) Reasoning. Examples of the three tasks are presented in Table 4. These tasks differ slightly from ordinary machine learning tasks, as the training data and the evaluation/test data have different formats.
We use the LAnguage Model Analysis (LAMA) Probing task (Petroni et al., 2019) to directly evaluate how much knowledge a PLM can obtain from unstructured text. For each example, the training case contains a passage and the validation/test case contains a cloze-style query and answer pair. The model needs to learn knowledge from the training passages and use the knowledge to fill the "<mask>" tokens in the validation/test cloze-style sentences.
We use the Closed-book QA task and the Knowledge Graph Reasoning task to test whether the PLM can utilize its learned knowledge in downstream tasks.
For each sample in the Closed-book QA task, the training case contains a sentence, while the validation/test case contains a cloze-style QA pair whose question has one or several "<mask>" tokens after the "?". The knowledge needed for the validation/test questions is in the training sentences. The model needs to learn knowledge from the training sentences and use it to fill the "<mask>" tokens in the validation/test cloze-style questions. For each sample in the KG Reasoning task, the training case contains a sentence, while the validation/test case contains a cloze-style triple whose object is replaced with one or several "<mask>" tokens. Again, the needed knowledge is in the training sentences, and the model must use it to fill the "<mask>" tokens in the cloze-style triples. To help the model adapt to the cloze-style triple format, for 20% of the training sentences we append the corresponding triple to the end of the sentence and remove that triple from the validation/test set.
Data. The task data originate from publicly released datasets. For the LAMA SQuAD dataset, we link the probes to the SQuAD 1.1 dataset (Rajpurkar et al., 2016) and find the related questions and passages of each case. We then use the passages as training data and the probes as validation/test data to construct the dataset for the LAMA Probing task. Moreover, we use the recovered probing sentences as training data and the questions concatenated with "<mask>" as validation/test data for the Closed-book QA task. For the LAMA Google RE dataset, we use the snippet of each case as training data and the probe sentences as validation/test data for the LAMA Probing task. Furthermore, we use the passages as training data and the <subject, relation, object> triples as validation/test data for the KG Reasoning task.
Note that for the three tasks, all needed knowledge of validation and test questions can be directly extracted from the training set.
For each task and dataset, we use Algorithm 1 to find "harmful" tokens automatically. In practice, we use the original RoBERTa-large model or the continued pre-trained RoBERTa-large model to evaluate. After finding those tokens, we make them invisible to all other tokens during training, validation and testing periods. An example of the processed visibility matrix is shown in Figure 3.
Experiments
Settings. We adopt the RoBERTa-large model as our base model and conduct continual pre-training on it. We follow most of the traditional pre-training hyper-parameters of RoBERTa (Liu et al., 2019), such as the training batch size, the optimization method and the model configuration. However, some specific parameters are modified when applying our methods. We present the needed hyper-parameters in Section A of the Appendix.

Overall Results

Table 6 shows the results on the three tasks. Specifically, the LAMA Probing task is used to explicitly evaluate how much knowledge is stored from unstructured text, while the Closed-book QA and KG Reasoning tasks are used to explicitly validate the model's ability to make use of knowledge in other formats.
Firstly, we investigate the masking policy (Section 3.2) in continual pre-training. Our two proposed selective mask policies (RoBERTa-Sel-I and RoBERTa-Sel-A) outperform the original random mask policy (RoBERTa-Cont), obtaining up to 6.1/5.1, 6.5 and 1.4 points of absolute improvement on the three tasks, respectively. This indicates that our methods can enhance RoBERTa with more domain-specific knowledge in the continual pre-training process.
Furthermore, we find that models trained with the Visibility Matrix (VM) mechanism (Section 3.3) achieve substantially better accuracy. For example, RoBERTa-Cont-VM outperforms RoBERTa-Cont by 4.9/4.4, 5.5 and 1.7 absolute points on the three tasks, respectively. Since RoBERTa-Sel-I is superior to RoBERTa-Sel-A on two tasks and three datasets, we further present only the results of RoBERTa-Sel-I combined with the Visibility Matrix mechanism. The combination of the selective mask policy Sel-I and the visibility matrix (RoBERTa-Sel-I-VM) performs best on LAMA Google RE, Closed-book QA and KG Reasoning.
Finally, we observe that at the same number of continual pre-training iterations, our models generally give higher accuracy than RoBERTa-Cont on all tasks, showing that our methods also benefit the efficiency of learning knowledge. In addition, although SSM introduces an external tool (a trained NER tagger) and prior knowledge (an expression to identify dates), our methods perform better than it. This is mainly because SSM masks only entities while leaving other kinds of tokens, which are also important for knowledge probing in the two tasks. SSM outperforms our methods on KG Reasoning, which is natural since KG Reasoning queries contain only entities and relations.
On Knowledge-Baring Tokens
We also evaluate the continually pre-trained models on K-B tokens to see whether the improvement comes from the model's better understanding of K-B tokens. The statistics of the evaluation data are shown in Table 8 (a) in the Appendix.
The results are presented in Table 7. From this table, we can see that our methods help the model better comprehend K-B tokens, showing that the overall better results in Table 6 come from the models' improved comprehension of K-B tokens.
Discovery on Invisible Tokens
We find that the three tokens "<s>", "</s>" and "." receive much attention, consistently ranking in the top 20% within a piece of text. However, if we make one or more of them invisible to other tokens, the performance on the three tasks decreases by at least 5 points. Although they cannot be viewed as knowledge-baring tokens, they are still crucial for knowledge learning. We hypothesize that they store general knowledge about the text.
Related Work
Continual Pre-training of PLMs. Gururangan et al. (2020) reveal that continual pre-training on specific domains contributes to the performance of downstream tasks within the same domains, and that continual pre-training on a task's input data also boosts performance on that dataset. Guu et al. (2020) proposed Salient Span Masking (SSM), which uses a NER tagger and rules to detect named entities and dates, and masks at least one salient span each time during pre-training. In contrast, we do not introduce any external information or prior knowledge to determine the masks. Gu et al. (2020) first use the training pairs of downstream tasks to help continually pre-train a PLM: they find which tokens, when deleted from the input of the task's training data, influence the prediction confidence of the fine-tuned model, and they focus on masking those tokens during continual pre-training. Ye et al. (2021) proposed a two-loop meta-learned policy for continually pre-training BART on Closed-book QA tasks, Knowledge-Intensive Tasks (Petroni et al., 2021) and abstractive summarization. They first continue to pre-train BART with a passage, then train it with a (q, a) pair, and use the validation loss on the pair to update the parameters of the mask policies. The main difference between our work and the above two works is that they use labelled datasets to help continual pre-training, while ours does not use any labelled data.
Knowledge Probing in PLMs. The LAMA (LAnguage Model Analysis) probe (Petroni et al., 2019) first used cloze-style tests to evaluate how much knowledge PLMs contain; the authors manually transferred some questions of SQuAD (Rajpurkar et al., 2016) and some triples of Google RE, T-REx (Elsahar et al., 2018) and ConceptNet (Liu and Singh, 2004) to cloze-style prompts. In this work, we create two variants of LAMA probing and use the LAMA probing test and these variants to evaluate how much knowledge the model has learned from unstructured text. Despite increasing research on knowledge and PLMs, relatively little work associates knowledge from text with the testing questions. Roberts et al. (2020) and Fedus et al. (2021) use one set of question-answer (QA) pairs to fine-tune the model and another set of QA pairs to test it, with no explicit correlation to the pre-training data. We cannot know exactly whether the model learns from the training data, or just solves questions through overlap between the fine-tuning data and the test data, or simply through spurious cues (Niven and Kao, 2019). In contrast, we impose restrictions on the continual pre-training data and the test questions, and dispense with the fine-tuning process, to ensure the model can acquire the needed knowledge only from the training data.
Conclusion
We probe the behaviour of pre-trained language models on unstructured text with respect to knowledge-baring and knowledge-free tokens, by asking the models to do cloze-style tests on our annotated data. We find that: (1) the model performs worse on K-B tokens; (2) the model pays less attention to K-B tokens. To enable the model to better acquire knowledge from unstructured text, we propose two selective mask policies and adopt the visibility matrix mechanism to help the model focus on K-B tokens when learning from unstructured text. To our knowledge, we are the first to explore fully self-supervised learning of knowledge in continual pre-training.
*Ethics / Impact Statement

Our data is processed from open-source datasets, including LAMA SQuAD / LAMA Google RE 2 and SQuAD 1.1 3.
A Hyper-parameters
The traditional hyper-parameters for continual pre-training RoBERTa are given in Table 9. Moreover, for RoBERTa-Sel-I and RoBERTa-Sel-A, we set the first-phase MLM probability to 15% and the second-phase MLM probability to 10%. For RoBERTa-SSM, we adopt a publicly released NER model based on RoBERTa-base and trained on the CoNLL-2003 dataset, 4 and a regular expression to identify named entities and dates, respectively. In the LAMA Probing task, all models are trained for 100 iterations. For the visibility-matrix mechanism, we use the original RoBERTa-large to find the knowledge-free tokens. In the Closed-book QA task, models are trained for 500 iterations. For the visibility-matrix mechanism, we set τ to 3 for the two datasets.

4 huggingface.co/andi611/roberta-base-ner-conll2003
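An illustrative SSM-style preprocessor, using the public NER model named in footnote 4; the date regular expression and the choice to mask every salient span (SSM masks at least one per example) are simplifications of ours, not the authors' released setup.

```python
import re
from transformers import pipeline

ner = pipeline("ner", model="andi611/roberta-base-ner-conll2003",
               aggregation_strategy="simple")
DATE = re.compile(r"\b(\d{1,2}\s+\w+\s+)?\d{4}\b")  # crude stand-in pattern

def salient_spans(text):
    spans = [(e["start"], e["end"]) for e in ner(text)]
    spans += [m.span() for m in DATE.finditer(text)]
    return sorted(spans)

def apply_ssm(text, mask="<mask>"):
    out, prev = [], 0
    for s, e in salient_spans(text):
        if s >= prev:                  # skip overlapping spans
            out += [text[prev:s], mask]
            prev = e
    out.append(text[prev:])
    return "".join(out)
```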
B Details of POS analysis on Samples
We present the detailed results of the part-of-speech tagging analysis of the annotated samples in Table 10.
C Mask Analysis
To compare the three different mask policies, namely RoBERTa-Cont, RoBERTa-Sel-I and RoBERTa-Sel-A, we conduct 10 iterations of continual pre-training on the 200 samples from Section 2 and record the masked tokens.

We then perform a part-of-speech analysis of the masked tokens for the three mask policies, presented in Table 11. From the results, we can see that our two selective mask policies choose more nouns, numbers, verbs and adjectives to mask than the random mask policy.
We also calculate the K-B / K-F ratio of masked tokens for the three mask policies and list the result in Table 12. From the table, it can be seen that our two selective mask policies can significantly increase the proportion of K-B tokens in the masked tokens.
Figure 1: Examples of knowledge-baring (K-B) tokens and knowledge-free (K-F) tokens.

Figure 2: The behaviour of RoBERTa on the probing samples: (a) the model performs worse on knowledge-baring tokens than on knowledge-free tokens; (b) knowledge-baring tokens are likely to receive less attention in the self-attention process.

Figure 3: Illustration of the visibility matrix. An orange square means the left token can see the top token, while a gray square means it cannot. In this example, the tokens "are" and "the" are invisible to the other tokens.
Table 1: The number of tokens labelled as knowledge-baring and knowledge-free for the samples of the two datasets.
Table 2: Probing accuracy on the two types of tokens for the original model (RoBERTa-Orig) and the continued pre-trained model (RoBERTa-Cont), both with the original pre-training mask policy. Both models perform worse on knowledge-baring tokens.
Table 3 (data; caption in Section 2.2):
(a) On LAMA SQuAD samples
Attention rank:   0-10% | 10-20% | 20-30% | 30-40% | 40-50% | 50-60% | 60-70% | 70-80% | 80-90% | 90-100% | Corr*
Original RoBERTa: 85.7% | 78.9%  | 72.3%  | 69.3%  | 58.1%  | 50.6%  | 46.4%  | 22.4%  | 5.5%   | 0.5%    | -1.0
RoBERTa-Cont:     75.1% | 72.8%  | 64.9%  | 65.0%  | 57.4%  | 53.3%  | 53.9%  | 40.7%  | 9.4%   | 0.5%    | -0.98
(b) On LAMA Google RE samples
Original RoBERTa: 97.6% | 92.2%  | 84.7%  | 75.4%  | 70.9%  | 59.1%  | 53.2%  | 42.9%  | 10.6%  | 4.9%    | -1.0
RoBERTa-Cont:     81.7% | 77.6%  | 77.3%  | 75.8%  | 68.9%  | 63.4%  | 61.9%  | 50.2%  | 38.1%  | 5.8%    | -1.0
Table 4: Examples of the three tasks. The training texts are all unstructured and label-free; in validation/test, the model needs to predict the <mask> token(s).
LAMA Probing. Train text: "... Kenya ranks low on Transparency International's Corruption Perception Index (CPI), a metric which attempts to gauge the prevalence of public sector corruption in various countries. ..." Test query: "On the CPI scale, Kenya ranks <mask>." Test answer: "low".
Closed-book QA. Train text: "... The capital of the Ottoman empire was Istanbul. ..." Test query: "What was the capital of the Ottoman empire? <mask>" Test answer: "Istanbul".
KG Reasoning. Train text: "Shlomo Shriki, Israeli painter and artist, born in Morocco (1949), grew up and was educated in Kibbutz Yifat." Test query: "Shlomo Shriki, place of birth, <mask>" Test answer: "Morocco".

Table 5: Statistics of the three tasks (four datasets).
Task | Training passages | Validation queries | Testing queries
LAMA Probing (LAMA SQuAD) | 271 | 152 | 152
LAMA Probing (LAMA Google RE) | 5516 | 2758 | 2758
Closed-book QA | 271 | 152 | 152
KG Reasoning | 5516 | 2206 | 2205
Table 6: Accuracy on the three knowledge-intensive tasks. The first block gives the results of the original and continued pre-trained RoBERTa. The second and third blocks show the performance of the improved models in terms of the selective mask policy (Section 3.2) and the visibility matrix (Section 3.3). The numbers in brackets are absolute improvements over the continued pre-trained RoBERTa.
Model | LAMA SQuAD | LAMA Google RE | Closed-book QA | KG Reasoning
RoBERTa-Orig | 16.4 | 24.6 | 0.0 | 2.6
RoBERTa-Cont | 33.6 (+0.0) | 58.4 (+0.0) | 37.9 (+0.0) | 28.1 (+0.0)
RoBERTa-SSM | 37.5 (+3.9) | 62.6 (+4.2) | 42.7 (+4.8) | 31.2 (+3.1)
RoBERTa-Sel-A | 35.9 (+2.3) | 62.4 (+4.0) | 44.4 (+6.5) | 27.7 (-0.4)
RoBERTa-Sel-I | 39.7 (+6.1) | 63.5 (+5.1) | 43.6 (+5.7) | 29.5 (+1.4)
RoBERTa-Cont-VM | 38.5 (+4.9) | 62.8 (+4.4) | 43.4 (+5.5) | 29.6 (+1.7)
RoBERTa-Sel-I-VM | 37.2 (+3.6) | 63.9 (+5.5) | 44.8 (+6.7) | 30.7 (+2.6)
Table 7: Probing results on the annotated knowledge-baring tokens.
Model | LAMA SQuAD | LAMA Google RE
RoBERTa-Orig | 13.9% | 38.6%
RoBERTa-Cont | 38.4% | 67.2%
RoBERTa-Sel-A | 41.8% | 71.4%
RoBERTa-Sel-I | 42.6% | 71.6%
RoBERTa-Cont-VM | 41.9% | 71.0%
Table 8: Data statistics after the 200 samples are processed for the analyses in Section 2.1 and Section 2.2.
(a) Data after every token of the 200 samples is masked separately, used for the accuracy analysis in Section 2.1.
Dataset | #Sentences | #Masked tokens
LAMA SQuAD (knowledge-baring) | 609 | 739
LAMA SQuAD (knowledge-free) | 524 | 532
LAMA Google RE (knowledge-baring) | 1268 | 1715
LAMA Google RE (knowledge-free) | 865 | 975
(b) Data used for the attention analysis in Section 2.2.
Dataset | #Sentences | #Tokens
LAMA SQuAD | 100 | 1471
LAMA Google RE | 100 | 2903
Table 9: Hyper-parameters for continual pre-training RoBERTa in this work.
Hyperparameter | Value
Learning rate | 1e-4
Train batch size | 256 (passages)
MLM probability | 0.15
Max token length | 512
Optimizer | Adam
Adam ε | 1e-6
Adam β1 | 0.9
Adam β2 | 0.98
Weight decay | 0.01
Learning rate decay | Linear
Table 10: Part-of-speech results on our annotated samples. Each entry gives the tag, the count of the tag and its proportion; for each token type in each dataset, only the top-15 tags are shown.
(a) LAMA SQuAD samples
Knowledge-baring tokens: NN 243 (0.329); NNP 179 (0.242); JJ 68 (0.092); NNS 49 (0.066); VBN 46 (0.062); VBD 21 (0.028); CD 20 (0.027); VBZ 19 (0.026); VB 15 (0.020); IN 14 (0.019); POS 8 (0.011); RB 7 (0.009); NNPS 6 (0.008); VBP 5 (0.007); VBG 5 (0.007)
Knowledge-free tokens: IN 149 (0.280); DT 109 (0.205); . 100 (0.188); VBZ 39 (0.073); VBD 31 (0.058); TO 14 (0.026); , 13 (0.024); CC 9 (0.017); VB 8 (0.015); RB 8 (0.015); WDT 7 (0.013); PRP$ 6 (0.011); VBP 5 (0.009); WRB 4 (0.008); MD 4 (0.008)
(b) LAMA Google RE samples
Knowledge-baring tokens: NNP 657 (0.383); NN 419 (0.244); CD 205 (0.120); VBN 102 (0.059); JJ 91 (0.053); IN 38 (0.022); VBD 36 (0.021); NNS 23 (0.013); FW 22 (0.013); PRP 19 (0.011); VBG 14 (0.008); DT 12 (0.007); VBZ 11 (0.006); VBP 10 (0.006); RB 9 (0.005)
Knowledge-free tokens: IN 230 (0.236); , 143 (0.147); DT 113 (0.116); . 110 (0.113); CC 58 (0.059); -RRB- 58 (0.059); -LRB- 58 (0.059); VBD 53 (0.054); VBZ 40 (0.041); HYPH 26 (0.027); : 18 (0.018); RB 16 (0.016); WP 11 (0.011); PRP$ 11 (0.011); WRB 4 (0.004)
Table 11: Part-of-speech analysis of the masked tokens for the three mask policies (tag and proportion; top-15 tags per policy).
RoBERTa-Cont (Random): NNP 0.206; NN 0.151; IN 0.117; DT 0.067; CD 0.054; JJ 0.048; , 0.047; VBN 0.045; VBD 0.043; . 0.042; VBZ 0.031; NNS 0.021; CC 0.019; -RRB- 0.015; RB 0.013
RoBERTa-Sel-I: NNP 0.240; NN 0.181; IN 0.097; CD 0.073; DT 0.051; JJ 0.047; VBN 0.038; VBD 0.037; , 0.034; . 0.033; VBZ 0.024; NNS 0.023; CC 0.016; -RRB- 0.012; RB 0.012
RoBERTa-Sel-A: NNP 0.265; NN 0.167; IN 0.112; CD 0.086; JJ 0.049; DT 0.048; VBN 0.033; , 0.026; VBD 0.024; . 0.021; VBZ 0.021; NNS 0.018; CC 0.013; -LRB- 0.012; RB 0.012
Table 12: The K-B to K-F ratios of the masked tokens for the three mask policies, computed on the samples annotated with K-B and K-F labels.
Method | K-B / K-F ratio
RoBERTa-Cont (Random) | 1.47:1
RoBERTa-Sel-I | 2.16:1
RoBERTa-Sel-A | 2.33:1
2 https://dl.fbaipublicfiles.com/LAMA/data.zip
3 https://rajpurkar.github.io/SQuAD-explorer/
Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, and Hsiao-Wuen Hon. 2020. UniLMv2: Pseudo-masked language models for unified language model pre-training. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 642-652. PMLR.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.

Hady Elsahar, Pavlos Vougiouklis, Arslen Remaci, Christophe Gravier, Jonathon Hare, Frederique Laforest, and Elena Simperl. 2018. T-REx: A large scale alignment of natural language with knowledge base triples. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).

W. Fedus, Barret Zoph, and Noam M. Shazeer. 2021. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. ArXiv, abs/2101.03961.

Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, and Maosong Sun. 2020. Train no evil: Selective masking for task-guided pre-training. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6966-6974, Online. Association for Computational Linguistics.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.

Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-augmented language model pre-training. arXiv preprint arXiv:2002.08909.

Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1000-1008, Online. Association for Computational Linguistics.

Hugo Liu and Push Singh. 2004. ConceptNet: a practical commonsense reasoning tool-kit. BT Technology Journal, 22(4):211-226.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Christopher Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, Baltimore, Maryland. Association for Computational Linguistics.

Timothy Niven and Hung-Yu Kao. 2019. Probing neural network comprehension of natural language arguments. CoRR, abs/1907.07355.

Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, and Sebastian Riedel. 2021. KILT: a benchmark for knowledge intensive language tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2523-2544, Online. Association for Computational Linguistics.

Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.

Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392. Association for Computational Linguistics.

Adam Roberts, Colin Raffel, and Noam Shazeer. 2020. How much knowledge can you pack into the parameters of a language model? In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5418-5426, Online. Association for Computational Linguistics.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, pages 5998-6008. Curran Associates, Inc.

Cunxiang Wang, Pai Liu, and Yue Zhang. 2021a. Can generative pre-trained language models serve as knowledge bases for closed-book QA? In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 3241-3251, Online. Association for Computational Linguistics.

Ruize Wang, Duyu Tang, Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and Ming Zhou. 2021b. K-Adapter: Infusing knowledge into pre-trained models with adapters. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1405-1418, Online. Association for Computational Linguistics.

Qinyuan Ye, Belinda Z. Li, Sinong Wang, Benjamin Bolte, Hao Ma, Wen-tau Yih, Xiang Ren, and Madian Khabsa. 2021. On the influence of masking policies in intermediate pre-training. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 7190-7202, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Zhengyan Zhang, Xu Han, Zhiyuan Liu, Xin Jiang, Maosong Sun, and Qun Liu. 2019. ERNIE: Enhanced language representation with informative entities. In Proceedings of ACL 2019.
| [] |
[
"Challenges in Persian Electronic Text Analysis",
"Challenges in Persian Electronic Text Analysis"
] | [
"Behrang Qasemizadeh \nText and Speech Technology Ltd\nTehranIran\n",
"Saeed Rahimi \nFaculty of Literature and Humanities\nTehran University\nEnqelab, TehranIran\n",
"Mehdi Safaee Ghalati \nFaculty of Literature and Humanities\nTehran University\nEnqelab, TehranIran\n"
] | [
"Text and Speech Technology Ltd\nTehranIran",
"Faculty of Literature and Humanities\nTehran University\nEnqelab, TehranIran",
"Faculty of Literature and Humanities\nTehran University\nEnqelab, TehranIran"
] | [] | Farsi, also known as Persian, is the official language of Iran and Tajikistan and one of the two main languages spoken in Afghanistan. Farsi enjoys a unified Arabic script as its writing system. In this paper we briefly introduce the writing standards of Farsi and highlight problems one would face when analyzing Farsi electronic texts, especially during the development of Farsi corpora, regarding the transcription and encoding of Farsi e-texts. The points mentioned may sound easy, but they are crucial when developing and processing written corpora of Farsi. | null | [
"https://arxiv.org/pdf/1404.4740v1.pdf"
] | 8,325,426 | 1404.4740 | c3f41cb0b5f70ec0f1515a01c18674c241a34b7e |
Challenges in Persian Electronic Text Analysis
Behrang Qasemizadeh
Text and Speech Technology Ltd
Tehran, Iran
Saeed Rahimi
Faculty of Literature and Humanities
Tehran University
Enqelab, Tehran, Iran
Mehdi Safaee Ghalati
Faculty of Literature and Humanities
Tehran University
Enqelab, Tehran, Iran
Challenges in Persian Electronic Text Analysis
Persian, Orthography, Standard, Natural Language Processing
Farsi, also known as Persian, is the official language of Iran and Tajikistan and one of the two main languages spoken in Afghanistan. Farsi enjoys a unified Arabic script as its writing system. In this paper we briefly introduce the writing standards of Farsi and highlight problems one would face when analyzing Farsi electronic texts, especially during the development of Farsi corpora, regarding the transcription and encoding of Farsi e-texts. The points mentioned may sound easy, but they are crucial when developing and processing written corpora of Farsi.
INTRODUCTION
People in different countries use different characters to represent the words of their native languages. With library automation and the development of networked information structures, the problem of finding a unique way to represent information has become much more complex [1] [2]. Unicode [4] was devised so that one unique code is used to represent each character, even if that character is used in multiple languages [3]. In this paper, we describe the transcription of the Farsi language in the Unicode framework and we discuss the challenges one faces when processing Farsi e-texts.
Old Farsi was based on the cuneiform writing system (a mostly syllabic style) as early as the 6th century B.C. Later, the Persians developed a new alphabet called Pahlavi, derived from the writing system of Aramaic, a Semitic language, to replace the previous cuneiform alphabet. However, after the Arab conquest in 651, the Persians adopted a unified Arabic script for writing. One should note that despite their shared alphabet, Farsi and Arabic are entirely different languages: they are not genealogically related (they belong to separate language families, namely Indo-European and Afro-Asiatic) and naturally have different phonology and grammar [5] [6].
With the expansion of Islam, the Arabic script was adopted as a writing system for languages other than Arabic. As many of these languages, among them Farsi, Urdu, and Sindhi, had to depict a greater number of phonemes in written form than the Arabic language, the repertoire of Arabic characters was extended. The original Arabic alphabet consists of 28 characters. The modern Farsi writing system uses the Arabic alphabet, but with the addition of four letters which do not occur in Arabic: پ, چ, ژ and گ. Additionally, it changes the shape of another two, i.e. "ی" and "ک". Not all of the sounds represented in the Arabic alphabet exist in Farsi; as a result, more than one letter may represent one sound. For example, there are two letters in Farsi for the sound /t/ (ت, ط) and three for the sound /s/ (س, ص, ث). Salient characteristics of the Arabic script are: the existence of various connecting letters, varying graphic forms for many letters depending on their position in a word, varying letter width, the absence of full-size characters for vowels (vowels are represented with particular signs above and below characters), the existence of a number of digraphs and composite letters, a writing direction from right to left, and the absence of upper-case and lower-case letters. The general rules of the Arabic writing system are followed by the writing system of Farsi.
In the following, we introduce the writing system of Farsi in detail and underline the problems this writing system poses when analyzing Farsi e-texts. The rest of the paper is organized as follows: section 2 introduces Farsi character encoding; section 3 describes the orthography of Farsi; section 4 gives an overview of common ambiguities in the analysis of Farsi e-texts. Finally, we conclude in section 6.
FARSI CHARACTER ENCODING

The definition and universal implementation of character sets for the presentation, interpretation, and exchange of multi-script data has been a problem ever since computers were adapted for Information Retrieval applications. As for the Farsi script, various attempts have been made in the past to create a universally acceptable and technically viable encoding system. The most prominent result was the Iran System. Note that Iran System is not a standard; it is a corporate character set that found its way through the Iranian user community, and therefore there is no standardization paper to refer to. The characters in the Iran System standard are saved and transmitted in a visual order. There are two, three, or four codes for each letter in most cases: one each for the initial, medial, isolated, and final forms of the letters, some of which are unified in most cases. In other words, in character encoding systems before Unicode, glyphs were encoded instead of conceptual letters. Later, a standard for 8-bit Farsi character encoding was proposed in [7] by the Institute of Standards & Industrial Research of Iran, but it did not catch on with the user community.
Subsequently, ISIRI 6219:2002 (Information Technology - Persian Information Interchange and Display Mechanism, using Unicode) was proposed as the Farsi standard for using Unicode in digital environments. This standard designates a subset of the Arabic character set in Unicode to be used by user communities for Farsi. In this document, we refer to ISIRI 6219:2002 as the Farsi Standard Character Set.
Unicode standard version 4.0 reserves the range 0600 to 06FF for Arabic characters. Among the 227 Arabic script signs currently encoded, there are punctuation marks, pronunciation marks, symbols for honorifics, and Koranic annotation signs, as well as all letters representing consonants in Arabic and the other languages using the Arabic script. An important design principle observed in the Unicode standard and relevant to the representation of the Arabic script is "characters, not glyphs". Some Arabic letters can have up to four different positional forms depending on their position relative to other letters or spaces. According to this principle, there is no individual code for each visual form (glyph) that an Arabic character can take in varying contexts, but only one code for each actual letter. The correct glyphs to be displayed for a particular sequence of Arabic characters can be determined by an algorithm. The encoding of Arabic-script texts uses "first-to-last" logical order instead of "right-to-left" or "left-to-right" order, so that texts containing both "right-to-left" and "left-to-right" strings can be represented correctly.
As mentioned earlier, an algorithm is responsible for displaying the correct glyphs of characters. For this reason, characters are classified into different shaping classes according to their ability to join their previous or next characters. For the proper display of characters, two special characters, namely Zero Width Joiner (0x200D) and Zero Width Non Joiner (0x200C), are added to the character codes. The use of one of these special characters after a code means that a ZWJ or a ZWNJ should be added after the character if the character is not followed by a "right-join causing" character or a "non-joining" character, respectively. Unfortunately, even with the developed standard for using these characters in Farsi electronic texts, the user community does not respect it; this causes ambiguity in the manipulation of Farsi e-texts, as we discuss later.
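As a minimal illustration (in Python; the example word and variable names are ours, and the rendered shapes depend on the font and shaping engine), the following shows how ZWNJ is inserted between a prefix and a stem as an invisible non-joining separator:

    # Minimal illustration of ZWNJ in Farsi text (Python 3).
    ZWNJ = "\u200C"  # ZERO WIDTH NON-JOINER
    ZWJ = "\u200D"   # ZERO WIDTH JOINER

    # "miravam" ("I go"): the prefix should be separated from the stem by
    # ZWNJ so the glyphs do not join, without inserting a full space.
    joined = "می" + "روم"            # no separator: glyphs join (non-standard)
    with_zwnj = "می" + ZWNJ + "روم"  # standard orthography: the "half space"

    print(len(joined), len(with_zwnj))  # the ZWNJ adds exactly one code point
    print(ZWNJ in with_zwnj)            # True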
FARSI ORTHOGRAPHY
Iran's Academy of Farsi Language and Literature 2 is a governmental body presiding over the use of Farsi in Iran. The Academy has created an official orthography of Farsi, entitled 'Dastoor-e Khatt-e Faarsi' (Farsi Script Orthography). The official orthography of Farsi can be found in [9]. According to the proposed orthography, Farsi affixes must be written attached to their stem. In some cases, e.g. when the stem ends in the letter "ه" (h), affixes must be attached to the stem with a short space character before them. Here, we take the ZWNJ character as the short space character. Most of the ambiguities in Farsi morphology and in part-of-speech tagging of Farsi e-texts can be avoided using the standard proposed by the Academy, as discussed in the next section.
When starting the computational analysis of Farsi, one faces many ambiguities that are rooted in the characteristics of the Farsi language and its special transcription. In this section we discuss these ambiguities and propose solutions for some of these problems.
Ambiguities in Character Manipulation
In Farsi, the use of Arabic characters instead of the standard Farsi ones is possible. A common mistake is associated with the letters "ی" and "ي", as well as "ک" and "ك". Problems arise when looking up words in dictionaries or building frequency profiles of words, due to this inconsistency in encoding. This problem even causes different results when using keyword-based search engines such as Google. A similar problem occurs when a short vowel enters a word's transcription. Short vowels do not appear alone in Farsi transcriptions, but when they are used in a word, they are coded independently. Therefore, there is a difference between the Unicode strings of the same word with and without short vowels included. This can lead to an unsuccessful search in a dictionary or lexicon.
Another problem concerns the "TATWEEL" character. As mentioned among the characteristics of Arabic transcription, letters can appear with varying width. This is purely a visual characteristic and does not influence the meaning of a word. The "TATWEEL" character, with code 0640, is used to support this feature. This character must therefore be deleted from the input string when looking up dictionaries, etc.
To solve these problems, a standardization procedure must be applied to input Farsi e-texts. This standardization procedure can vary from one application to another, but the use of a standard character set for consonant letters is an obvious step; this can be done using a map from Arabic characters to Farsi ones. For some cases, such as keyword-based searching, it is recommended to omit short vowels from the input in order to obtain more consistent searches, although omitting short vowels clearly results in a loss of information; these vowels can even be used as cues for solving the problem of homographs in Farsi transcription. We nevertheless recommend omitting these vowels from input texts, even at the cost of losing information, because the use of short vowels in Farsi is rare and highly context-dependent.
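A minimal sketch of such a standardization step is shown below (in Python; the mapping covers only the two letter pairs discussed above, and a real system would need a fuller table):

    # Sketch of an input standardization step for Farsi e-texts: map Arabic
    # code points to their Farsi counterparts, delete TATWEEL (U+0640), and
    # strip short-vowel diacritics. The mapping table is deliberately small.
    ARABIC_TO_FARSI = {
        "\u064A": "\u06CC",  # ARABIC LETTER YEH -> ARABIC LETTER FARSI YEH
        "\u0643": "\u06A9",  # ARABIC LETTER KAF -> ARABIC LETTER KEHEH
    }
    TATWEEL = "\u0640"
    SHORT_VOWELS = {chr(c) for c in range(0x064B, 0x0653)}  # fathatan..sukun

    def standardize(text):
        out = []
        for ch in text:
            if ch == TATWEEL or ch in SHORT_VOWELS:
                continue  # purely visual or optional marks: drop them
            out.append(ARABIC_TO_FARSI.get(ch, ch))
        return "".join(out)

    print(standardize("كتاب"))  # the Arabic kaf is mapped to the Farsi keheh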
Ambiguity at Word Boundaries
In Farsi, word boundaries can be delimited by spaces, punctuation, and the forms of the characters indicating their position within a word. Word boundaries are usually denoted by a space. Under the official orthography of Farsi, the space is an unambiguous word boundary. In some cases, such as compound words and light verb constructions, ZWNJ is used for separating the different parts. Unfortunately, the user communities, even organizations such as newspapers, do not observe this. For this reason, in ordinary text the space cannot be considered an unambiguous word boundary, and vice versa; a conflict between the ZWNJ and space characters thus arises that must be taken into account.
A full stop marks a sentence boundary, but it may also appear in abbreviations or acronyms. The slash (/) is used in number and date structures. The dash (-) can also be used to separate compound words. Other punctuation marks, including the comma, quotes, brackets, question mark, and colon, unambiguously indicate word boundaries.
As mentioned above, Arabic characters take one of four forms: initial, medial, final, and stand-alone, and the final form of a character may indicate the end of a word. Whether the character form can be used as a delimiter depends on the encoding structure. This condition is not applicable when using Unicode as the encoding system; relying on it is a common mistake in Farsi tokenization, as can be seen in [10] [11]. The condition can be used for word boundary detection under old encoding systems, because they use different character codes for different glyphs of a letter; yet even then it can raise ambiguity, as shown in [11]. One may argue that by using ZWNJ as an end-of-word marker we can overcome this problem, but this is not correct under the official orthography of Farsi.
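The sketch below (our illustration) shows a whitespace-and-punctuation tokenizer that deliberately treats ZWNJ as word-internal, in line with the official orthography:

    # Sketch of a tokenizer in which ZWNJ is NOT a word boundary.
    import re

    ZWNJ = "\u200C"

    def tokenize(text):
        # Split on spaces and common punctuation only; ZWNJ stays in tokens.
        return [t for t in re.split(r"[ \t\n.,:;!?()«»]+", text) if t]

    sample = "کتاب" + ZWNJ + "ها را خواندم."
    print(tokenize(sample))  # ['کتاب\u200cها', 'را', 'خواندم']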
Ambiguity in Morphology
Ambiguity in the morphological analysis of Farsi words arises for two reasons: homograph words, and ambiguity caused by word boundaries. According to [11], in Farsi a single surface form can represent different morphemes. In addition, short vowels are not marked in written texts, which results in different possibilities for analysis. The word "ببر" (bbr), for instance, can be pronounced with different vowel combinations, resulting in three possible common lexical elements: babr, which means "tiger"; bebar, which means "take"; and bebor, which means "cut".
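An illustrative lexicon lookup (the glosses follow the example above; the data structure itself is our stand-in) makes the resulting ambiguity concrete:

    # One unvowelled surface form maps to several lexical entries.
    LEXICON = {
        "ببر": [
            {"lemma": "babr", "gloss": "tiger"},
            {"lemma": "bebar", "gloss": "take"},
            {"lemma": "bebor", "gloss": "cut"},
        ],
    }

    for entry in LEXICON.get("ببر", []):
        print(entry["lemma"], "->", entry["gloss"])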
As we mentioned in 5.3, user communities use the space character instead of ZWNJ, the short space. For this reason, certain bound affixes appear as free morphemes. For example, the affix "ها" (ha) can be written in three different ways, with the same meaning, as shown below:
o As a bound morpheme: "کتابها" (ketabha), which means "books" in English.
o As a free morpheme, with a space between the root and the bound morpheme: "کتاب ها" (ketab ha).
o As a free morpheme, with a short space (ZWNJ) instead of a space character: "کتاب‌ها".
The last format is the correct orthography according to [9].
A similar problem usually occurs with some other lexical elements, such as the preposition "به" (be), the postposition (overt object marker) "را" (ra), or the conjunction "که" (ke), which usually appear as separate words in written texts but can also be found as attached morphemes. Under the official orthography, these lexical elements must be written separately. Another solution for detached bound morphemes is morpheme-based processing of Farsi written texts.
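A sketch of re-attaching detached bound morphemes with ZWNJ is shown below (the affix list is a small illustrative stand-in; a real system would need a fuller inventory and morphological checks):

    # Sketch: replace the space before known bound morphemes with ZWNJ.
    import re

    ZWNJ = "\u200C"
    BOUND_AFFIXES = ["ها"]  # e.g. the plural suffix "ha"; illustrative only

    def reattach(text):
        for affix in BOUND_AFFIXES:
            # "کتاب ها" (space) -> "کتاب" + ZWNJ + "ها", per the orthography
            text = re.sub(" +" + affix + r"\b", ZWNJ + affix, text)
        return text

    print(reattach("کتاب ها"))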
Ambiguity while Detecting Proper Nouns in Farsi
Since there are no capital letters in the Arabic transcription, and consequently none in the Farsi transcription, detecting proper nouns in Farsi raises some problems. In Farsi, there is no general rule for distinguishing proper names from other nouns. However, some heuristics can be used to distinguish proper names; Rezaie addresses a solution to this problem in [12].
Ambiguity in Farsi Syntax Analysis
Another ambiguity arises in possessive constructions in Farsi due to its Arabic transcription. The element joining the constituents of a Farsi noun phrase to each other is the Ezafe clitic. However, it is usually pronounced as the short vowel /e/ and is therefore not marked in written texts. The result, in Farsi written texts, is a series of consecutive nouns without any overt links or boundaries. In [9] it is recommended to include Ezafe in written texts, but as mentioned in section 4.1, this can itself introduce ambiguities when manipulating Farsi texts.
CONCLUSION
In this paper, we discussed problems that may occur when analyzing Farsi e-texts due to the inconsistent representation of Farsi characters and the language's special orthography. Although the points we have mentioned here may sound trivial, failing to consider them can lead to results that are far from the real situation.
Our study shows that the combination of the Farsi orthography proposed in [9] and the standard in [8] is the best way to remove ambiguities associated with Farsi e-texts. Adopting these standards will result in consistency between Farsi and other languages, which is especially valuable when developing parallel corpora. We still have to find a solution for converting existing Farsi texts to these standards. As a step toward this goal, we have started to develop a standardization tool to cope with the problems mentioned above, and we hope to make it available as an online tool for the Farsi user community. In addition, the software community needs to respect these standards; in particular, the keyboard layouts they prepare must be compatible with the standards proposed in [8]. Also, given the importance of the ZWNJ character in Farsi transcription, this character should be placed on keyboard layouts in a way that makes it easily accessible to users.
Corresponding Author: Behrang Qasemi Zadeh, qasemizadeh@gmail.com
http://www.persianacademy.ir
REFERENCES
[1] Erickson, J.C., Options for Presentation of Multi-Lingual Text: Use of the Unicode Standard, Library Hi Tech, Vol. 15, No. 3-4, 1997.
[2] Wiederhold, Lutz, Unicode and Arabic Script, Workshop "Unicode Und Mehrschriftlichkeit In Katalogen", SBB PK, Berlin, 2003.
[3] Wells, J.C., Orthographic Diacritics and Multilingual Computing, Language Problems and Language Planning, Vol. 24, No. 3, 2000.
[4] The Unicode Standard, Http://www.Unicode.org/.
[5] Fischer, Steven Roger, History of Writing, Reaktion Books, 2001.
[6] Daniels, Peter T. and Bright, William, The World's Writing Systems, Oxford University Press, 1996.
[7] ISIRI 3342, Farsi 8-Bit Coded Character Set for Information Interchange, Http://www.Isiri.Org/Std/3342.htm, document in Farsi.
[8] ISIRI 6219:2002, Information Technology - Persian Information Interchange and Display Mechanism, Using Unicode, Http://www.shci.Ir/Download/Unicode%20finalversion.pdf, document in Farsi.
[9] Iran's Academy of Persian Language and Literature, Official Persian Orthography, Http://www.Persianacademy.Ir/Books/Dastoor-E%20khatt.pdf.
[10] Mazdak, Nima, FarsiSum - A Persian Text Summarizer, Master Thesis, Department of Linguistics, Stockholm University, Http://www.Dsv.Su.Se/Hercules/Papers/Farsisum.pdf, 2004.
[11] Megerdoomian, Karine and Zajac, Rémi, Processing Persian Text: Tokenization in the Shiraz Project, NMSU, CRL, Memoranda in Computer and Cognitive Science, 2000.
[12] Rezaie, Siamak, Tokenizing an Arabic Script Language, Arabic Language Processing: Status and Prospects, ACL/EACL, 2001.
Resolution of Difficult Pronouns Using the ROSS Method
Glenn R Hofford
Software Engineering Concepts, Inc
Date of Publication: 11/14/2014 (Version 1.0)
Contact: glennhofford(at)
Abstract

A new natural language understanding method for disambiguation of difficult pronouns is described. Difficult pronouns are those pronouns for which a level of world or domain knowledge is needed in order to perform anaphoral or other types of resolution. Resolution of difficult pronouns may in some cases require a prior step involving the application of inference to a situation that is represented by the natural language text. A general method is described: it performs entity resolution and pronoun resolution. An extension to the general pronoun resolution method performs inference as an embedded commonsense reasoning method. The general method and the embedded method utilize features of the ROSS representational scheme; in particular the methods use ROSS ontology classes and the ROSS situation model. ROSS ontology classes include the object frame class and the behavior class. The ROSS behavior class defines associations among a set of objects that have attribute-based state descriptions and nested behaviors. In addition to the classes of the ontology, the methods use several working memory data structures, including a spanning information data structure and a pronoun feature set structure. The ROSS internal situation model (or "instance model") is an instance of a meaning representation; it is a spatial/temporal representation of declarative information from the input natural language text. A new representational formalism called "semantic normal form" (SNF) is also introduced. This is a specification at the abstract level for a set of data structures that are used to store the syntax and content of input natural language text that has been transformed and augmented with semantic role and other information. It is an intermediate form of the input information that is processable by a semantic NLU engine that implements the pronoun resolution method. The overall method is a working solution that solves the following Winograd schemas: a) trophy and suitcase, b) person lifts person, c) person pays detective, and d) councilmen and demonstrators. Many of the features described in this paper have been productized: the functionality is implemented in an NLU system that is available for use via a RESTful API server (currently English-only).
Introduction and Background
Disambiguation of so-called "difficult" pronouns is a challenging problem for natural language processing. Although statistics-based approaches are at least partly effective for some cases, the problem calls for a semantic approach that addresses the representational aspects using deeper and more powerful techniques that involve comprehension of the meaning of natural language. The application of world knowledge and domain knowledge seems to be an essential component of the cognitive processes that are used by us as humans in order to comprehend language, i.e. to grasp its meaning in a manner that allows for a reasoning process that reaches conclusions regarding the meaning (i.e. the referent) of pronouns in natural language text or spoken discourse. The challenge lies in somehow emulating this approach in software.
A new general-use ontology-based artificial intelligence method is presented that uses a complex multi-stage set of algorithmic processes that effectively resolves important categories of ambiguous pronouns. The method creates an internal situation model of the subject matter of natural language text that enables identification of the referent and the antecedent of a pronoun 1 . A ROSS situation model is an internal memory representation of instances of objects and processes that is constructed as part of the natural language understanding process. The method uses an ontology-based approach that involves ROSS object frame classes and behavior classes. The classes of the ontology are directly involved in the creation of the situation model as they provide a basis for the instantiation of object instances and process instances.
An important extension to the basic method is also described. This extension involves an embedded inference process that performs commonsense reasoning and that is invoked for pronoun resolution problems that are not adequately handled by the basic resolution method. The embedded inference routine specifically handles natural language sentences wherein there is an indirect association between the semantics for the unresolved pronoun and the set of candidate referents. (A specific example is presented from Winograd schema #1, "councilmen and demonstrators".)
A second extension is described as it applies to a solution for Winograd schema #2 ("trophy and suitcase"). With this extension, the basic (general) method can be supplemented by the use of a set of ontology classes and situation model features that model not only the semantics of the natural language text, but also the "meta" entities and aspects of the communication process itself: these include the "communicative agent" (the talker), the information that is communicated, the receiving, or "self" agent (the listener), and cognitive processes on the part of the communicative agent or agents.
Where the ontology is small, the task of difficult pronoun resolution can be addressed without the use of probabilistic representations in the ontology and situation model and without probabilistic reasoning. However, the introduction of probability data into the ontology becomes necessary in order to scale the method. The ROSS representational scheme has support for probability fields for attribute types, attributes, structure and for nested behaviors within behavior classes. The use of the behavior class probability field is demonstrated by the solution to variant #1 of Winograd schema #1 ("councilmen … feared violence").
The ROSS Representational Method
The ROSS method (Hofford 2014 (a, b)) is a new approach in the area of representation that is useful for many artificial intelligence and natural language understanding (NLU) tasks. (ROSS stands for "Representation", "Ontology", "Structure", "Star" language.) ROSS is a physical symbol-based representational scheme. ROSS provides a complex model for the declarative representation of physical structure and for the representation of processes and causality. From the metaphysical perspective, the ROSS view of external reality involves a 4D model, wherein discrete single-time-point unit-sized locations with states are the basis for all objects, processes and aspects that can be modeled.
The ROSS method is also capable of the representation of abstract things; they are modeled by grounding them in a 4D space-time model. Abstract entities that are modeled include the entities that are involved in the representation of representation ("meta-representation"), including representation of intelligent agent mental representations, cognition and communication.
ROSS is used in two ways in support of the pronoun resolution and inference methods: 1) the Star ontology language is used for the specification of object frame classes and for rule-like constructs referred to as behavior classes in the ontology/knowledge base, and 2) the formal scheme of the ROSS situation model (also called "instance model") is used for the specification of meaning representations that represent the semantics of a particular situation.
The ontology+knowledge base repository stores supporting definitions, object frame classes, and representations of conceptual, or world knowledge that use the behavior class. The ontology and knowledge base is organized into three tiers: an upper tier contains supporting definitions and high-level abstract classes, a middle tier contains classes whose primary purpose is functional: middle tier classes are used in many behavior classes, and a lower tier of object classes contains a large number of classes that are distinguishable from other similar classes by a few features. Examples of lower tier classes include "house cat", "trophy", and "father-person".
The internal instance model that is used during processing is a proprietary feature of ROSS that is used for representing factual information about particular situations (past, present or hypothetical situations).
Background: Winograd Schema Challenge
The Winograd Schema (WS) Challenge (Davis 2011a) is a set of tests for assessing whether or not an automated natural language understanding system has capabilities for "thinking": does the system use and exhibit true intelligence in some sense, or is it responding to human-entered natural language input using canned (hard-coded) replies, "tricks", deception, diversion from the topic, etc.? The WS challenge includes a variety of schemas: a schema consists of a pair of descriptive sentences and an associated pair of questions that test whether or not the system has understood the sentence and its alternate. The NLP task involves some form of anaphora or coreference resolution for an ambiguous, or difficult, pronoun that exists in the original sentence. The purpose of the WS Challenge is not to test for simple disambiguation; rather, it is to use this task as a test of underlying intelligent capabilities.
The fields of commonsense reasoning for AI and NLU and of anaphora resolution and related disambiguation tasks can be explored from many perspectives. Nevertheless the author has focused particularly on the Winograd Schema Challenge based on the belief that this set of schemas provides a broad-based foundation by its inclusion of a wide variety of problem types that form a sort of "core set" of use cases for NLU.
Davis (2011a) describes the Winograd Schema Challenge as follows:
A Winograd schema is a pair of sentences that differ in only one or two words and that contain an ambiguity that is resolved in opposite ways in the two sentences and requires the use of world knowledge and reasoning for its resolution. The schema takes its name from a well-known example by Terry Winograd (1972):

The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.

If the word is ``feared'', then ``they'' presumably refers to the city council; if it is ``advocated'' then ``they'' presumably refers to the demonstrators.
The schema challenge sentences and test questions for the trophy and suitcase example are described in Levesque et al (2012) as follows:
The trophy doesn't fit in the brown suitcase because it's too big. What is too big?
Answer 0: the trophy
Answer 1: the suitcase

The obvious answer to a human is that it is the trophy that is too big. This answer is obvious to a person at least partly because humans are able to form a picture (a conceptualization) of the situation as it involves physical objects, processes, and causality. A human is also able to reason about the processes and the causality, i.e. the causal relationships, that are involved in such a situation.
Levesque et al (2012) describe a further aspect of the WS schema challenge for this particular schema:

… 4. There is a word (called the special word) that appears in the sentence and possibly the question. When it is replaced by another word (called the alternate word), everything still makes perfect sense, but the answer changes. …

This is where the fourth requirement comes in. In the first example, the special word is "big" and its alternate is "small;" and in the second example, the special word is "given" and its alternate is "received." These alternate words only show up in alternate versions of the two questions:
• The trophy doesn't fit in the brown suitcase because it's too small. What is too small?
Answer 0: the trophy
Answer 1: the suitcase

Levesque et al (2012) shed light on why this challenge is an appropriate one for purposes of testing whether or not a system that purports to do intelligent thinking and natural language comprehension is actually doing such thinking:
The claim is that doing better than guessing requires subjects to figure out what is going on: for example, a failure to fit is caused by one of the objects being too big and the other being too small, and they determine which is which.
Addressing this topic with another example involving the aforementioned city councilmen and demonstrators as originally conceived by Terry Winograd, they further state:
This was the whole point of Winograd's example! You need to have background knowledge that is not expressed in the words of the sentence to be able to sort out what is going on and decide that it is one group that might be fearful and the other group that might be violent. And it is precisely bringing this background knowledge to bear that we informally call thinking.
In his commentary on the difficulty of the "councilmen/demonstrators" example, Winograd (1972) states:
"We understand this because of our sophisticated knowledge of councilmen, demonstrators, and politicsno set of syntactic or semantic rules could interpret this pronoun reference without using knowledge of the world."
Solutions for the following W.S. schemas are presented here: schema #1: "the councilmen refused the demonstrators a permit", #2: "trophy that doesn't fit in a suitcase", #8: "the man could not lift his son", and #115: "Joe paid the detective". The present method uses a mixture of techniques in solving these schemas; it is not a "one size fits all" approach.
It will be shown that some disambiguation tasks can be adequately handled by an approach that relies on characteristics that are unique to common objects, based on a determination of the higher classes from which they can be said to derive functional properties (i.e. a suitcase is a member of a container class that can be "fitted into"). Other tasks involve a prior-stage determination of semantic roles (active or passive) due to the fact that multiple objects of the same class are involved ("the man could not lift his son": a person cannot lift another person). Some resolution problems require knowledge that associates behaviors with what are referred to as "nested behaviors" (or "chained behaviors"); the schema that contains "Joe paid the detective …" requires this approach. Finally, many pronoun resolution tasks require the application of a set of preliminary inferences (commonsense reasoning), using a generate-and-test approach that applies temporary situation representations of the described situation in order to test candidate antecedents for the pronoun. (This is demonstrated for the "advocate violence" variant of the "city councilmen and demonstrators" schema.)
The probability aspects are also addressed with the councilmen and demonstrators schema: it will be shown that the present method can handle this type of resolution problem using behavior classes, or rules, that incorporate a probability value. The example involves an examination of multiple behavior class rules that represent the act of "refusing something with fear as a causal feature", where the causal connection of this behavior to a prior (nested) behavior in one of the rules is compared with that of other (possibly multiple) rules.
Main Concepts
Entity Resolution Using a ROSS Ontology
A ROSS ontology is a repository that contains declarative information about objects and processes. Pronoun resolution involves a preliminary stage task that identifies, or links, antecedent words or phrases with items in the ROSS-based ontology. This is referred to herein either as "entity resolution" or as "class selection". (There is possible overlap with word-sense disambiguation which is viewed as describing a closely-related task that is not directly relevant to this method).
Note that there is no single authoritative ROSS ontology; ROSS ontologies are interchangeable. However a single ontology does exist that supports the pronoun resolution examples described in this document.
Ontology Scalability
To support scalability, the ontology that supports the procedures of the resolution method must be general purpose as a declarative representation of entities and features for a problem domain (in this case the commonsense reasoning domain). The ontology should not contain entities, attributes or features that are custom-designed for specific procedural pronoun resolution problems. This may at first appear to be the case for the "trophy and suitcase" schema solution; however, it will be shown that the ontology features for that particular schema are generally useful.
The rationale for the requirement of generally-useful ontology classes and attribute types is scalability: the method can only scale if it depends solely on a set of ontology definitions that have been created or derived apart from considerations of problem specificity.
The ROSS Instance Model
The ROSS instance model has an important role in supporting the pronoun resolution and inference processes. Declarative content of the input natural language text is used in order to build a central instance model that contains a semantic representation of all objects and processes that can be identified during the execution of the entity resolution processing task. (Note that a situation model is a type of instance model; the terms are used interchangeably in this document). The instance model thus contains a set of referents (referents can be either objects or processes, however for the examples of this paper they are objects). The information in the instance model is also tracked by an internal memory data structure called the "spanning information stack". Spanning information is tied into the instance model and is used for tracking referents with respect to their level of immediacy to the phrase or clause that contains the unresolved pronoun. The task of pronoun resolution can thus be re-stated as a task that involves a determination of which instance-model-based referent is indicated.
The instance model is not limited to containing objects or processes that are explicit in the input NL text: for instance exophoric pronouns refer to objects that are not explicitly described but that can be represented in an instance model. An example of a sentence with an exophor is "Nobody came to the beach party because it was too hot". Although this case may perhaps be interpreted in any of several ways 2 , it can be adequately addressed using a ROSS behavior class that represents weather phenomena (in the locality of the beach where the party would be held) via a behavior class specification that represents a collection of air molecules. As a commonsense representational problemnot a physics problem -an attribute type such as "RelativeTemperatureExperiencedByPersons" may be adequate as an abstraction for representing the state of being "too hot".
Features of the ROSS Behavior Class That Support the Resolution Process
ROSS behavior classes have a prominent role in providing a set of referent target options; these include objects that are represented by common nouns and nested behaviors that can be represented by either nouns or verbs.
The behavior class has the following features that support the resolution and inference processes:
Multiple time and space-related constituent elements within a single behavior class, where elements can be:
o Physical objects: what is actually stored is the state or states of an object (as specified using ROSS attributes); the physical object and its specified state are part of a wrapper class called a "populated object class".
o Nested behaviors (e.g. one of possibly many behavior classes for "refusing a permit request" can contain a nested behavior class that represents "fearing a harmful event").
Support for the representation of un-communicated objects (see the "beach party" example above). Behavior classes can involve a wide variety of objects and nested behaviors that are implicit in a situation: in addition to phenomena like the weather, these may include the ground (earth) and persons that are observers.
A representational construct called the "binder" that allows for the representation of the spatial and temporal relationships between the various objects that are part of a behavior. (A minimal data-structure sketch of these elements follows; Hofford (2014 (b)), "The ROSS User's Guide and Reference Manual", describes the behavior class in greater detail.)
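The following Python sketch shows one plausible data shape for these elements; the class and field names are our illustrative stand-ins, not the normative Star-language syntax:

    # Illustrative data shape for a ROSS behavior class (names are ours).
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class PopulatedObjectClass:
        object_class: str            # e.g. "PersonClass"
        role: str                    # "actor" | "actee" | "extra"
        states: dict = field(default_factory=dict)  # attribute -> value
        causal_feature: Optional[str] = None        # e.g. "too weak"

    @dataclass
    class BehaviorClass:
        name: str                                           # e.g. "RefusePermitBehavior"
        participants: list = field(default_factory=list)    # PopulatedObjectClass items
        nested_behaviors: list = field(default_factory=list)  # BehaviorClass items
        probability: float = 1.0     # supports the probability-based variant

    refuse = BehaviorClass(
        name="RefusePermitBehavior",
        participants=[PopulatedObjectClass("PersonGroupClass", "actor")],
        nested_behaviors=[BehaviorClass(name="FearHarmfulEventBehavior")])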
Definitions
The following terms are unique to the present method or have unique uses pertaining to the method.
meaning unit: a meaning unit ("ME") is a syntactic construct that consists of a subject, a predicate, and any adverbial modifier words, phrases, or clauses. The predicate contains verb-based expressions and includes objects (direct object and indirect object). Meaning units are recursive, and nested MEs may occur in any of several places. Generally speaking, a meaning unit is the equivalent of a clause. There is usually a one-to-one correspondence between a syntactic ME and a semantic predicate expression, described next. Examples of meaning units include: "Bob did walk the dog." and "because it was too big".

predicate expression: (part of semantic normal form (SNF)) a (semantic) predicate expression ("PE") is a semantic construct that centers around a single syntactic predicate expression (e.g. "did walk the dog"). PEs have arguments that have roles such as "actor" and "actee". Like MEs, PEs can be nested. The PE is explained in greater detail below.
Overview of the Algorithm
The pronoun resolution general algorithm is part of a larger algorithm called the semantic engine driver. The pronoun resolution general algorithm is driven by pronoun instances as they are encountered during execution of an entity resolution routine that itself is invoked within the control flow of the engine driver. When a pronoun is encountered, an attempt is made to resolve its referent and the antecedent word or syntactic phrase that corresponds to the semantic entity. (The referent/semantic entity is not limited to "objects"; e.g. it could be a process or a fact.)
The semantic engine driver processes a list of semantic normal form 3 predicate expression (PE) data structures that correspond to one or more input NL sentences 4. In a typical situation that involves an anaphor, the input NL text fragment consists of at least two consecutive meaning units: a main meaning unit and a second (current) meaning unit that contains one or more unresolved pronouns. Several tasks are applied; the processing described here starts with the main PE, which represents the main meaning unit.
Example:
Main meaning unit:
"The trophy doesn't fit in the brown suitcase" Current meaning unit:
"because it's too big"
The first task involves class selection (entity resolution) for all common nouns and proper nouns in the main PE (and possibly pronouns, based on earlier resolution results). In the example shown this would involve selecting a TrophyClass and a SuitcaseClass. The selected classes are then used for the second main task: instantiating object instances within the master internal instance model. For this example this creates a new "trophy" object instance and a "suitcase" object instance within the master instance model.
The third task also involves use of the main PE: it is a form of entity resolution referred to as behavior class selection, which selects a behavior class or list of behavior classes that are relevant for the situation. The behavior class selection process takes into account not only the verb word (e.g. "fit", "lifted", "paid", "refused") but also whether or not the event or action is negated, and whether or not the active, passive, and "extra" object instances match with respect to their class, or a higher class in an inheritance hierarchy, and with respect to their use in active, passive, or extra roles. The method is further capable of utilizing verb modification phrases (usually adverbs or adverbial phrases) in the input (e.g. "completely fit" or "tightly fit"; e.g. "walking quickly" versus "walking", and "trying to walk" versus "walking"); this input guides the behavior class selection process via a process that matches verb modification information against behavior class modification parameters.
Fourth task: once the list of behavior classes, each of which matches all search criteria, has been obtained, the NLU system is able to fully describe the situation of the main meaning unit ("The trophy doesn't fit in the brown suitcase"). (Note that each of the retrieved behavior classes in the list is equivalent with respect to the information that it provides for instance model generation.) The first behavior class in the list is used to generate new object instances in the master instance model. (An alternative approach is to use a higher behavior class rather than the first of multiple similar behavior classes.) The step of generating new object instances using the behavior class is called "behavior class application". (Details of this process are outside the scope of this document.)
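A sketch of the selection step described in the third task is shown below (in Python; the class hierarchy and the knowledge-base records are illustrative stand-ins, and verb modification matching is omitted):

    # Sketch: select behavior classes matching verb, negation, and roles.
    HIERARCHY = {"TrophyClass": "EnclosableObjectClass",
                 "SuitcaseClass": "ContainerClass"}

    def is_a(cls, target):
        while cls is not None:
            if cls == target:
                return True
            cls = HIERARCHY.get(cls)   # walk up the inheritance chain
        return False

    def select_behavior_classes(knowledge_base, verb, negated, role_to_class):
        matches = []
        for bc in knowledge_base:
            if bc["verb"] != verb or bc["negated"] != negated:
                continue
            if all(any(slot["role"] == role and is_a(cls, slot["class"])
                       for slot in bc["slots"])
                   for role, cls in role_to_class.items()):
                matches.append(bc)
        return matches

    kb = [{"verb": "fit", "negated": True,
           "slots": [{"role": "actor", "class": "EnclosableObjectClass"},
                     {"role": "extra", "class": "ContainerClass"}]}]
    print(select_behavior_classes(
        kb, "fit", True, {"actor": "TrophyClass", "extra": "SuitcaseClass"}))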
The transition to the next task involves completion of processing of the main PE and the start of the processing of the current PE.
The fifth task involves processing of the entity arguments of the current PE: this starts with entity resolution and instance model generation for any entity arguments that do not contain pronouns. (The trophy and suitcase example does not have any such entity arguments in the current PE). (Where the current PE only contains adjectival information (as in "too big"), this will get saved in the pronoun feature set data structure).
The sixth task involves processing of the entity arguments of the current PE that contain pronoun(s). (For the trophy and suitcase example, this involves processing of the entity argument containing the "it" of the current PE.) An early part of this process is entity resolution, which in turn involves the actual pronoun resolution. The pronoun resolution routine involves a search for the constituent element (usually an object instance) of the master instance model that matches the features of the unresolved pronoun, as they are specified or implied in the current meaning unit (and current PE). The search process is limited to those instance model object instances that are associated via pointers from a spanning information data structure. The search process involves examination of each of the following to find a match (note that all criteria that are provided by the text of the current meaning unit (represented in the pronoun feature set) are necessary for a match to succeed).
The pronoun feature set: all features of the unidentified object or event that is represented by the unresolved pronoun: this includes all of the following that exist. (Note: the features here are described using various possible trophy and suitcase sentences).
o an associated attribute or state if one exists (e.g. "because it is too big."). Matching against instance model: match this feature against an optional causal feature attribute for a populated object class within the behavior class that is associated with an object instance.
o a behavior of the meaning unit in which the pronoun is contained (e.g. "because the packing person did not push it hard enough") (it participates in a push behavior). Matching against instance model: match this feature against a nested behavior in the behavior class.
o the active/passive/extra role within the meaning unit. (e.g. "because it was not pushed hard enough" (it has passive role). Matching against instance model: match this feature against a PassiveParticipant flag that belongs to passive role populated object classes within a behavior class.
Instance model features: qualitative attributes/states, spatial/temporal relationship to other objects within the instance model, object frame class or higher class in the hierarchy, and active/passive role. These features may be determined by information in the instance model itself, or indirectly via an inspection of the behavior class that was used in generating the instance model objects from the main meaning unit. If the behavior class is involved, the populated object classes or nested behaviors are examined. Note that the instance model and spanning information structure may in some cases include object instances that are not explicit in the text: this is possible where a behavior class was applied to the main meaning unit and resulted in the generation of non-explicit object instances (e.g. the weather, e.g. the ground). In such cases, an exophoric pronoun will be matched against the object instance.
If the pronoun referent and the corresponding syntactic antecedent can be resolved, both the instance model object instance and its class are associated with the pronoun, and this newly-acquired information is added to the instance model. (Subsequent processing may also use the newly-acquired semantic information (pronoun class and object instance) during application of a behavior class for the current PE.) The new information that identifies the pronoun is also added to the spanning information data structure for possible subsequent use. If the pronoun is not resolved via the matching process described above, other resolution attempts can be made: these include matching based on gender or number. Finally, a default resolution mechanism is invoked if all other resolution attempts have failed; in this case a return code indicates that pronoun resolution did not succeed using the instance-model-based approach: this allows for subsequent processing to handle possibly-cataphoric pronouns.
The spanning information data structure keeps track of classes and instances for each main meaning unit/PE so that a current meaning unit/PE may refer to them. The spanning information stack extends this concept by keeping track of the classes and instances for up to n prior meaning units, where the value of n is chosen based on practical considerations.
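A compact sketch of the spanning information stack and the all-features match loop is shown below (in Python; representing referents as flat dictionaries is our simplification):

    # Sketch: spanning-information stack plus the feature-matching search.
    from collections import deque

    class SpanningInfoStack:
        def __init__(self, depth=3):
            self.frames = deque(maxlen=depth)  # one frame per meaning unit

        def push(self, referents):
            self.frames.appendleft(referents)

        def candidates(self):
            for frame in self.frames:          # nearest meaning unit first
                yield from frame

    def resolve_pronoun(feature_set, stack):
        for obj in stack.candidates():
            # ALL features supplied by the current meaning unit must match.
            if all(obj.get(k) == v for k, v in feature_set.items()):
                return obj
        return None  # caller may fall back to gender/number or cataphora

    stack = SpanningInfoStack()
    stack.push([{"class": "TrophyClass", "size_re_fitting": "too big"},
                {"class": "SuitcaseClass", "size_re_fitting": "too small"}])
    print(resolve_pronoun({"size_re_fitting": "too big"}, stack))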
Probability-Based Pronoun Resolution
The functionality of the method has been described apart from the use of probabilistic information that may be available. Both the entity resolution method and the pronoun resolution method can be supplemented by using probability fields within the classes. The probabilistic functionality for pronoun resolution will be explained and demonstrated as it has been applied to the "feared violence" variant of the councilmen and demonstrators schema.
Use of ROSS Situation Model to Support Question Answering
Once a situation/instance model is generated by the semantic engine, it can be used for a variety of follow-up tasks; a primary example is that of question answering. For instance, for the "man lifting son" schema, the follow up question "Who was so weak?" is processed by searching the instance model that was previously generated when the original sentence was processed.
Optional Representation of the Communicative Agent
The method can also incorporate an optional model that represents intelligent/communicating agents, information that is communicated, and cognitive information and processes. (See Appendix 1: Solution for "Trophy and Suitcase" Schema Using a Model of the Communicating Agent for full details). This optional approach involves generation of extra "meta" information in the instance model so that the reception of natural language input is represented as a process that involves one or more communicative agents (a "talker" or "talkers"). The communicated information is also represented in the instance model. The information is received by a self-agent (the "listener"), i.e. the NLU system, which can also be represented in the instance model.
The general pronoun resolution method described in this document does not include considerations of modeling of the communicative agent and cognition. It makes a set of default epistemological assumptions: that there is a shared ontology, and that the communicative agent adheres to a set of shared rules (e.g. about causality in the physical world) in the realm of cognition; this allows the tasks of entity resolution and pronoun resolution to be handled using an approach that deals directly with the input text and the semantics of the text.
Non-Objective: Representation of Deep Structure of Physical Objects
Some lines of research in the area of commonsense reasoning have focused on spatial representations and spatial reasoning. This approach is exemplified by Davis (2011b), wherein he describes the trophy and suitcase example. He states "The first task is to interpret the phrase, "because it was too large" in terms of its spatial content."
In his subsequent analysis, he emphasizes the spatial reasoning aspects of the problem.
The present method takes a different tack: it relies on class inheritance that involves a middle ontology that includes classes such as "container", or "two-sided enclosure", and "enclosable object". These middle ontology classes have attribute types such as "size relative to the process of fitting", from which can be derived attributes with values such as "too big" or "too small". Lower ontology objects like trophies and suitcases derive some of their features from the higher classes (e.g. container) that they are associated with via the inheritance mechanism. The anaphora resolution method is focused on the task of identifying the entity that is the most likely of the candidate referents.
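Under this middle-ontology design, the disambiguation for the trophy/suitcase pair reduces to a small lookup; the sketch below (with illustrative names, and simplifying the causal reasoning to a licensing table) shows the idea:

    # Sketch: which higher class licenses which "size relative to fitting"
    # value in a failed-fit situation (names are illustrative).
    LICENSED_VALUES = {
        "EnclosableObjectClass": {"too big"},   # the thing being fitted
        "ContainerClass":        {"too small"}, # the thing fitted into
    }
    LOWER_TO_MIDDLE = {"TrophyClass": "EnclosableObjectClass",
                       "SuitcaseClass": "ContainerClass"}

    def referents_for(value, candidates):
        return [c for c in candidates
                if value in LICENSED_VALUES[LOWER_TO_MIDDLE[c]]]

    print(referents_for("too big", ["TrophyClass", "SuitcaseClass"]))
    print(referents_for("too small", ["TrophyClass", "SuitcaseClass"]))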
The present approach does not fully emulate human thought processes as they are used to disambiguate pronouns; in some respects it is based only on useful abstractions. As such it does not handle all conceivable pronoun resolution cases: a method that employs a deep structure representation of the objects of a situation may indeed be necessary for many such cases. Such a method could be based on ROSS, and would represent the following aspects:
o Instantiation of object instances using values that represent compositional properties, e.g. "substance" properties. For instance, this approach involves representations of common objects like trophies and suitcases with respect to whether each unit-sized cubicle region (e.g. with dimensions the size of a millimeter) is solid or space. Further depth of analysis and representation involves questions regarding aspects such as flexibility of materials, or the lack thereof (e.g. a cloth suitcase may be flexible in various parts, thus allowing something that seems too big to actually fit into it).
o Specifications of the sizes of objects and of all distances between objects.
o The spatial orientation of all object instances.
o Behavior classes (causal rules) that redefine coarse-grained rules such as "fitting" in terms of fine-grained rules, such as a rule that describes that a solid-filled cuboid region at t=1 cannot occupy the same position as another (adjacent) solid-filled cuboid region at t=2 unless the other solid has "moved out of the way".
The author's view is that the ROSS method is a promising approach for achieving the deep spatial reasoning that would accomplish anaphora resolution using the above guidelines.
Comprehendor NLU System
Comprehendor is a natural language understanding (NLU) system that performs a variety of NLU tasks. While this paper describes a method and a main set of use cases for difficult pronoun resolution, it also describes a supporting set of use cases for ontology derivation and knowledge acquisition as performed by the Comprehendor system. The ontology derivation/knowledge acquisition capabilities are viewed as significant in their own right; they have provided a substantial boost in time-savings for purposes of tackling new disambiguation method use cases. (The ontology derivation and knowledge acquisition sub-system is a separate topic of research and development by the author as part of an ongoing effort to create a controlled natural language for ROSS.)
Pronouns: Types of Pronouns and Syntactic Locations of Pronouns
Types of Pronouns Handled by the Method
The class of difficult pronouns that is handled by the method includes the following types of pronouns:
Personal subjective: he, she, it, they
Personal objective: him, her, it, them

First and second person personal pronouns require somewhat different handling and are viewed by the author as part of the area of modeling the intelligent agent (not included in this document). Other classes of pronouns include possessive, demonstrative, Wh-pronouns, reflexive, and interrogative; resolution of some of these categories of pronouns does not yield to the present method.
Pronoun Syntactic Locations
The resolution of the third-person personal pronouns that are the focus of the present method involves a process of analysis that centers on (or "pivots" around) the imaginary dividing line between a pair of adjacent meaning units. Other configurations are handled as secondary cases; these include antecedents that are several clauses or sentences back, and exophoric pronouns. The following are the primary configurations for personal pronouns and their antecedents as they appear within the syntactic structure of natural language sentences:

Anaphora crossing meaning units: the pronoun is within a current meaning unit and the antecedent is in an earlier meaning unit. Variations include but are not limited to:
o A main clause (earlier) containing the antecedent, followed by an adverbial clause (current) that contains the pronoun as a (noun phrase) subject. (e.g. "The man could not lift his son because he was too weak.")
o A main clause (earlier) with antecedent, followed by an adverbial clause (current) that contains the pronoun as direct object, or that contains the pronoun within a prepositional phrase complement. (e.g. "The man could not lift his son because the building had collapsed on top of him.")

Cataphora crossing meaning units: the pronoun is within a current meaning unit and the antecedent is in a later meaning unit. (e.g. "When he arrived home, John went to bed.")

Anaphora within a meaning unit: the current meaning unit contains a sentence with a personal objective pronoun that refers to an antecedent within the same meaning unit. This structure is shown by these sentences: "The house's owners sold it last year." or "The owners of the house sold it.".
Semantic Normal Form (SNF)
This section contains a formal specification of the input needed by a semantic engine that implements the present method; this is referred to as semantic normal form (SNF) 5 . Semantic normal form is a syntax-independent formalization; it is an intermediate representation that stands between syntax and the ROSS instance model. SNF has been designed to facilitate instance model creation.
The data structure definitions here may be used for the creation of engine input data adapters; this allows for flexibility with respect to parsers that can be integrated into systems that use the present method. SNF is language-independent and thus allows use of the present method with a wide variety of natural languages.
The Predicate Expression ("PE")
The predicate expression ("PE") is the basic building block of semantic normal form 6 . A predicate expression consists of a predicate specifier list, a list of entity argument specifiers, or entity arguments, a list of attributive argument specifiers, or attributive arguments, and a list of modification specifiers, or modifiers. Entity arguments are typically associated with semantic entities that correspond to the syntactic subject, direct object, indirect object, and those that are represented by nouns or noun phrases within post-verb (predicate complement) prepositional phrases. Attributive arguments are words or phrases that represent attributes (usually representing an adjective used with a form of "to be"). Modifiers are associated with adverbial syntactic items, e.g. adverbs and adverbial phrases and clauses. Predicate expressions allow for indirect recursion, or nesting: an argument may itself be a predicate expression, a modifier may be a predicate expression or it may be a modification specifier expression that includes a predicate expression.
Semantic Role Labels
Predicate Specifier Roles
The predicate specifier has a predicate specifier role label. This label has one of the following enumerated values (this list is not exhaustive). (Actors/actees/extras are explained in the following section).
enumeration PredicateSpecifierRole {
    PredicateToBeAttributive,           // "The sky is gray."
    PredicateToBeIsA,                   // "A car is a vehicle."
    PredicateHasAVerb,                  // "A vehicle has wheels."
    PredicateToBeTakingEntityArgument,  // "The car is in the garage." (with actor and extra)
    PredicateVerbTakingEntityArgument   // "The man walked." (with actor)
                                        // "The man lifted his son." (with actor and actee)
                                        // "The ball was thrown." (with actee)
}

Note that syntactic concepts such as auxiliary/helper verb uses of "to be" are not present here. E.g. for the sentence "The ball was thrown", the predicate specifier verb word is the "throw" verb, the role is PredicateVerbTakingEntityArgument, and "was" is not stored in the data structure.
Entity Argument Roles
An entity argument has an entity argument role label. This identifies the argument as actor (active, or causative role), actee (passive role) or extra (neither active nor passive role).
enumeration EntityArgumentRole {
    Actor,
    Actee,
    Extra
}

Entity argument roles have their syntactic origination in syntax categories such as subject and direct object; however, they are a reflection of the need to represent the phenomenon of causality. The determination of an entity argument role may involve a syntactic analysis of a prepositional phrase: e.g. the sentence "The man was bitten by the dog" gets processed to generate two arguments: "dog" gets the actor role, and "man" gets the actee role. The extra role is for entities that are neither active nor passive. Extra role entities often come from prepositional phrases that are complements of the main verb or predicate; an example of an entity with the extra role is "building", in "She walked away from the building.".
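The mapping from syntax categories and voice to entity argument roles can be sketched as follows. (This fragment is illustrative only and is not code from the actual engine; SyntacticCategory and AssignEntityArgumentRole are hypothetical names, while EntityArgumentRole is as enumerated above.)

// Illustrative sketch: mapping from syntactic category and voice to an
// entity argument role. SyntacticCategory is a hypothetical helper type.
enum SyntacticCategory { Subject, DirectObject, ByPhraseObject, OtherPrepObject };

EntityArgumentRole AssignEntityArgumentRole(SyntacticCategory category, bool passiveVoice)
{
    switch (category)
    {
        case Subject:
            return passiveVoice ? Actee : Actor;  // "The man was bitten ..." -> man = Actee
        case DirectObject:
            return Actee;                         // "... lifted his son" -> son = Actee
        case ByPhraseObject:
            return passiveVoice ? Actor : Extra;  // "... by the dog" -> dog = Actor
        default:
            return Extra;                         // "... away from the building" -> building = Extra
    }
}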
For the "man could not lift his son" schema this results in the following assignments of actor/actee/extra roles:
Extra Sub-Roles
Extra sub-roles are used for entities that have an association with the semantics of a predicate that is neither active nor passive. Many of the sub-roles are directly derived from prepositions. E.g. for the sentence "The man drove the car around the block.", the word "block" has the extra role and the Around sub-role.
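The full set of extra sub-roles is not enumerated in this document; a minimal hypothetical sketch, in the same style as the enumerations above, might be (only Around is taken from the example; the other values are illustrative placeholders derived from prepositions):

enumeration ExtraSubRole {
    Around,    // "The man drove the car around the block."
    From,      // illustrative
    To,        // illustrative
    With       // illustrative
}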
Relative/Subordinate Clauses
Relative/subordinate clauses do not supply entities to the predicate expression in which they are contained; rather, they consist of: a) a possible preposition, e.g. "from", b) a Wh-pronoun, e.g. "who", and c) a nested predicate expression that has its own set of entity arguments. Examples include "The sheriff arrested the man who had held up the bank.", and "They followed the stream through the woods to the spring from which it had its source." The nested predicate expression is handled differently from entity arguments that do not contain nested PEs: it is processed by an indirect recursive call as a PE that is part of the overall syntactic sequence of PEs (cf. PredicateExpressionPointerList in section 6, Semantic Engine Driver: Data Structures and Control Flow).
Attributive Argument Roles
Predicate expressions with predicates having the PredicateToBeAttributive role or the PredicateToBeIsA role have attributive arguments. Examples include "The sky is blue", "Mary is seven years old", and "A car is a vehicle". Examples from the Winograd schemas include "it was too big", and "he was so weak".
Other Argument Categories: Abstractions That Represent Aspects
(This section is a draft/under review.) This is a list of categories of arguments that do not get handled in the same way as other arguments, since they are not instantiated within instance models as objects or as behaviors:

Aspect types: e.g. color (example: "The intense color of the sky dazzled the observers.")
Aspects: e.g. "blueness" (example: "The blueness of the sky extended to the horizon.")

One option for handling such abstractions is reification of the aspect type or aspect in the ontology and in instance models.
Other Enumerated Types
The SyntacticRole type is used to represent the syntactic origin (currently limited to use for noun phrases). A further enumeration marks hypothetical usage (cf. PredicateExpressionHypotheticalUsage, below); it includes the value:

    Hypothetical    // e.g. "if an object is dropped then it will fall"
The Structure of the Predicate Expression
The structure of the predicate expression is described here using a hybrid form that mixes data structure pseudo-code with BNF. Lower-level items are described after the larger items in which they are contained. Optional items are bracketed with '[' and ']'. The order of items within a structure is not important unless specifically indicated or implicit within a BNF expression. A list may contain 0, 1 or multiple items unless otherwise noted. (Note: "predicate unit" is sometimes used as a synonym for "predicate expression"). (This is a high-level view of SNF and many lower-level items such as PrepositionalPhraseComplement are not defined in detail).

SpecifierList -> Specifier // e.g. "the", "this", "first"
    | Specifier SpecifierList ;
QualifierList -> Qualifier // e.g. "angry", "old", "green" | Qualifier QualifierList ;
PostnominalModifierList -> PostnominalModifier // e.g. "in the garage" | PostnominalModifier PostnominalModifierList ;
PostnominalModifier -> PrepositionalPhrase | AdjectivePhrase ; // Note: each of the following may contain nested PEs:
PrepositionalPhrase -> // (not shown) e.g. "from which its name is derived"
BoundRelativeClause -> // (not shown) e.g. "the man who drives the bus"

AdverbWord -> literal ; // e.g. "quickly"

AdverbPhrase -> … ; // e.g. "early in the morning"

AdverbialExpression -> // e.g. "while it was still dark", "when it is not raining"
    { Wh-Word | AdverbPhraseIntroductoryWord PredicateExpression }

Wh-Word -> "while" | "when" | … ;

AdverbPhraseIntroductoryWord -> "because" | … ;
SNF Example
Semantic normal form can be illustrated by a predicate expression that represents the following sentence: "The city councilmen refused the demonstrators a permit because they feared violence."
PredicateExpression {
    PredicateSpecifierList
        PredicateSpecifier {
            MainVerbWord ("refused")
            MainVerbSemanticRole (PredicateVerbTakingEntityArgument)
            DiscourseContext (DeclarativePastSimple)
        }
    EntityArgumentSpecifierList (
        EntityArgumentSpecifier // "the city councilmen"
        {
            EntityDesignatorList (
                EntityDesignator {
                    NounPhrase {
                        SpecifierList
                            Specifier ("the")
                        QualifierList
                            Qualifier ("city")
                        NounHeadWord ("councilmen")
                    }
                }
            );
            EntityArgumentSemanticRole (Actor)
            PredicateOrdinal (0) // refers to "refused"
        }
        EntityArgumentSpecifier // "the demonstrators"
        {
            EntityDesignatorList (
                EntityDesignator {
                    NounPhrase // (detail not shown)
                }
            );
            EntityArgumentSemanticRole (Actee)
            PredicateOrdinal (0)
        }
        EntityArgumentSpecifier // "a permit"
        {
            EntityDesignatorList (
                EntityDesignator {
                    NounPhrase // (detail not shown)
                }
            );
            EntityArgumentSemanticRole (Extra)
            PredicateOrdinal (0)
        }
    ); // EntityArgumentSpecifierList
    ModificationSpecifierList
        ModificationSpecifier // "because they feared violence"
        {
            AdverbialExpression {
                AdverbPhraseIntroductoryWord ("because")
                // Nested PE:
                PredicateExpression {
                    PredicateSpecifierList
                        PredicateSpecifier {
                            MainVerbWord ("feared")
                            MainVerbSemanticRole (PredicateVerbTakingEntityArgument)
                            DiscourseContext (DeclarativePastSimple)
                        }
                    EntityArgumentSpecifierList (
                        EntityArgumentSpecifier // "they" (actor role)
                        {
                            EntityDesignatorList (
                                EntityDesignator {
                                    NounPhrase {
                                        NounHeadWord ("they")
                                    }
                                }
                            );
                            EntityArgumentSemanticRole (Actor)
                            PredicateOrdinal (0) // refers to "feared"
                        }
                        EntityArgumentSpecifier // "violence" (actee role)
                        {
                            // (not shown)
                        }
                    ); // EntityArgumentSpecifierList
                } // PredicateExpression
            } // AdverbialExpression
            SyntacticPosition (Final)
            PredicateOrdinal (0) // refers to "feared"
        } // ModificationSpecifier
} // PredicateExpression
NLU System Architecture and Data Flow

Figure 1 shows the high-level architecture/dataflow diagram for an NLU system that implements the present method. The diagram is included as background that shows the context wherein the anaphora resolution method operates.
A parser subsystem will include a number of subsystems that include lexical analysis, sentence segmentation, morphological analysis, part of speech tagging (possibly optional depending on the parser capabilities), and a parsing component. The parser subsystem generates a list of syntax trees (or syntactic tree-like data structures) that are processed by a SNF data adapter to create a list of SNF predicate expressions (PEs) that are usable by the engine.
The NLU semantic engine subsystem processes the list of SNF predicate expressions in order to create an internal instance model. The engine performs the various tasks that accomplish the entity resolution (class selection) and pronoun resolution/disambiguation. The engine uses the internal instance model both for pronoun resolution and for cases where it performs the embedded commonsense reasoning.
Input to a Parser: Communication Unit List
This section describes the structure of the natural language text input in its original form, prior to conversion to semantic normal form. The root element is Document. A Document is defined as a communication unit list. A communication unit may be a sentence or some other non-sentence textual expression. Non-sentence textual expressions are useful for handling strings of text containing non-sentence text, e.g. news article headlines, date and time stamps, email addresses, etc.
Document ->
CommunicationUnitList ;
CommunicationUnitList -> CommunicationUnit | CommunicationUnit CommunicationUnitList ;
CommunicationUnit ->
      SingleWordOnLine
    | TwoWordSequenceOnLine // e.g. "Chapter 1"
    | DateAndTime
    | EmailAddress
    | WebAddress
    | Sentence ;
The following are the grammatical elements under sentence.
Sentence -> SemicolonExpressionList FullStop | MeaningUnitList FullStop ;
SemicolonExpressionList -> SemicolonExpression | SemicolonExpression SemicolonExpressionList ;
PredicateExpressionOrderedList -> PredicateExpression | PredicateExpression CoordinatingConjunction PredicateExpressionOrderedList ;
CoordinatingConjunction -> 'and' | 'or' | 'but' | ... ;
SemicolonExpression -> PredicateExpressionList ';' PredicateExpressionList ;
FullStop -> '.' | '!' | '?' ;
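As an illustration of these productions, a sentence containing a semicolon, e.g. "The man walked; the dog barked." (an illustrative sentence, not one of the schemas), derives as follows:

Sentence
    -> SemicolonExpressionList FullStop
    -> SemicolonExpression '.'
    -> PredicateExpressionList ';' PredicateExpressionList '.'
       // "The man walked" ';' "the dog barked" '.'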
Data Adapters (Syntax-To-Semantic-Normal-Form Converters)
A data adapter that converts syntactic data, usually consisting of a list of parser-generated syntax trees, to semantic normal form is called an SNF data adapter, or SNF converter. This process provides input in a form that is usable by a semantic engine that implements the present method using SNF.
Example: Stanford Parser Output / Data Adapter Input
The following syntax tree (context-free phrase structure grammar representation) was generated by the Stanford parser (online demo at http://nlp.stanford.edu:8080/parser/index.jsp). (This is provided as an example of possible input to an SNF converter).
(ROOT (S (NP (DT The) (NN trophy)) (VP (VBZ does) (RB n't) (VP (VB fit) (PP (IN in) (NP (DT the) (JJ brown) (NN suitcase))) (SBAR (IN because) (S (NP (PRP it)) (VP (VBZ 's)
(ADJP (RB too) (JJ small))))))) (. .)))
Example: Phrase Structure Parser Output / Data Adapter Input
The Comprehendor NLU system includes an English phrase structure parser sub-system. This system generates a syntax tree for the trophy and suitcase example sentence (the "too big" variant); the tree is not reproduced here. The grammar for this parser is not shown; however, most of the items in such a tree have descriptive names that convey their meaning.
Semantic Engine
The semantic engine tasks include the following, which are particularly relevant for pronoun resolution.
Multi-Stage Process: Main Predicate Expression and Current Predicate Expression
A processing task of instantiating object instances will usually take place prior to that of pronoun resolution (exceptions involve sentences that have pleonastic or exophoric pronouns). This task establishes pointers to entity classes and instances within the spanning information data structure. The main predicate expression and the current predicate expression are defined as follows. (Note that, syntactically, the main predicate expression may appear after the current predicate expression, as is the case where cataphoric pronouns are involved).

Main predicate expression: the predicate expression that contains semantic information about previously-resolved entities. It is described by a spanning information data structure. The master internal instance model contains both structural parent object instances and component object instances that were generated from the entity arguments of the main predicate expression. E.g. for the councilmen and demonstrators schema, object instances include an object instance for each of councilmen, demonstrators, and permit. The master instance model also contains instantiations of the main meaning unit predicate-based behavior, based on a higher behavior class (the first found behavior class can be used here as it contains commonly-shared information for all behavior classes). E.g. for the councilmen and demonstrators schema, object instances will have had state attribute values set based on the definition of a "refusing something due to fear" behavior class.

Current predicate expression: contains one or more unresolved pronouns. It is described by the pronoun feature set data structure rather than the spanning information data structure. The master internal instance model contains partial information based on common nouns, proper nouns, any resolvable pronouns (e.g. possessive pronouns), and verb information that can be determined prior to resolution of the unresolved pronoun.
The processing of the current predicate expression by the semantic engine is the main focus of the algorithms of the present method; however several other engine preparation tasks are also described.
Entity Resolution (Class Selection)
The task that involves selection of a relevant class from the ontology for a noun head word or noun phrase is referred to herein as "entity resolution"; this may to some extent overlap with the common usage of "word sense disambiguation". The syntactic and SNF (semantic) information about a word or phrase may provide sufficient constraints to allow unambiguous resolution (e.g. where "man" contained in a noun phrase constrains the ontology lookup process to object frame classes; behavior classes in the ontology are not searched). Other aspects of entity resolution that are important but not addressed here include:
examination of the immediate in-sentence context to determine attribute types and behaviors that apply to one candidate class but not another or others
probabilistic approaches
commonsense inferences that can be triggered

The entity resolution task resolves a word or phrase by associating it with a class in the ontology. This involves a set of assertions - i.e. the features of the class - about the entity that is described by the word or phrase. Refer to Hofford (2014 (b)) "The ROSS User's Guide and Reference Manual" for detail about the object frame class.
Generation of Internal Instance Model
The semantic engine executes two main tasks that are part of master internal instance model generation. They are:
Object instance instantiation: this occurs after entity resolution/class selection. Structural parent object instances and component object instances are inserted into the master instance model.

Behavior class selection and application: this usually occurs after object instance instantiation and involves the application of a behavior class in order to generate additional information that can be determined. A typical case of behavior class application, as it would apply to the meaning unit "Bob hit the other man", would add additional attribute information to objects in the instance model (Bob, other man), and would also generate a new structural parent object instance at a separate point along a time-line: this new structural parent instance will hold cloned copies of the object instances (Bob, other man), and these object instances will have state attribute values that represent the results of the "hit" action.
Other Engine Tasks
The following tasks are also performed by the engine. Details about pronoun resolution and the embedded inference process will be provided in the following sections.
Pronoun resolution: this takes place within entity resolution but also performs object instance instantiation (object instances based on pronouns can be instantiated as soon as the pronoun antecedent is resolved, therefore this task is incorporated into the final stages of the pronoun resolution task).

Embedded inference/commonsense reasoning: this is performed when additional instance model information is needed. Types of inference include the following (triggered inference is not covered in this document):

o Inferences that can be triggered by information that is gained as the input natural language text is processed. E.g. a sentence in a story provides information that gets used to create an instance model, from which an inference can be drawn: "As the mule slowly descended the rocky trail, suddenly it lost its footing and fell into the vast open space below". (Triggered inference: the mule got injured or killed).

o Inferences that are used within the pronoun resolution routine in order to handle the pronoun resolution task where other simpler instance-model-based attempts have failed. This is described in a subsequent section and has been applied in order to solve Winograd schema #1 (councilmen and demonstrators) for the "advocate violence" variant.

Generation of external instance model: this is an optional step that involves generation of an external XML-based instance model.

Question answering: the Comprehendor NLU system stores instance model information, which is used for follow-up question answering.
Semantic Engine Driver: Data Structures and Control Flow
Overview
A semantic engine sub-system for the present method will have a high-level function that is referred to as the engine driver. The EngineDriver() function processes an input Document, which is a list of communication units. The engine driver branches to an appropriate subroutine, depending on whether the communication unit is part of a sentence or is of another communication unit type, e.g. a web URL or a standalone email address. When it has finished processing all communication units, the engine driver invokes GenerateOutputInstanceModels() in order to generate the external (XML) version of the internal instance model and any other selected output forms (e.g. a bullet-point summary of a story).
Data structures and code from the Comprehendor NLU system are used to illustrate engine driver concepts. Comprehendor is a C++ implementation; the following sections use C++ or pseudo-code. (Note: data structures shown here and the functions that follow are not intended as full listings of the actual code in the Comprehendor system).
Data Structure for Master Token List
The engine requires an input master list of lexical tokens: the main data structure that is part of the implementation of this token list, referred to in the following sections, is "TokenListNode" (a list node class).
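A minimal sketch of a token list node, with illustrative member names (the actual Comprehendor definition is not reproduced in this document), is:

#include <string>

// Illustrative sketch only; the member names here are hypothetical.
class TokenListNode
{
public:
    std::string    tokenText;           // the lexical token, e.g. "trophy"
    TokenListNode *pNext;               // next node in the master token list
    std::string    disambiguationInfo;  // set by the engine as it is determined,
                                        // e.g. the resolved antecedent for a pronoun token
};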
Data Structures and Data Types: Input to the Engine
This section describes the following data structures and their supporting definitions: communication unit, sentence, predicate expression, predicate specifier, entity argument specifier, and modification specifier.
A communication unit data structure has the following structure: A sentence has the following structure; note that semicolon expressions are handled as toplevel expressions within a sentence as they are similar to full sentences. (This structure also shows flags relating to paragraphs and quotations that have uses that are not described in detail here). The DiscourseContext enumerated type is defined as follows. A DiscourseContext value roughly corresponds to the grammatical concepts of tense and aspect, with further divisions that are partly based on mood. ("declarative" and "interrogative", although both are indicative, are separated here due to the needs of the semantic engine). The predicate expression list is not shown; a predicate expression has the following structure. The PredicateExpressionPointerList is not shown: this is a list of pointers to predicate expressions (including the current PE) that represents the original syntactic order of meaning units. For instance, given the sentence "The man could not lift his son because he was so weak.", the pointer list points to two PEs that represent the two meaning units that are shown here:
(head of list) "The man could not lift his son"    // pointer to this data object
(next/tail)    "because he was so weak"            // pointer to a PE that is nested within the ModificationSpecifierList
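Although the actual definition is not shown, a one-line sketch of the idea, assuming a standard list container, is:

#include <list>
class PredicateExpression;  // forward declaration
// Preserves the original syntactic order of meaning units (illustrative only):
typedef std::list<PredicateExpression*> PredicateExpressionPointerList;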
The TokenListNode data structure is not shown: this is for a master token list that is generated by a lexical analyzer; token list nodes also store disambiguation information as it is determined by the engine. The pFirstTokenListNode pointer points into the master token list at the location that corresponds to the start of the predicate expression (this is usually the start token of a sentence).

The PredicateSpecifierRole enum is as follows.
enum PredicateSpecifierRole
{
    PredicateToBeAttributive,           // "The sky is gray."
    PredicateToBeIsA,                   // "A car is a vehicle."
    PredicateCapability,                // "can"
    PredicateHasAVerb,                  // "A vehicle has wheels."
    PredicateToBeTakingEntityArgument,  // "The car is in the garage."
    PredicateVerbTakingEntityArgument   // "The man walked."
};
The entity argument specifier list is not shown, and the entity argument specifier itself is not reproduced here. Note that the entity argument specifier may contain a predicate expression, allowing for recursivity/nesting of predicate expressions. (Refer to the section above on Semantic Normal Form for the definition of the EntityArgumentSemanticRole enumerated type.) The entity designator list, the entity designator, and the attributive argument specifier data structures are likewise not shown here.

The modification specifier list is not shown; the modification specifier is as follows. Note that the modification specifier may contain a predicate expression, allowing for recursivity/nesting of predicate expressions.
class ModificationSpecifier
{
    AdverbialPhrase *pAdverbialPhrase;          // e.g. "quickly"
    AdverbialExpression *pAdverbialExpression;  // e.g. "I awoke while it was still dark."
    PredicateExpression *pPredicateExpression;  // e.g. "The snows came early that year,
                                                //       driving the bears into an early hibernation."
    SyntacticPosition syntacticPosition;        // e.g. Leading, PreVerb, PostVerb, Final
    int ordinalPredicate;                       // refers to a predicate specifier
};
Data Structures and Data Types: Internal/Operational
The ObjectInstance structure is the main data structure that is used for information within an internal instance model. (The listing is not reproduced here; it shows one of two alternatives (a fixed-length array) for storing attributes and relationships - the second approach uses a list.)

The InstanceStructure member contains embedded objects: this is of particular importance for instance models, insofar as a structural parent object instance is only a "holder". (Structural parent object instances exist at the top level in an instance model as members of Contexts, described later.) For instance, a structural parent instance based on the EverydayObjectStructuralParentClass may contain an object instance for a HouseClass and a DrivewayClass. An analogy that may help illustrate these concepts is the diorama: a structural parent object instance is like a diorama that is frozen at one instant of time; the object instances that it contains (e.g. a house, a car) are like objects in a diorama. (The representation of time is accomplished by the use of the Context.)
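A hypothetical sketch of ObjectInstance, based on the description above, is the following (the member names other than InstanceStructure are illustrative, and the fixed-length array alternative is shown):

// Illustrative sketch only; not the actual Comprehendor definition.
class ObjectInstance
{
public:
    ObjectFrameClass *pObjectFrameClass;            // class from which the instance derives
    Attribute         attributes[MAX_ATTRIBUTES];   // state attributes (fixed-length array)
    Relationship      relationships[MAX_RELATIONSHIPS];
    InstanceStructure instanceStructure;            // embedded object instances; used when
                                                    // this instance is a structural parent
};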
The ObjectInstanceSemanticWrapper structure is used by the spanning information data structure. The BehaviorClassesPerMainVerbWrapper stores a main verb word and the associated behavior classes: e.g. given the sentence "The man could not lift his son or carry his daughter because he was too weak.", this stores information to relate a list of behavior classes for each predicate separately. There are several ActivePointer structures that are maintained per predicate expression and that are used internally by the engine: they are not shown here.

The SpanningInformation structure contains information that normally corresponds to the previous predicate expression; it stores pointers to object instances in the master internal instance model. The SpanningInfoStack allows for the storage of multiple spanning infos. SpanningInformation pointers (referred to as "spanning infos") are pushed onto the stack in an order that is dependent on the control strategy for processing of PEs (by default this order reflects the order of original syntactic meaning units). The engine uses a stack trim operation (not shown) in order to limit the size of the stack: this is based on the heuristic assumption that there is a limit to the number of prior sentences/clauses that may intervene between an antecedent referent and an anaphoral pronoun. E.g. the stack trim may be invoked so that the size of the stack stays in the range of 10 to 15 meaning units.

The PronounFeatureSet data structure stores all information that can be gathered about the pronoun and its context within the clause in which it appears. Several supporting enumerated types are shown first:
//----------------------------------------------------------------------------
// enum PredicateExpressionTemporalOrderIndicator
//----------------------------------------------------------------------------
enum PredicateExpressionTemporalOrderIndicator
{
    PredicateExpressionTemporalOrderIndicatorFollowing,
    PredicateExpressionTemporalOrderIndicatorPreceding,   // e.g. for "after"
    PredicateExpressionTemporalOrderIndicatorUndetermined,
    // PredicateExpressionTemporalOrderIndicatorNONE
};

//----------------------------------------------------------------------------
// enum PredicateExpressionHypotheticalUsage
//----------------------------------------------------------------------------
// (not shown)

//----------------------------------------------------------------------------
// struct PronounFeatureSet
//----------------------------------------------------------------------------
// (not shown)
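The full PronounFeatureSet structure is not reproduced in this document; the following is a hypothetical sketch of the kinds of fields it holds, inferred from the uses described in later sections (all field names are illustrative):

#include <string>

// Illustrative sketch only; not the actual definition.
struct PronounFeatureSet
{
    std::string   pronounWord;                 // e.g. "they"
    SyntacticRole syntacticRole;               // syntactic origin of the pronoun's noun phrase
    std::string   causalFeatureAdjectiveWord;  // e.g. "weak" (from "too weak"), if present
    std::string   causalFeatureVerbWord;       // e.g. "refused", if present
    bool          isExplanatoryClause;         // true for a "because" clause
    PredicateExpressionTemporalOrderIndicator temporalOrderIndicator;
};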
Data Structures and Data Types: Instance Model
The instance model is implemented as a ContextList. (The Context data structure and the ContextList structure are not reproduced here; the map that contains all top-level structural parent instances is shown below.) The MapObjectInstances structure stores structural parent object instances, each of which is indexed by a temporal attribute value.

// MapObjectInstances:
//
// - the wrapper class is not shown; the map of object instances contains
//   ObjectInstance pointers:
//
typedef map<string, ObjectInstance*> MapTypeObjectInstances;
typedef pair<MapTypeObjectInstances::iterator, bool> retvalMapTypeObjectInstances;

//--------------------------------------------------------------------------------------------------------
// Map that contains all structural parent instances, indexed by time attributes
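As an illustration of the intended usage (this fragment is a sketch, not code from the system), a structural parent instance can be inserted into the map keyed by its temporal attribute value, such as "T01":

MapTypeObjectInstances mapStructuralParents;
ObjectInstance *pStructuralParent = new ObjectInstance(); // detail not shown
// Index the structural parent instance by its temporal attribute value:
retvalMapTypeObjectInstances ret =
    mapStructuralParents.insert(make_pair(string("T01"), pStructuralParent));
if (!ret.second)
{
    // an instance already exists at this time point
}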
Engine Driver Algorithm
The following is the control flow for the engine driver that iteratively processes all communication units and does several other tasks. During processing, the engine driver builds the master internal instance model, generates spanning information within the spanning info stack, and also adds disambiguation information, as it is determined, to the master token list that is associated with the syntactic and semantic normal form information. It also generates an external instance model, and any other external information artifacts (e.g. a bulleted summary) that have been specified.

ProcessAllCommunicationUnits() is outlined next. For each communication unit that is a sentence, this routine extracts a pointer to the first predicate expression in the list and passes it to the function ProcessPredicateExpression(). The iterative processing performs the same task for all subsequent predicate expressions within the same list within the sentence.
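The actual listings are not reproduced in this document; the following pseudo-code sketch conveys the control flow described above (the container members and the name ProcessNonSentenceCommunicationUnit are hypothetical):

// Pseudo-code sketch of the engine driver control flow (illustrative only).
void ProcessAllCommunicationUnits(Document *pDocument)
{
    for (CommunicationUnit *pCU : pDocument->communicationUnitList)
    {
        if (pCU->type == CommunicationUnitTypeSentence)
        {
            // Extract a pointer to the first predicate expression, then perform
            // the same processing for all subsequent PEs in the sentence's list:
            for (PredicateExpression *pPE : pCU->pSentence->predicateExpressionList)
                ProcessPredicateExpression(pPE);
        }
        else
        {
            ProcessNonSentenceCommunicationUnit(pCU); // e.g. web URL, email address
        }
    }
}

void EngineDriver(Document *pDocument)
{
    ProcessAllCommunicationUnits(pDocument);
    GenerateOutputInstanceModels(); // external (XML) instance model and other outputs
}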
ProcessPredicateUnitIndicative processes a predicate expression as follows. The details of the control strategy for populating and maintaining the spanning information stack are not provided here; by default, the spanning information stack items get pushed onto the stack in the order in which they appear within the input natural language text. The PredicateExpression::PredicateExpressionPointerList is used by default to support population of the spanning info stack using a sequence that corresponds to the syntactic order of the original input meaning units. For instance, a leading adverbial clause such as "Before the first light dawned" will generate a spanning info that gets pushed onto the stack before the subsequent main clause within the same sentence. (E.g. full sentence: "Before the first light dawned, Joe ran several miles.").
Because the main control strategy within this function is iterative, and indirect recursive calls are used to process nested PEs, there is a limit to the levels of nesting in the input that are processed recursively. Examples:
"The trophy that Tim won doesn't fit in the suitcase."one level of nesting, processed by a recursive call. "The trophy that Tim won for the contest that he entered last year doesn't fit in the suitcase."two levels of potential recursion, however the second bound relative clause is not handled by a nested recursive call; rather "that he entered last year" is processed in the iteration sequence since it is pointed to by a node in the PredicateExpressionPointerList. ProcessEntityArgument () -this function processes information that was originally part of the syntactic subject, direct object, indirect object and any other entities within prepositional phrase complements within the meaning unit that corresponds to the predicate expression. The entity resolution (class selection) tasks for the actors, actees, and extras of the predicate unit are performed within ProcessEntityArgument (). For all noun phrases except those that contain unresolved pronouns, internal instance model instantiation takes place. ProcessPronounEntityArguments () is similar to ProcessNonPronounEntityArguments(); it calls ProcessPronounEntityArgument (), which is similar to ProcessEntityArgument(), and only processes entity arguments that contain pronouns. (Detail not shown).
ProcessModificationSpecifier () is shown next; the main processing task is to handle the nested predicate expression, for which ProcessPredicateExpression() is invoked.
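A sketch of this routine (illustrative only; not the actual listing) is:

// Pseudo-code sketch of ProcessModificationSpecifier().
void ProcessModificationSpecifier(ModificationSpecifier *pMS)
{
    if (pMS->pPredicateExpression != NULL)
    {
        // The nested PE, e.g. for "because he was so weak", is handled by an
        // indirect recursive call:
        ProcessPredicateExpression(pMS->pPredicateExpression);
    }
    // Simple adverbial phrases (e.g. "quickly") are handled directly (not shown).
}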
General Pronoun Resolution Method
This section explains the general pronoun resolution method. This method has been used to process the following Winograd Schema Challenge schemas:
trophy and suitcase schema (this schema is processed using the general pronoun resolution method as it is supplemented by the modeling of the communicative agent; see Appendix 1: Solution for "Trophy and Suitcase" Schema Using a Model of the Communicating Agent)
man cannot lift his son
Joe paid the detective
city councilmen refusing a permit because they feared violence
A subsequent section describes the embedded commonsense reasoning method that is invoked during execution of the general pronoun resolution method when a situation is encountered that cannot be solved by the general method. The embedded commonsense reasoning method is applied when resolving this schema:
city councilmen refusing a permit because they advocated violence.
Entity Resolution (Class Selection)
This function is called EntityResolutionRoutine (). For situations where a pronoun exists in any meaning unit/predicate expression other than the first within the input text, this function will get invoked twice:
During processing of the main predicate expression
During processing of the current predicate expression

The functions to process common nouns, proper nouns, and the existential "there" are not described here. ProcessPronoun() is described next.
ProcessPronoun() -> PronounResolutionGeneralMethod()
The ProcessPronoun() routine attempts to determine both the referent and the antecedent for a pronoun. Determination of the referent involves identifying both the object frame class and the specific instance model object instance. The spanning information stack is used throughout the functions that are invoked by ProcessPronoun(). An ActivePointers data object is also used (it is similar to the spanning information structure but it represents the current meaning unit). ProcessPronoun() is not shown here: its primary function is to pass control to PronounResolutionGeneralMethod(), which is the main driver and worker routine for pronoun resolution. (The following functions show the C++ return types and several instances of error codes, such as E_SUCCESS and E_NOTFOUND).
SetPronounResolutionInformationInMasterTokenList ()
SetPronounResolutionInformationInMasterTokenList() inserts the results of the pronoun resolution into the master token list. (The following feature is not yet implemented: inserting all words for a noun phrase, not just the noun head word (e.g. "trophy").)
ExploratorySearchUsingOneSpanningInfo()
ExploratorySearchUsingOneSpanningInfo () iterates to test each candidate object instance in the instance model against the pronoun feature set. If multiple behavior classes are a match, the probabilities of the nested behaviors of each are compared in order to select the nested behavior and the object that have the highest probability.
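A sketch of the candidate iteration (illustrative only; the member name candidateObjectInstances is hypothetical) is:

// Pseudo-code sketch of ExploratorySearchUsingOneSpanningInfo().
int ExploratorySearchUsingOneSpanningInfo(SpanningInformation *pSpanningInfo,
                                          PronounFeatureSet *pPronounFeatureSet)
{
    // Actor, actee, and extra object instances are all tested as candidates:
    for (ObjectInstance *pCandidate : pSpanningInfo->candidateObjectInstances)
    {
        if (TestOneCandidateObjectInstance(pCandidate, pPronounFeatureSet) == E_SUCCESS)
            return E_SUCCESS; // candidate accepted as the antecedent
    }
    // When multiple behavior classes match, nested behavior probabilities are
    // compared to select the best candidate (not shown).
    return E_NOTFOUND;
}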
TestOneCandidateObjectInstance()
TestOneCandidateObjectInstance () branches to an appropriate subroutine depending on the information that is available in the pronoun feature set:
Adjective info: TryToMatchCausalFeatureAgainstSpanningInfoObjectInstance()
Verb info: TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstance()

(The logic of these two functions is not shown here). If the input pronoun feature set contains a verb word, and if pronoun resolution fails after attempting to resolve it with TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstance(), then GenerateAndTestForCausativeSituation() is invoked. This is the embedded sandbox-based generate and test method. Examples of object instance candidates for W.S. schema #1 are "councilmen", "demonstrators", and "permit". The logic that is used when invoking the embedded reasoning routine is sketched below; GenerateAndTestForCausativeSituation() is the driver for the embedded reasoning tasks and is described in the next section.
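(The fragment below is a pseudo-code sketch, not the actual listing; it uses the hypothetical field name causalFeatureVerbWord from the PronounFeatureSet sketch given earlier, and abbreviated parameter lists.)

// Sketch of the invocation logic for the embedded reasoning routine.
if (!pPronounFeatureSet->causalFeatureVerbWord.empty())
{
    int rc = TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstance(
                 pCandidate, pPronounFeatureSet);
    if (rc != E_SUCCESS)
    {
        // Simple instance-model-based matching failed: fall back to the
        // embedded sandbox-based generate and test method.
        rc = GenerateAndTestForCausativeSituation(pCandidate, pPronounFeatureSet);
    }
}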
Embedded Commonsense Reasoning Method
GenerateAndTestForCausativeSituation() is one of the main routines that implements the embedded commonsense reasoning method. This section will describe the embedded reasoning method using the example of WS schema #1/variant 2 ("advocated violence").
GenerateAndTestForCausativeSituation() is one of multiple possible routineseach of which performs inference for a similar purpose. GenerateAndTestForCausativeSituation() handles cases that involve finding a referent for which the antecedent exists in a clause (e.g. a "because" clause) that is an explanation of the cause for a situation. (Other such routines are not described in this document).
The main task of this function is that of testing the input object instance candidate (e.g. "the city councilmen", e.g. "the demonstrators", e.g. "a permit") to determine if it could have participated in a behavior that led to the fact that was stated in the main clause (e.g. "the city councilmen refused the demonstrators a permit"). Because it is not known at this stage whether or not the candidate is the referent (e.g. it was the demonstrators who advocated violence, not the councilmen), there is a need to create a temporary "sandbox" instance model that exists apart from the engine's master internal instance model. (Once the correct candidate has been identified, the master instance model will get updated with the newly-acquired information). The sandbox instance model is a staging area that actually involves two separate instance model contexts: to avoid confusion these will be referred to as "east" side (earlier) and "west" side (later). The task of the routine is to build the two "sides" of the instance model and then determine whether or not they "meet up" using a final matching routine (similar to building an East railroad line and a West line and joining them in the middle). An overview of the process is as follows:
(West: represents earlier points along the time-line, starting with a candidate that might have "advocated violence". The following is done iteratively for each main rule matching "advocated violence".)

- given the object instance candidate, e.g. councilmen
- find a main rule (i.e. a rule for "advocate violence", e.g. TalkerAdvocatesActionWithListenersWhoAnticipateSomething)
- create a sandbox context and insert an earliest structural parent instance into it
- apply the main rule to derive consequential information; save a pointer to a nested rule
- apply the nested rule (e.g. AnticipateHarmfulEvent) to generate new information into the West-side temporary context.
(East: represents later points along the time-line, working backward from the time point where "the city councilmen refused the demonstrators a permit". The following is done iteratively.)

- e.g. given "the city councilmen refused the demonstrators a permit", utilize master instance model information to create the East-side temporary context
- apply the nested rule that is contained within the "refuse ..." rule
GenerateAndTest_ProcessOneForwardRule () has the following inputs:
A pointer to the object instance candidate (e.g. an object instance representing the "councilmen", or an object instance representing the "demonstrators"). The object instance data structure also contains a pointer to the object frame class from which it is derived, so that object frame class information, such as the structural parent class, can be obtained.

A behavior class that has been retrieved by a prior search process that provided one or more object frame classes and a verb-based expression. An example of such a behavior class is "TalkerAdvocatesActionWithListenersWhoAnticipateSomething" - this behavior class was retrieved based on the verb "advocates" along with other criteria.

The pronoun feature set data structure; this includes information about the other syntactic and semantic entities of the clause or phrase wherein the pronoun is contained. E.g. for "because they advocated violence", it includes "violence" as a syntactic direct object and as an object that fills the actee semantic role within that clause.
This routine first creates the temporary working memory sandbox (West) context. The output of this routine is the West context as it has been added to by the insertion of a major structural parent instance, a minor structural parent instance, and object instances within the structural parent instances. The object instances have had their state attributes set with values that will later get matched against attribute values of other object instances from the East sandbox context in order to determine if the candidate is the correct antecedent for the unresolved pronoun.
Note that the example rule shown here contains an object for the "Talker" - this is handled as a single talker even though it needs to be matched against a possible group of talkers (e.g. councilmen or demonstrators), because the singular/plural aspect is not relevant for the inference process (either "councilman" or "councilmen" will work). In contrast, the "listeners" are represented as a collection, since it is necessary to represent the fact that there is a set of possible listeners; there is logic that determines that that set can include the councilmen, for the cases where the councilmen are not the talker.
Return from GenerateAndTest_ProcessOneForwardRule ()
Refer to Hofford (2014 (b)) "The ROSS User's Guide and Reference Manual", 15.6.2. Main Inference Routine: Application of Two Rules for details on the functionality of PerformForwardDirectedInferenceWithNestedBehavior(). The inference involves an application of the main rule combined with a subsequent application of the nested rule in order to derive new information that represents that there is a set of listeners that anticipate/fear a harmful event (i.e. which includes violence).
ProcessLaterTemporalSandboxContextAndPerformMatchingTest () utilizes the later temporal information that was provided by the original clause (e.g. "the city councilmen refused the demonstrators a permit"). (This clause has already been used by the engine: the semantics that it represents exist in the master instance model and the spanning info data structure contains links that point at the relevant object instances).
Using the example, the main tasks are to iteratively: 1) derive a nested rule, if one exists, from the main rule (e.g. main rule = RefusingSomethingDueToFearBehaviorClass), 2) apply the nested rule (e.g. AnticipateHarmfulEventBehaviorClass), and 3) determine if the generated information (from this East side context) matches the earlier generated information from the West side context.

//=================================================================================
// WEST: StructuralParent contains:
// Instance: PersonObjectFrameClass (extra$) -> Attr:AnticipatingHarmfulEventState = "Anticipating"
// Instance: CognitiveRepresentationOfHarmfulEvent -> Attr:PassiveIsAnticipatedState = "Anticipated"
//=================================================================================
// EAST: StructuralParent contains:
// Instance: PersonObjectFrameClass (q$) -> Attr:AnticipatingHarmfulEventState = "Anticipating"
// Instance: CognitiveRepresentationOfHarmfulEvent -> Attr:PassiveIsAnticipatedState = "Anticipated"
//=================================================================================

When MatchNewObjectInstanceStatesLatestPriorAgainstEarliestPost() is called for the "demonstrators" as the object instance candidate, the earlier application of the nested behavior on the West side had generated an object instance that is a set of "listeners" that is exclusive of the demonstrators. MatchNewObjectInstanceStatesLatestPriorAgainstEarliestPost() searches the West context for each object instance in the structural parent instance; it finds a set of "listeners" (PersonObjectFrameClass (extra$), above) that has an attribute with attribute type = "AnticipatingHarmfulEventState" and attribute value = "Anticipating". When the object instance from the East side context's structural parent instance contains a similar such object instance, the test succeeds: several flags are set and E_SUCCESS is returned from the function. The caller functions will utilize the information that the object instance candidate (e.g. the demonstrators) was successfully used by the set of inference processes and the instance-model-based matching routine in order to determine its validity as a candidate referent.
Applications: Winograd Schemas #8, #115, #1
This section describes solutions for Winograd schemas #8, #115, and #1. (The two variants for schema #1 - "feared violence" and "advocated violence" - are described separately). The solutions are described in terms of how they address the schemas as general pronoun resolution use cases.
(The trophy and suitcase schema solution is described in Appendix 1: Solution for "Trophy and Suitcase" Schema Using a Model of the Communicating Agent).
Schema #8: "The man could not lift his son"
This use case involves a main meaning unit that contains declarative text about a past situation, followed by a second meaning unit (clause) that describes something that occurred or was true at an earlier point in time. An earlier event or state may exist within an explanatory ("because") clause. The pronoun in the current meaning unit refers back to an antecedent in the main meaning unit. This use case includes the following sub-use cases:

Dependent clause introduced by "because", "since", etc., where the semantics of the clause are of an explanatory nature.
Dependent adverbial clause introduced by "after".
The pronoun resolution algorithm makes use of the behavior class that was used to generate instance model information; in doing so it needs a specification of whether to examine the object frame classes of the behavior class's PriorStates (rule antecedent) section or of the behavior class's PostStates (rule consequent) section. Therefore the caller function for the pronoun resolution algorithm contains logic that sets an enumerated value for PredicateExpressionTemporalOrderIndicator (to PredicateExpressionTemporalOrderIndicatorPreceding). This is passed to PronounResolutionGeneralMethod() via the pronoun feature set structure.
How to Handle Duplicate Classes and Actor/Actee Identification
The "person lifts person" schema builds on the basic resolution method but it also needs a feature that is shown in the following code from "NotLift_Weak_BehaviorClass": the PassiveParticipant flag.
The ontology and knowledge base features that were used for this schema include:
Object frame classes:
o Person class and several sub-classes, including "man" and "son".

Behavior classes:
o NotLift_Weak_BehaviorClass
o NotLift_Heavy_BehaviorClass

A portion of NotLift_Weak_BehaviorClass is shown here in order to illustrate the use of the functional attribute type that has a value that is used to match the "weak" of "too weak". This also illustrates the use of the passive participant flag.

BehaviorClass "NotLift_Weak_BehaviorClass"

(The appendix contains full listings for the classes that are used in processing this schema.)
Schema #115: "Joe paid the detective"
The "person pays detective" schema builds on the basic resolution method but it also needs a feature that is shown in the following code from the "PayAfterReceivingBehaviorClass" behavior class: the feature is the nested behavior. (note: by way of comparison, in the terminology of Discourse Representation Theory, this is similar to an event within a DRS).
The ontology and knowledge base features that were used for this schema include:
Object frame classes:
o Person class and several sub-classes: detective and deliverable. (The concept of a "deliverable" is used here as a high-level abstraction that includes any of services, products, a report, etc. This is one of several possible ways to model the semantics of the input schema text "received the final report".)

The nested behavior class feature is illustrated in that PayAfterReceivingBehaviorClass contains a nested behavior class reference that refers to a separate behavior class called "ReceiveBehaviorClass" (full details are in the appendix).
Schema #1/Variant #1: "City councilmen refused … feared violence"
The sentence for this schema is: "The city councilmen refused the demonstrators a permit because they feared violence.". Resolution of the difficult pronoun for this variant of this schema uses the general pronoun resolution method; the embedded inference process is not needed since the instance model contains sufficient information to make the determination. The main ROSS rule that is used in the processing of the input sentence is the following. (This shows only key parts of the rule). The nested behavior class is called "AnticipateHarmfulEventBehaviorClass" (not shown here).
A call stack for the processing of this schema is shown below (note: this uses the meaning unit rather than the predicate expression). The call stack shows how the main meaning unit is processed first, by ProcessMeaningUnitIndicative(); this processing generates instance model object instances which constitute the possible referents for the pronoun that is subsequently processed. The spanning information structure points to the newly-created object instances in the instance model.

The processing of the current meaning unit ("because they feared violence") occurs in a subsequent call to ProcessMeaningUnitIndicative(). In this case, the pronoun that needs resolution is in the syntactic subject, thus there are two intervening calls for handling the subject, leading to the call to EntityResolutionRoutine(). This function calls ProcessPronoun(), passing in a fully populated PronounFeatureSet data structure. Some fields of the PronounFeatureSet are used to direct the call: since the PronounFeatureSet designates the current meaning unit as a "because clause", PronounResolutionExplanatoryClauseForCognitiveExplanation() is invoked.
PronounResolutionWorkerRoutine() is the main anaphora resolution routine. It invokes ExploratorySearchUsingSpanningInfoStack(), a high-level driver function for the "exploratory searches" that will take place. This function loops through all spanning infos in the stack. ExploratorySearchUsingOneSpanningInfo() invokes TestOneCandidateObjectInstance() for each of the actor object instances, actee object instances and extra object instances in the spanning information data structure. TestOneCandidateObjectInstance() again looks at the PronounFeatureSet: if an adjective is available, it attempts to match it as the causal feature. In this case a verb is available ("refused"), and thus TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstance() is invoked.

Some details of TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstanceSingleBehaviorClass() are not described here: the mechanism involves a set of preparatory steps leading up to the call to SearchUsingNestedBehaviorForMatchingCausalFeature(). The lowest-level function in this call stack is SearchUsingNestedBehaviorForMatchingCausalFeature(). It does these tasks:
searches the behavior classes that are associated with the object instance candidate's class (e.g. CityCouncilmanObjectFrameClass) for one that matches the verb ("refused"). This retrieves a behavior class pointer.

compares the nested behavior class that was supplied as a parameter to the behavior class pointer that was just retrieved - if they match, the object instance candidate is flagged as the pronoun antecedent, and control returns up to PronounResolutionWorkerRoutine() for a set of follow-up tasks that include adding new information to the instance model.

The present method can also utilize probability values that are associated with nested behaviors in order to select the most appropriate behavior class from among several. Refer to Hofford (2014 (b)) "The ROSS User's Guide and Reference Manual", Appendix: Star Classes for the Solution for Winograd Schema #1, 5.3.1. RefusingSomethingDueToFearOnPartOfRequestorBehaviorClass for full details.
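A sketch of these two tasks (illustrative only; FindBehaviorClassForVerb and the member names are hypothetical) is:

// Pseudo-code sketch of SearchUsingNestedBehaviorForMatchingCausalFeature().
int SearchUsingNestedBehaviorForMatchingCausalFeature(
        ObjectInstance *pCandidate,            // e.g. a CityCouncilmanObjectFrameClass instance
        const string   &verbWord,              // e.g. "refused"
        BehaviorClass  *pNestedBehaviorParam)  // supplied as a parameter
{
    // 1) Search the behavior classes associated with the candidate's class
    //    for one that matches the verb:
    BehaviorClass *pBC = FindBehaviorClassForVerb(pCandidate->pObjectFrameClass, verbWord);
    if (pBC == NULL)
        return E_NOTFOUND;

    // 2) Compare the retrieved class's nested behavior class with the parameter:
    if (pBC->pNestedBehaviorClass == pNestedBehaviorParam)
    {
        pCandidate->isFlaggedAsAntecedent = true;  // control returns to
        return E_SUCCESS;                          // PronounResolutionWorkerRoutine()
    }
    return E_NOTFOUND;
}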
Schema #1/Variant #2: "City councilmen refused … advocated violence"
The sentence for this schema is: "The city councilmen refused the demonstrators a permit because they advocated violence.". Refer to section 8. Embedded Commonsense Reasoning Method for details of this logic.
Conclusion: Test Results for Schemas
The method has been fully implemented in a working system that processes sentences, creates instance models, and then answers relevant questions based on its internal knowledge. The Comprehendor system is also usable via a RESTful API server.
The following were derived from or directly adopted from the Winograd Schema Challenge schemas. (A minor change to one of the original schemas is noted).
(Note: the API call results show the antecedents in parentheses after the pronoun (e.g. "it(trophy)"). The method is capable of determining the noun phrase, not just the noun head word, and this will be addressed in a future version so that the system will generate the phrase, e.g. "it(the trophy)" and "it(the brown suitcase)".)
Trophy and Suitcase Schema
Original Winograd Schema
The trophy doesn't fit into the brown suitcase because it's too [small/large]. What is too [small/large]?
Answers: The suitcase/the trophy.
Test Results
The test results were as follows.
>The trophy doesn't fit in the brown suitcase because it's too big.
>What is too big? >The trophy is too big.
>The trophy doesn't fit in the brown suitcase because it's too small.
>What is too small? >The suitcase is too small.
Appendix 1: Solution for "Trophy and Suitcase" Schema Using a Model of the Communicating Agent
The original anaphora resolution method, as developed by the author and applied to solve the trophy and suitcase schema, involved a paradigm that models the process of communication itself. This process of communication involves the following elements; each of these is represented by object instances in a ROSS instance model. The object instances are based on ontology classes, as follows:
Intelligent/Communicative Agents:
o The communicating agent or agents (also referred to as the talker).
o The listening agent (the NLU system itself, also referred to as the listener).

The information that is communicated.

Cognitive entities that belong to the communicating agent:
o A cognitive image of the situation (see CognitiveImageForSituationObjectFrameClass, below).
o A cognitive explanation of the cause(s) of the situation (see CognitiveExplanationObjectFrameClass, below).
Rationale for Modeling the Communicating Agent
The "communicating agent paradigm" is useful for NLU and anaphora resolution situations that include the following: (NLU) Two-way or multi-way dialog or written communication (not covered here) (anaphora resolution) Sentences that involve ambiguity where the specific beliefs of the communicating agent are not known by the listener. An example would involve the following sentence:
"The bat did not hit the baseball because it moved too fast."
Here the pronoun it refers to either the bat or the baseball. It is possible to imagine two situations where this might be spoken or conveyed:
By a batter who has just swung at and missed a fast pitch: this person explains the cause of his or her not hitting the ball from the perspective of a cognitive rule that describes bats and baseballs, wherein the causative behavioral feature of interest is pitches (balls) that are so fast that they are missed.

By a batting coach who is coaching a rookie batter who tends to swing too fast. The batting coach explains the cause from the perspective of a cognitive rule that involves a causative behavioral feature: missed pitches caused by a batter who swings too quickly.
Despite a possible lack of plausibility of this particular example, it illustrates that some instances of natural language understanding can benefit from a model that takes into consideration the probability that the talker has a particular set of beliefs constituting the causative aspects of external phenomena.
The probability aspects are not addressed here, but the mechanisms involved in modeling a communicating agent, communicated information, and cognitive entities are explained.
Overview
Two distinct aspects are involved for the sentence "The trophy didn't fit in the suitcase because it was too big". The first of these is the process of communication on the part of an intelligent agent, including the abstract cognition (mental) entities that exist in the mind/brain of the intelligent agent. The second of these involves a representation of the actual, or external, situation that the intelligent agent describes - for this schema it is modeled as a physical process of attempting to fit an object (trophy) into another object (suitcase).
The trophy and suitcase example sentences associate a behavior (described by the verb phrase "does not fit") with an attributive state that describes a causal feature ("bigness" or "smallness").
Ontology
An overview is described here: the appendix contains full listings for some of these classes.
Object Frame Classes
These include the following:
higher-level classes:
o a special class for a structural parent object that is used to construct a 4D frame of reference for a situation (EverydayObjectStructuralParentClass)
o a class of objects that are capable of being instantiated in an instance of an EverydayObjectStructuralParentClass (EverydayObjectFrameClass)
o a class of objects that can fit into a container (EnclosableObjectFrameClass)
o a class of container objects (ContainerObjectFrameClass)
o a class of common objects that provides other attribute types such as color (CommonObjectFrameClass)

a trophy class that inherits properties from the EverydayObjectFrameClass, the EnclosableObjectFrameClass, and the CommonObjectFrameClass.

a suitcase class that inherits properties from the EverydayObjectFrameClass, the ContainerObjectFrameClass, and the CommonObjectFrameClass.

human intelligent agent class: it models an agent that performs cognition and communication - i.e. this is the person that communicates (spoken or written) the sentences of the schema (IntelligentAgentObjectFrameClass)

a mental conceptual representation (referred to as an "image", although it is not necessarily pictorial) of a static or process-wise situation, e.g. an image of the process of a person attempting to fit a trophy into a suitcase (CognitiveImageForSituationObjectFrameClass)

a mental conceptual representation of the intelligent agent's cognitive representation of the causal explanation for the situation (CognitiveExplanationObjectFrameClass)

a higher-level, more generic "information" class from which the cognitive causal explanation class gets most of its properties (RepresentationOfCausalExplanationObjectFrameClass)

the information items: the spoken or written forms of the sentence and its constituent parts (CommunicationUnitSentenceObjectFrameClass, CommunicationFragmentMeaningUnitObjectFrameClass, CommunicationFragmentWordObjectFrameClass)
Behavior Classes
Internal knowledge base behavior class definitions are used as specifications of causality. Definitions exist for each of the following (a simplified code sketch follows the list):
the "fitting" process or behavior (FitsBehaviorClass) the "not fitting" behavior (NotFit_Big_BehaviorClass, NotFit_Small_BehaviorClass)
Instance Model
The internal instance model is generated in several stages from the main sentence ("The trophy doesn't fit in the brown suitcase because it's too big."). It consists of a main/overall model that contains an embedded model:
The main/overall instance model represents the communicating agent (the talker) and the process of communicating information. This main instance model contains an embedded instance model that represents the "fitting" action of the actual situation. The main instance model also involves representation of a cognitive process (on the part of the talker) involving reasoning about the causal aspects of the embedded instance model.
The embedded instance model represents the actual situation: it involves the objects (trophy, suitcase, person) and an instance of the behavior "to fit" as a process that occurs along a timeline (it is a 4D representation of the situation). It includes the following (a simplified code sketch follows the list):
o attachment of a structural parent instance that is based on a structural parent object frame class (EverydayObjectStructuralParentClass).
o use of a timeline with time points such as "T01", "T02".
o object instances with attributes: suitcase, trophy and person instances. An example attribute for the trophy is one that represents "not fitted into", at T01.
o behavior instances are implicitly implemented along the timeline. This involves specification of the suitcase and the trophy in an initial state, specification of the next state involving the action of moving the trophy towards/into the suitcase, and a final state where the trophy comes to rest outside of the suitcase.
o Once the pronoun is resolved, an attribute for "too big" or "too small" is added to the embedded instance model as an attribute of either the trophy or suitcase depending on which object has been determined as the pronoun antecedent.
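A minimal C++ sketch of the embedded instance model described in the list above: object instances carry attribute values indexed by time points along a timeline. The structure below is an illustrative assumption; the attribute and value names are taken from the listings in this document.

#include <map>
#include <string>

// An object instance whose attribute values vary along the timeline ("T01", "T02", ...):
struct TimelineObjectInstance {
    std::string className;  // e.g. "TrophyObjectFrameClass"
    // time point -> (attribute type -> attribute value)
    std::map<std::string, std::map<std::string, std::string>> attributesAt;
};

int main() {
    TimelineObjectInstance trophy{"TrophyObjectFrameClass", {}};
    // Initial state at T01: the trophy is not fitted into the suitcase.
    trophy.attributesAt["T01"]["PassiveIsFittedInsideContainerState"] = "NotFittedInsideContainer";
    // After pronoun resolution, the causal feature is attached to the antecedent object:
    trophy.attributesAt["T01"]["FunctionalAttributeType1"] = "TooBig";
    return 0;
}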
Diagram of Overall Situation
The overall situation involves a process wherein an intelligent agent (the talker) communicates two clauses within a single sentence. (The first clause is "The trophy did not fit in the suitcase"; the second clause is "because it was too big"). Figure 2 shows the cognitive and communicative aspects of the situation (note that the listener agent is not shown). The semantic engine generates an overall instance model that corresponds to the diagram of figure 2. The overall instance model contains object instances, as follows. The intelligent agent is the talker, labeled as IntelligentAgent-01. The talker agent has a set of general beliefs (about how things work in the physical world) that is represented in brackets to the left of this agent. An ActualPastSituation exists and occurred earlier, but it is only represented indirectly within the bubble in the upper left: this involves a trophy, a person, a suitcase and a "fitting attempt" action.
(1) CognitiveImageForSituation-01: this represents the intelligent agent's cognitive representation of the actual past situation.
(2) CognitiveExplanation-01: this is what the intelligent agent believes about the cause(s) involved in the specific actual situation.
(3) CommunicationUnitSentence-01: an object instance that represents the sentence communicated by IntelligentAgent-01. CommunicationUnitSentence-01 consists of CommunicationFragmentMeaningUnit-01 (not labeled: "The trophy did not fit in the suitcase") and CommunicationFragmentMeaningUnit-02 (not labeled: "because it was too big").
The timeline has two important time points. The timeline numbers have been selected only for illustration purposes and are intended to represent a typical scenario. (The working prototype uses an enumerated type consisting of timeline values such as "T01", "T02", etc.). At t=3 seconds, IntelligentAgent-01 forms the mental image as shown, and also at t=3 the intelligent agent forms a cognitive explanation of the past situation. At t = 10 the agent communicates by speaking "The trophy did not fit in the suitcase because it was too big".
Semantic Engine Tasks
The entity resolution and pronoun resolution tasks are described here. Underlying the pronoun resolution process is an assumption of a shared set of beliefs -between the talker agent and the listener agent -about "how things work". I.e. the listener agent builds a model of what the talker agent is thinking with respect to the talker's explanation of the cause of the trophy not fitting in the suitcase. The listener uses this model to reason about the possible meanings of the unresolved pronouns that were communicated to it.
Entity Resolution/Class Selection for Common Nouns and Verbs
The first task involves the selection of appropriate classes from the internal knowledge base that correspond to each of the common nouns and to the main verb phrase (representing "to not fit"). Although a ROSS knowledge base may contain classes that map the words "trophy" and "suitcase" to any of a number of classes, for the purpose of simplifying this example the following mappings from common words to knowledge base classes have been used (a simplified lookup sketch follows the list):
"trophy" -TrophyObjectFrameClass (inherits from EverydayObjectFrameClass, CommonObjectFrameClass and EnclosableObjectFrameClass) { an ordinary object with varying size, shape, color, composition, etc. } "suitcase" -SuitCaseObjectFrameClass (inherits from EverydayObjectFrameClass, CommonObjectFrameClass and from ContainerObjectFrameClass) { an ordinary object with varying size, shape, color, composition, etc. } "not fit" -NotFit_Big_BehaviorClass, NotFit_Small_BehaviorClass { behavior classes that are associated with both EnclosableObjectFrameClass and ContainerObjectFrameClass }
Pronoun Resolution
The pronoun resolution task (for "it" within the second clause) draws inferences about the undetermined entities, mental concepts, or words (each is represented by x):
x within the actual past situation, e.g. which physical object was too big?
x within the talker's cognitive image of the past situation
x within the talker's cognitive explanation of the causality of the situation, i.e. within CognitiveExplanation-01, which item is associated with the (causal) feature that has a causative effect on the "not fitting" behavior?
x within the natural language text: i.e. what is the pronoun antecedent within the first clause ("trophy" or "suitcase")?
The output of the resolution process is a determination of the unknown facts; it is contained in the instance model for the overall situation, which includes detail such as the resolution of the pronoun.
The following is a form of pseudo-code using first-order logic to show the specifications of the rules that are used to support the inference. (Since the logic within the antecedents for each of the three rules is similar, for the last two rules only the logic of the consequent is shown). Each of the three rules contains several main groups of logical expressions in the antecedent:
o Expressions that specify the actual/external situation (e.g. the situation involving an instance of a trophy not fitting into a suitcase).
o Expressions that specify the natural language text itself (the main organizing predicate is "CommunicationUnitSentence").
o Expressions that specify shared, or generally-known, commonsense cognitive knowledge, e.g. about containers and things that can fit into containers (these expressions represent two ROSS behavior classes: one for "not fitting due to enclosable being too big", and another for "not fitting due to container being too small").
o Expressions that specify an instance of a CognitiveExplanationObjectFrameClass.
Rule 1: This rule uses an abstraction centering around an instance of a CognitiveExplanationObjectFrameClass. (The CognitiveExplanationObjectFrameClass inherits its properties from the RepresentationOfCausalExplanationObjectFrameClass class.) The rule determines the class of the object (the enclosable object or the container object) that can affect the behavior result, which in this case is the enclosable object. It uses this class to determine which actual object is referred to, based on the inheritance tree for the object frame class of each of the objects. (Note: for purposes of ontology scalability, the use of probability values is viewed as a necessary requirement: given that both a trophy and a suitcase can be a container object (and each can be an enclosable object), a probability value for the higher classes mechanism within the object frame class is envisioned. For instance, this would specify by implication (comparison of the respective probability values) that a suitcase is more likely to be a container than a trophy is; a minimal sketch follows.)
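A minimal sketch of that envisioned probability mechanism, under the assumption that each higher-class reference carries a likelihood value (the structure and the numbers below are invented for illustration):

#include <string>
#include <vector>

// A higher-class reference that carries a probability that the class applies:
struct HigherClassRef {
    std::string className;
    double probability;
};

// Hypothetical values: a suitcase is far more likely to act as a container than a trophy.
static const std::vector<HigherClassRef> kSuitcaseHigherClasses = {
    {"ContainerObjectFrameClass", 0.95},
    {"EnclosableObjectFrameClass", 0.40},
};
static const std::vector<HigherClassRef> kTrophyHigherClasses = {
    {"ContainerObjectFrameClass", 0.05},
    {"EnclosableObjectFrameClass", 0.95},
};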
Rule 2:
This rule associates the unresolved NL text pronoun (e.g. "it") with an entity in the represented, or external, world. Given the unresolved pronoun within the explanatory ("because") clause, it resolves which actual entity in the external situation is referred to (e.g. the trophy object instance or the suitcase object instance). The Comprehendor engine implements this rule in code subsequent to the resolution of Rule 1; it sets the appropriate attribute for the entity of the actual situation (FunctionalSize = "TooBig", or FunctionalSize = "TooSmall"). (Note: "pron" designates the pronoun.) (A simplified sketch of this attribute-setting step appears after Rule 3.)

Rule 3: This rule centers around the natural language text. Given the unresolved pronoun within the explanatory ("because") clause, it resolves which word (anaphor antecedent) in the main clause the pronoun refers to (e.g. "trophy" or "suitcase"). Note that in the actual system Rule 3 is not implemented, since the question answering system of the semantic engine is capable of searching the actual instance model that corresponds to the communicated sentence.
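A minimal, self-contained sketch of the Rule 2 attribute-setting step (the entity stand-in and helper below are simplifications, not the engine's classes):

#include <map>
#include <string>

struct SituationEntity {
    std::string contentString;                      // e.g. "trophy"
    std::map<std::string, std::string> attributes;  // attribute type -> value
};

// Set the causal feature attribute on the resolved entity of the actual situation:
void SetCausalFeatureAttribute(SituationEntity &entity,
                               const std::string &attrType,  // e.g. "FunctionalSize"
                               const std::string &value)     // e.g. "TooBig"
{
    entity.attributes[attrType] = value;
}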
Lower Ontology: Auto-Generated Classes
ObjectFrameClass "TrophyObjectFrameClass" ( <StructureTrait val = "Compound"/> Dictionary ( English ( { "trophy", "trophys" } // (bug in morphology analyzer: should be generated as "trophies") );); HigherClasses ( { "EnclosableObjectObjectFrameClass" } ); );
ObjectFrameClass "SuitcaseObjectFrameClass" ( <StructureTrait val = "Compound"/> Dictionary ( English ( { "suitcase", "suitcases" } );); HigherClasses ( { "ContainerObjectObjectFrameClass", //TODO: add "CommonObjectFrameClass", // (has color attribute type) } ); );
Note that the trophy and suitcase classes that are shown here only inherit properties from the enclosable object and container object classes, respectively. Since ROSS allows for multiple inheritance, other middle ontology classes can be added to the "HigherClasses" lists, e.g. "PropertyObjectFrameClass" (a class of objects that are owned as property).
Since the ontology already contains a person class, the Comprehendor Ontology Builder subsystem only needed to generate lower ontology object frame class attributes and behavior classes. These classes are shown here. This demonstrates the Comprehendor Ontology Builder "partial class definition" feature, which allows for adding attributes or other information to a class that already exists in the ontology. For instance, the ontology contains PersonObjectFrameClass; the first definition below adds several attribute types to this class: "FunctionalAttributeType1", "LiftingState", and "PassiveIsLiftedState". A second definition adds another attribute type called "FunctionalAttributeType2".
Object Frame Classes
Lower Ontology: Auto-Generated Class Information
ObjectFrameClass "PersonObjectFrameClass" ( <StructureTrait val = "Compound"/> AttributeTypes ( AttributeType "FunctionalAttributeType1" ( <SuperType val = "Qualitative"/> <StateAttributeType val = "true" /> <OptionalCausalFeature val = "true" /> "Values" ( { "NotTooWeak", "TooWeak" : Dictionary ( English ( { "weak" } ); ); } ); );
AttributeType "LiftingState" ( <SuperType val = "Qualitative"/> <StateAttributeType val = "true" /> "Values" ( { "NotLifting", "Lifting" } ); );
AttributeType "PassiveIsLiftedState" ( <SuperType val = "Qualitative"/> <StateAttributeType val = "true" /> "Values" ( { "NotLifted", "Lifted" } ); ); ); );
ObjectFrameClass "PersonObjectFrameClass" ( <StructureTrait val = "Compound"/> AttributeTypes ( AttributeType "FunctionalAttributeType2" ( <SuperType val = "Qualitative"/> <StateAttributeType val = "true" /> <OptionalCausalFeature val = "true" /> "Values" ( { "NotTooHeavy", "TooHeavy" : Dictionary ( English ( { "heavy" } ); ); } ); ); ); );
Behavior Classes
Observations
The behavior classes shown here were also generated from the same two sentences (above), copied here for clarity.
Natural language input:
If a person is too weak then he cannot lift another person. If a person is too heavy then another person cannot lift him.
Listing
Behavior Classes
Observations
The first two behavior classes shown here are for the actions "to deliver" and "to receive" (involving persons and deliverables).
Within the "paying" behavior classes, the nested behavior is in bold.
Listing
BehaviorClass "ReceiveBehaviorClass" ( <BridgeObjectFrameClass ref = BehavioralStructuralParentClass /> Dictionary ( English ( { "receive", "received", "received", "receives", "receiving" } ););
Instance Model for Schema: "Person Lifts Person" ("too weak" variant)
The external instance model for the person lifts person schema is as follows ("The man could not lift his son because he was too/so weak.").

<?xml version="1.0" encoding="US-ASCII" standalone="yes"?>
<InstanceModel>
  <TranscriptHeader>
    <TextSource value="DocumentFile"> </TextSource>
    <DocumentFile name="Samples\Sentence-02.txt"> </DocumentFile>
  </TranscriptHeader>
  <ConceptualModel>
    <LocalContext contextId = "1">
      <MoodAndTense> Declarative-PastSimple </MoodAndTense>
      <StructuralParent name="EverydayObjectStructuralParentClass" >
        <Timeline name = "EverydayObjectStructuralParentClass.EverydayObjectDimensionSystem"/>
      </StructuralParent>
Figure 1: High-level Architecture of Comprehendor
class ObjectInstance
{
    ObjectFrameClass *m_pReferenceObjectFrameClass; // (ptr to class from which it was instantiated)
public:
    char szContentString [MAXLEN_CONTENTSTRING_STAR]; // e.g. stores "councilmen"
    char szUniqueIdentifier [MAXLEN_UNIQUEID_STRING]; // unique id
    bool fMultiple; // specifies this as a collection (set) of object instances
    //-------------------------------------------------------
    // Features of the Object Instance:
    //
    // - upon instantiation, each of the following is derived using any
    //   available features from the object frame class.
    // - during subsequent instance model generation, new features may be added.
    //
    RelationshipToParent relationshipToParent;
    // (from ObjectFrameClass::Structure structure)
    InstanceStructure structure;
    // (from ObjectFrameClass::Attributes attributes)
    AttributeBaseExpression *rgpAttributeExpressions [MAX_OBJECTFRAMEINSTANCE_ATTRIBUTES];
    // (from ObjectFrameClass::Relationships relationships)
    RelationshipExpression *rgpRelationshipExpressions [MAX_OBJECTFRAMEINSTANCE_RELATIONSHIPS];
    //-------------------------------------------------------
    // (reference-only: do not deallocate this pointer)
    ObjectInstance *pObjectInstance;
    //-------------------------------------------------------
    // SNF Information:
    //
    EntityArgumentSemanticRole semanticRole; // one of: Actor, Actee, Extra
    ExtraSubRole extraSubRole;
    SyntacticRole syntacticRole; // e.g. subject, direct object, indirect object
    int ordinalOfPredicate;
};
// (do not call delete for these pointers)
Context *pContextMRU; // (most-recently-used Context in the master instance model)
//----------------------------------------------------------------------------
//
// class SpanningInfoStack
//
//----------------------------------------------------------------------------
//----------------------------------------------------------------------------
// Note: a predicate expression may have a hypothetical usage ("because ...")
// and at the same time convey declarative information.
//----------------------------------------------------------------------------
enum PredicateExpressionHypotheticalUsage
{
    PredicateExpressionHypotheticalUsageExplanationOfCause,    // "because"
    PredicateExpressionHypotheticalUsageExplantionOfEffect,    // "causing ..."
    PredicateExpressionHypotheticalUsageExplantionOfObjective, // "in order to ..."
};

//----------------------------------------------------------------------------
// enum PronounGender
//----------------------------------------------------------------------------
// (values not shown)

//----------------------------------------------------------------------------
// enum PronounActiveOrPassive (e.g. "they" (active) versus "them" (passive))
//----------------------------------------------------------------------------
// (values not shown)

//----------------------------------------------------------------------------
// enum SyntacticRole
//----------------------------------------------------------------------------
EngineDriver ( input:  list of communication units (containing embedded PEs using SNF),
               output: disambiguation information (stored in master token list),
               output: instance model and other selected models )

    // other tasks here not shown
        while (loop to get all predicate expressions)
            // (not shown)
        pCommUnitList->Next()
    } // while (loop to get all communication units)
(Note: the functionality of ProcessPredicateUnitInterrogative() and ProcessPredicateUnitImperative() is similar and is not shown here).

ProcessPredicateUnitIndicative (IN: PredicateExpression *pPredicateExpressionMain,
                                OUT: SpanningInfoStack *pSpanningInfoStack,
                                OUT: InstanceModel *pInstanceModel)

    SetTimelineHintInformation () // examine adverbial information that indicates time and temporal order

    PredicateExpressionPointer *pPredicateExpressionPointer = pPredicateExpressionMain->GetFirstPEPointer();
    while (!pPredicateExpressionMain->IsDonePredicateExpressionPointerList())
    {
        If (pPredicateExpressionPointer points to self/this)
        {
            // (not shown: process arguments for predicate type PredicateToBeAttributive; new attribute
            //  information gets added to the appropriate instance model object instance)
            // (not shown: process arguments for each of predicate types: PredicateToBeIsA, PredicateHasAVerb)

            // Process non-nested entity arguments, e.g. "Joe", "miles", that do not contain pronouns:
            //   e.g. "Because it was too big, the trophy did not fit in the suitcase."
            //   process non-nested entity arguments in the PE, populate a temporary spanning info
            //   retry: ProcessPronounEntityArguments() // use temp spanning info

            // Extract predicate adverbials: get adverbs and adverb phrases that modify a verb
            // in the predicate specifier (e.g. "not")
        }
        Else // the following will do indirect recursion to process the PE pointed to by pPredicateExpressionPointer:
        {
            If (nested PE is within a modification specifier)
            {
                ProcessModificationSpecifier (); // e.g. leading adverbial phrase, e.g. final adverbial phrase
            }
            Else if (nested PE is within an entity argument)
            {
                ProcessNestedEntityArgument (); // e.g. gerundive phrase, e.g. "walking to the corner"
                                                // e.g. nested bound relative clause, e.g. "the person who fell ill"
            }
        }
        pPredicateExpressionPointer = pPredicateExpressionMain->GetNextPEPointer();
    } // while (loop to get top-level and nested PEs from pPredicateExpressionMain)
    Return

ProcessNonPronounEntityArguments () iteratively processes the entity arguments that do not contain pronouns:

ProcessNonPronounEntityArguments (IN: entityArgumentSpecifierList,
                                  OUT: pSpanningInfoStack,
                                  OUT: pInstanceModel)
    while (entityArgumentSpecifierList is not empty)
    {
        If (entity argument specifier does not contain a pronoun)
(MainDriverForInstanceModelGeneration() represents a complex sub-system, the functionality of which is outside the scope of this document). The control flow of ProcessEntityArgument () is as follows (showing the lower level functions inline).
The following routine is outlined here. Like ProcessEntityArgument(), this function also generates instance model information, but it is based on main verbs that represent processes (it is only invoked for predicateSpecifierRole == PredicateVerbTakingEntityArgument). (Note: C++ arguments are shown for some calls to lower-level routines).

    Apply/process the behavior class (first behavior class or higher behavior class)
    // (this performs possibly extensive Instance Model Generation (not shown))
If the pronoun is a post-verb object (e.g. him/her/them/it), attempt to resolve it to a pre-verb entity
// (return if successful) // (e.g. "The owners of the house sold it.")

//----------------------------------------------------------------------------
// Main driver for searching the instance model via the spanning information:
//----------------------------------------------------------------------------

// Check results from the exploratory search routine:
//
if (found a referent)
{
    *ppObjectFrameClassReferent = pMatchingObjectInstance->GetReferenceObjectFrameClass();
    // Save for disambiguation:
    pszOriginalObjInstWord = pMatchingObjectInstance->szContentString; // "trophy", "Joe", "detective"
}

// Modify the object instance within the (actual situation) instance model:
SetAttributeWithinActualSituationEntity (pMatchingObjectInstance,
                                         szCausalFeatureAttributeType, // e.g. "FunctionalAttribute1"
                                         szCausalFeatureValue);        // e.g. "TooSmall"
// (see subsequent section for logic that matches attribute state values from the West and East contexts)

//----------------------------------------------------------------------------
// Use pPronounFeatureSet->semanticRole to determine roles within current clause:
// e.g. set pObjFrameClassActorTemp (not shown)
//----------------------------------------------------------------------------

// Loop:
// - search each nested behavior class:
//   (e.g. "TalkerAdvocatesActionWithListenersWhoAnticipateSomething")
//
BehaviorClassListNode *pBehavClassCurrNode = pBehaviorClassListNodeHeadForwardRule;
while (pBehavClassCurrNode != NULL)
{
    if (pBehavClassCurrNode->pBehaviorClass->pBehaviorClassExpression->fCausalRule)
// (reference to a nested rule: this represents that whoever is the listener will fear violence)

//------------------------------------------------------------------------------
// (WEST SIDE)
//------------------------------------------------------------------------------
// Create a new temporary context along with a structural parent instance ("major"),
// - sets context fields, and inserts the structural parent instance into the context.
// - (by default use the first ordinal temporal attribute value of the structural parent class)
//
CreateSandboxContext()

//------------------------------------------------------------------------------
// Create object instances and set values for semantic roles:
// - create clone of the candidate object instance (pObjInstCandidate), e.g. "councilmen"
// - use the pronoun feature set to determine other object instances, e.g. "violence"
//
// The West context has now been populated with all information based on the application of
// the TalkerAdvocatesActionWithListenersWhoAnticipateSomething rule and the nested rule
// that it contains (AnticipateHarmfulEventBehaviorClass).
//------------------------------------------------------------------------------
// (EAST SIDE)
//------------------------------------------------------------------------------
ProcessLaterTemporalSandboxContextAndPerformMatchingTest ()
//------------------------------------------------------------------------------
// Loop: try each behavior class
//------------------------------------------------------------------------------

The details of MatchNewObjectInstanceStatesLatestPriorAgainstEarliestPost() are not shown. This function matches object instances based on criteria that include their respective object frame classes and attribute state values. A set of example values is shown here:
<Attribute ref = RelativeTime var = t1$ />
<Attribute ref = LiftingState val = "NotLifting" />
<Attribute ref = FunctionalAttributeType1 val = "TooWeak" />
val = "true" /> <DimensionSystem ref = RelativePosition /> <Attribute ref = RelativeLocation var = a$ /> <Attribute ref = RelativeTime var = t1$ /> <Attribute ref = PayingState val = "NotPaying" /> <Attribute ref = UniqueIdentityAttributeType var = q$ /> // (identity) ); BehaviorClassReference ( <BehaviorClass ref = ReceiveBehaviorClass /> // -->> DEFINED-BEHAVIOR-CLASS <ParameterActor ref = PersonObjectFrameClass expr = q$ /> // (identity)
//------------------------------------------------------------------------------
// "... he/she/they will not grant a thing that was requested (e.g. a permit request)."
//------------------------------------------------------------------------------
<ObjectFrameClass ref = PersonObjectFrameClass /> // e.g. government official(s)
<Attribute ref = RefusingState val = "NotRefusing" />
<Attribute ref = UniqueIdentityAttributeType var = q$ />
() // resolve "it" PronounResolutionExplanatoryClauseForCognitiveExplanation() // handle a "because" clause PronounResolutionWorkerRoutine() ExploratorySearchUsingSpanningInfoStack() // loop: search each spanning info in the stack ExploratorySearchUsingOneSpanningInfo() // invoke the tests TestOneCandidateObjectInstance() // branch based on available keyword // ("refused") in PronounFeatureSet TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstance() // loop for all // behavior classes TryToMatchVerbCausalFeatureAgainstSpanningInfoObjectInstanceSingleBehaviorClass() SearchUsingNestedBehaviorForMatchingCausalFeature()
o Beliefs and knowledge about facts and about behaviors of objects in the agent's environment
o The processes of cognition: reasoning on the part of the agent
o The process of communicating, i.e. conveying information to a listener
"Figure 2 :
2The trophy did not fit in the suitcase because it was too big." Visualization of overall situation, for main instance model
// Representational relationship of pronoun to actual situation entity:
RepresentationalRelationshipWordToActualSituationEntity (pron, entity)
//------------------------------------------------------------------------------
// EnclosableObjectObjectFrameClass // e.g. a trophy, an apple
//------------------------------------------------------------------------------

//------------------------------------------------------------------------------
// "CommonObjectFrameClass" // (not auto-generated)
//------------------------------------------------------------------------------
"Black"  : Dictionary ( English ( { "black" } ); );  ,
"Silver" : Dictionary ( English ( { "silver" } ); ); ,
"White"  : Dictionary ( English ( { "white" } );
<Attribute ref = LiftingState val = "NotLifting" />
<Attribute ref = FunctionalAttributeType1 val = "TooWeak" />
);
PopulatedObjectClass "AntecedentActee"
(
    <ObjectFrameClass ref = PersonObjectFrameClass />
    <PassiveParticipant val = "true" />
    <DimensionSystem ref = RelativePosition />
    <Attribute ref = RelativeLocation expr = (a$+1) />
    <Attribute ref = RelativeTime expr = t1$
enumeration SyntacticRole
{
Subject,
DirectObject,
IndirectObject,
Other
}
The DiscourseContext enumerated type represents mood+tense.

enumeration DiscourseContext
{
DeclarativePastSimple,
DeclarativePastPerfect,
DeclarativePastProgressive,
DeclarativePastPerfectProgressive,
DeclarativePresentSimple,
DeclarativePresentPerfect,
DeclarativePresentProgressive,
DeclarativePresentPerfectProgressive,
DeclarativeFutureSimple,
DeclarativeFuturePerfect,
DeclarativeFutureProgressive,
DeclarativeFuturePerfectProgressive,
InterrogativePastSimple,
InterrogativePastPerfect,
InterrogativePastProgressive,
InterrogativePastPerfectProgressive,
Imperative,
// (others here not shown)
}
[Figure residue: architecture diagram labels: Parser; NLU Semantic Engine; internal ontology/knowledge base; Star language definitions; External Instance Model; Internal Instance Model; Declarative Sentence(s); Question; Answer. Annotation: "input to Engine is a list of semantic normal form (SNF) data structures".]
Communication unit type: Sentence
Sentence contents: The trophy [doesn't] does not fit in the brown suitcase because [it's] it is too big.
(Note: the test results shown in this document use a version of the Comprehendor semantic engine that directly uses this type of input for each of the schemas).

Syntax tree:
MeaningUnit
SubjectPhrase:
NounPhrase:
Specifier List: The
Head word: trophy
PredicatePhrase:
PreVerbAdverb: not
AuxVerbWord: does
MainVerbWord: fit
Prepositional phrase complement:
PrepositionalPhrase:
Head word: in
NounPhrase:
Specifier List: the
Qualifier List:
AdjectivePhrase:
Head word: brown
Head word: suitcase
Final adverbial phrase list:
AdverbPhrase:
MeaningUnit
Introductory word: because
SubjectPhrase:
NounPhrase:
Head word: it
PredicatePhrase:
AuxVerbWord: is
PostVerbAdverb: too
PostVerbAdjectivePhrase:
AdjectivePhrase:
Head word: big
class CommunicationUnit
{
public:
    enum CommunicationUnitType communicationUnitType;
    Sentence *pSentence;
    // pointers to start and end tokens in the master input token list:
    TokenListNode *pFirstTokenListNode;
    TokenListNode *pLastTokenListNode; // (e.g. for a declarative sentence, points to period token)
};

The CommunicationUnitType enumerated type is as follows (other types may be defined as needed):
enum CommunicationUnitType
{
CommunicationUnitTypeSentence = 0,
CommunicationUnitTypeURL,
CommunicationUnitTypeEmailAddress,
CommunicationUnitTypeSingleWordOnLine,
CommunicationUnitTypeTwoWordPhraseOnLine,
CommunicationUnitTypeAuthorInfo,
CommunicationUnitTypeNONE // max value for this enum
};
class PredicateExpression
{
public:
GrammaticalMood grammaticalMood;
char szIntroductoryWord [MAXLEN_SINGLEWORDSTRING]; // e.g. that
PredicateSpecifierList predicateSpecifierList;
EntityArgumentSpecifierList entityArgumentSpecifierList;
AttributiveArgumentSpecifierList attributiveArgumentSpecifierList;
ModificationSpecifierList modificationSpecifierList;
PredicateExpressionPointerList predicateExpressionPointerList; // list order represents original syntactic order of MUs
TokenListNode *pFirstTokenListNode; // start token only; end token is marked and does not need to be stored
}
The GrammaticalMood enumerated type is defined as follows.

enum GrammaticalMood
{
GrammaticalMoodIndicative = 0,
GrammaticalMoodInterrogative,
GrammaticalMoodImperative,
GrammaticalMoodNONE
};
The predicate specifier list is not shown; the predicate specifier is defined as follows:

class PredicateSpecifier
{
int ordinal;
char szMainVerbWord [MAXLEN_SINGLEWORDSTRING]; // e.g. walked, walking
// char szParticleWord [MAXLEN_SINGLEWORDSTRING]; // e.g. up, out, over, in
PredicateSpecifierRole semanticRole;
DiscourseContext discourseContextActual;
char szTrailingConnectiveWord[MAXLEN_SINGLEWORDSTRING]; // e.g. "and"
};
// EntityArgumentSemanticRole semanticRole; // one of: Actor, Actee, Extra
// ExtraSubRole extraSubRole;
// SyntacticRole syntacticRole; // e.g. subject, direct object, indirect object
// int ordinalPredicate; // refers to a predicate specifier

MainDriverForInstanceModelGeneration () // Instance Model Generation (not shown)

ProcessEntityArgument ( IN: EntityArgumentSpecifier *pEntityArgumentSpecifier,
                        OUT: pSpanningInfoStack,
                        OUT: pInstanceModel)

Loop: // process each item in entityDesignatorList:
// EntityArgumentSpecifier:
// EntityDesignatorList entityDesignatorList;
// EntityDesignator
// NounPhrase *pNounPhrase;
// PrepositionalPhrase *pPrepositionalPhrase;
// char szTrailingConnectiveWord; // e.g. "and"
// switch (type) // noun phrase or prepositional phrase
{
case NounPhrase:
ProcessNounPhrase ()
{
EntityResolutionRoutine (pEntityDesignator->pNounPhrase,
pSpanningInfoStack)
}
break;
case PrepositionalPhrase:
ProcessPrepositionalPhrase ()
{
// (note: if role is Extra, SubRole is available here)
// Extract noun phrase (not shown)
EntityResolutionRoutine (pEntityDesignator->pPrepositionalPhrase->pNounPhrase,
pSpanningInfoStack)
}
break;
} // switch
End Loop
Return
EntityResolutionRoutine (IN: EntityArgumentSpecifier *pEntityArgumentSpecifier,
                         IN/OUT: ActivePointers *pActivePointers,
                         OUT: SpanningInfoStack *pSpanningInfoStack)
{
    // Loop: iterate through the list of noun phrase head words: e.g. "Fred and Mary walked their dog."
    NounHeadWordListNode *pNounHeadWordCurrNode = pNounPhrase->pMainWordHeadNode;
while (pNounHeadWordCurrNode != NULL)
{
switch (pNounHeadWordCurrNode->nounWordPhraseType)
{
case NounWordPhraseTypePronoun:
ProcessPronoun (pSpanningInfoStack, pActivePointers, pPronounFeatureSet);
break;
case NounWordPhraseTypeExistentialThere:
ProcessExistentialThere();
break;
case NounWordPhraseTypeCommonNoun:
ProcessCommonNounPhrase ();
break;
case NounWordPhraseTypeProperNoun:
ProcessProperNounPhrase ();
}; // switch()
pNounHeadWordCurrNode = pNounHeadWordCurrNode->next;
} // while ()
// Determine a structural parent class that can be used for the entity or entities:
iResult = GetBaseStructuralParentClass ();
Return from EntityResolutionRoutine()
int SetPronounResolutionInformationInMasterTokenList (char *pszOriginalObjInstWord,
PronounFeatureSet *pPronounFeatureSet,
TokenListNode *pFirstTokenNode)
{
if (pszOriginalObjInstWord == NULL)
{
return E_NOTFOUND_REQUIREDITEM;
}
if (pPronounFeatureSet->szPronounWord[0] != '\0')
{
TokenListNode *pTokenNodeTEMP = pFirstTokenNode;
// Search the token sub-list for the pronoun:
while (pTokenNodeTEMP != NULL &&
pTokenNodeTEMP->pMarkers->commUnitMarker != CommUnitMarkerEnd &&
0 != strcmp (pTokenNodeTEMP->tokenvalue, pPronounFeatureSet->szPronounWord))
{
pTokenNodeTEMP = pTokenNodeTEMP->next;
}
if (pTokenNodeTEMP == NULL)
{
return E_NOTFOUND_REQUIREDITEM;
}
else
{
strcpy (pTokenNodeTEMP->tokenResolvedWord, pszOriginalObjInstWord);
}
}
return E_SUCCESS;
}
7.4. ExploratorySearchUsingSpanningInfoStack()

ExploratorySearchUsingSpanningInfoStack() is a driver function that iterates to perform the pronoun resolution search logic against each spanning info in the spanning info stack.
int ExploratorySearchUsingSpanningInfoStack (
// IN:
SpanningInfoStack *pSpanningInfoStack,
PronounFeatureSet *pPronounFeatureSet,
// OUT:
char *pszCausalFeatureAttributeType,
char *pszCausalFeatureValue,
bool *pfFoundPopObj,
bool *pfFoundMatchingNestedBehavior,
ObjectInstance **ppMatchingObjectInstance,
BehaviorClass **ppBehaviorClassNested)
{
int iResult = E_NOTFOUND;
// Main Loop: until reach bottom of StackOfSpanningInfos
SpanningInformation *pSpanningInformation = NULL;
pSpanningInfoStack->ResetCurrentToTop();
while (pSpanningInfoStack->Current(&pSpanningInformation))
{
iResult = ExploratorySearchUsingOneSpanningInfo (
pSpanningInformation,
pPronounFeatureSet,
pszCausalFeatureAttributeType,
pszCausalFeatureValue,
pfFoundPopObj,
pfFoundMatchingNestedBehavior,
ppMatchingObjectInstance,
ppBehaviorClassNested);
if (iResult == E_SUCCESS)
{
break; // Found
}
} // End Main Loop
pSpanningInfoStack->ResetCurrentToTop();
return iResult;
}
ExploratorySearchUsingOneSpanningInfo (
// IN:
SpanningInformation *pSpanningInfo,
PronounFeatureSet *pPronounFeatureSet,
// OUT:
char *pszCausalFeatureAttributeType,
char *pszCausalFeatureValue,
bool *pfFoundPopObj,
bool *pfFoundMatchingNestedBehavior,
ObjectInstance **ppMatchingObjectInstance,
BehaviorClass **ppBehaviorClassNested)
while (candidates exist)
{
TestOneCandidateObjectInstance (candidate)
    }
    // Compare Probabilities, allowing for the following to be set: (compare logic not shown)
    *ppMatchingObjectInstance = pSpanningInformation->GetObjectInstance(idx);
    *ppBehaviorClassNested = pBehaviorClassNested[idx];
}
PriorStates
(
    PopulatedObjectClass "AntecedentActor"
    (
        <ObjectFrameClass ref = PersonObjectFrameClass />
        <BinderSourceFlag val = "true" />
        <DimensionSystem ref = RelativePosition />
        <Attribute ref = RelativeLocation var = a$ />
        <Attribute ref = RelativeTime var = t1$ />
        <Attribute ref = ReceivingState val = "NotReceiving" />
    );

End listing.

<TimelineTimePoint value = "T02">
<InstanceStructure>
<Component>
TrophyObjectFrameClass.TrophyObjectFrameClass-1 (trophy)
<Attributes>
<Attribute>
EnclosableObjectFrameClass.PassiveIsFittedInsideContainerState = NotFittedInsideContainer
</Attribute>
</Attributes>
</Component>
<Component>
SuitcaseObjectFrameClass.SuitcaseObjectFrameClass-1 (suitcase)
<Attributes>
<Attribute>
ContainerObjectFrameClass.PassiveIsFittedIntoState = NotIsFittedInto
</Attribute>
</Attributes>
</Component>
</InstanceStructure>
</TimelineTimePoint>
</LocalContext>
</ConceptualModel>
</InstanceModel>
referent is used here to denote the external thing; antecedent denotes the syntactic item (usually a word or phrase).
The pronoun "it", within "it was too hot" may be viewed either as an exophoric reference or as a pleonastic pronoun.
See section 4, Semantic Normal Form, for a description of semantic normal form.

Predicate expressions (PEs) and meaning units (MUs) are used somewhat interchangeably throughout this document. The actual method as it has been implemented involves a semantic engine that uses MUs as input; however, the description of the method will usually utilize the PE as the basic input building block.
Although the term "semantic normal form" may have other prior use(s), the author is unaware of any restrictions regarding its use; any overlap with other concepts represented by the term is unintentional.

The term "predicate expression" connotes that it contains representations that correspond to syntactic expressions; neither the predicate expression itself nor the immediate constituent fields of a predicate expression are themselves true expressions. Note that "predicate unit" is also used as a synonym for "predicate expression".
Main Meaning Unit - "The man could not lift his son"
    Actor := man, derived from the subject phrase
    Actee := son, derived from the direct object noun phrase
    Extra := (none)
Subsequent Meaning Unit - "because he was so weak."
    Actor := he, derived from the subject phrase
    Actee := (none)
    Extra := (none)
Information about a demo for the working system is available on the author's web site (Hofford, 2014 (c)).
For purposes of analysis, the sentence that is used in the remainder of this section refers to the trophy and suitcase in past tense ("didn't fit", versus "doesn't fit").
<ParameterActee ref = DeliverableObjectObjectFrameClass />
<ParameterExtra ref = PersonObjectFrameClass />
Appendix 2: Ontology/Knowledge Base

The main Star language object frame classes and behavior classes that are used by the Comprehendor NLU system to process the schemas are shown in this appendix, with the exception of those for schema #1, which is contained in Hofford (2014 (b)), "The ROSS User's Guide and Reference Manual". The Star language definitions exist within several different Infopedia include files; these include files such as BasicDefinitions.h and PersonRelatedClass.h.

Ontology and KB for Schema: "Trophy and Suitcase"

Most of the middle and lower ontology classes that are needed for this schema were auto-generated from natural language input, using the Comprehendor Ontology Builder sub-system. The actual sentences are shown here:

Natural language input:

A container object is an everyday object. An enclosable object is an everyday object that fits in a container object. If an enclosable object is too big then it does not fit in the container object. If a container object is too small then an enclosable object does not fit in it. A trophy is an enclosable object. A suitcase is a container object.

Supporting Definitions

Supporting definitions are described in the "ROSS User's Guide and Reference Manual" (Hofford 2014 (b)).

Object Frame Classes

Upper Ontology Classes

The following upper ontology classes/definitions are described in the "ROSS User's Guide and Reference Manual" (Hofford 2014 (b)).

Structural parent class: EverydayObjectStructuralParentClass
Higher-level class: EverydayObjectFrameClass
Structural parent class: BehavioralStructuralParentClass

Middle Ontology Classes: Auto-Generated Classes

(Note: comments were added by hand after completion of the auto-generation process).

Natural language input:

A deliverable object is a common object. A person can receive something. A person can deliver something. A person can pay a person. A detective is a person.

The concept of a "deliverable" is used here as a high-level abstraction that includes any of: services, products, a report, etc. This is one of several possible ways to model the semantics of the input schema text "received the final report". (Although the behavior classes shown in the following section were hand-coded, not auto-generated, the following NL input could be used once this auto-generation feature is implemented in the Comprehendor NLU system).

If a person receives a deliverable then he/she does pay another person. If a person delivers a deliverable then he/she is paid by another person.

Object Frame Classes

Lower Ontology: Auto-Generated Classes

Ontology and KB for Schema: "Councilmen and Demonstrators"

The ontology and KB for Winograd schema #1 are contained in Hofford (2014 (b)), "The ROSS User's Guide and Reference Manual".

Appendix 3: ROSS Instance Models

These listings are of external instance models for selected schema sentences. The generated instance models contain object instances, not classes: the object instances refer back to the ontology classes from which they have been instantiated.

Instance Model for Schema: "Trophy and Suitcase" ("too big" variant)

The external instance model shown here is for the sentence: "The trophy did not fit in the suitcase because it was too big."
<?xml version="1.0" encoding="US-ASCII" standalone="yes"?>
<InstanceModel>
  <TranscriptHeader>
    <TextSource value="SubmittedFromWebClient"> </TextSource>
  </TranscriptHeader>
  <ConceptualModel>
    <LocalContext contextId = "1">
      <MoodAndTense> Declarative-PastSimple </MoodAndTense>
      <StructuralParent name="EverydayObjectStructuralParentClass" >
        <Timeline name = "EverydayObjectStructuralParentClass.EverydayObjectDimensionSystem"/>
      </StructuralParent>
      <TimelineTimePoint value = "T01">
        <InstanceStructure>
          <Component>
            TrophyObjectFrameClass.TrophyObjectFrameClass-1 (trophy)
            <Attributes>
              <Attribute>
                EnclosableObjectFrameClass.FittingIntoState = FittingInto
              </Attribute>
              <Attribute>
                EnclosableObjectFrameClass.FunctionalAttributeType1 = TooBig
              </Attribute>
            </Attributes>
          </Component>
          <Component>
            SuitcaseObjectFrameClass.SuitcaseObjectFrameClass-1 (suitcase)
            <Attributes>
              <Attribute>
                ContainerObjectFrameClass.PassiveIsFittedIntoState = NotIsFittedInto
              </Attribute>
            </Attributes>
          </Component>
        </InstanceStructure>
      </TimelineTimePoint>
This rule uses an abstraction centering around an instance of RepresentationOfCausalExplanationObjectFrameClass, referred to using the variable name "unknown-entity". This rule describes the logic that is used to figure out what the intelligent agent was thinking when he/she said "because it was too big". (Note that this rule example uses "ContainerClass" as an equivalent for the Infopedia class called "ContainerObjectFrameClass"). (Note: logic for the "too small" case not shown).

Rule 1: for "x was too big"

∀ unknown-entity: // (Rule Antecedent)
// Variables that represent specific attribute values:
Ǝattval1: CausalFeatureAttributeValue(attval1) Ʌ StringValue(attval1, "TooBig")
Ǝattval2: CausalFeatureAttributeValue(attval2) Ʌ StringValue(attval2, "TooSmall")
Ǝentity1: IsRepresentedByClassName(entity1, entity-class-name1) Ʌ PartOf(entity1,sit) Ʌ // e.g. TrophyClass
Ǝentity2: IsRepresentedByClassName(entity2, entity-class-name2) Ʌ PartOf(entity2,sit) Ʌ // e.g. SuitcaseClass
Ǝaction1: AttemptToFitEnclosableIntoContainer(action1) Ʌ PartOf(action1,sit) Ʌ
NotFittedInsideContainer(entity1) Ʌ // Att-type = PassiveIsFittedInsideContainerState
NotIsFittedInto(entity2) Ʌ         // Att-type = PassiveIsFittedIntoState
( CausalFeature(entity1,atttype-name, attval1) V // disjunction
  CausalFeature(entity2,atttype-name, attval1) )
// The input natural language text (the sentence and its constituent parts)
Ǝs: CommunicationUnitSentence(s) Ʌ // the input sentence // (@ t = n)
Ǝm1,m2: // the two clauses of the sentence
MeaningUnit(m1) Ʌ PartOf(m1,s) Ʌ // clause that is a description of a past situation
Ǝsubj: ReferentPhraseSubject(subj,entity1) Ʌ PartOf(subj,m1) Ʌ // e.g. enclosable object
Ǝadv: VerbModifierWord(adv) Ʌ PartOf(adv,m1) Ʌ // e.g. negation
Ǝverb: WordBehavior(verb) Ʌ PartOf(verb,m1) Ʌ // e.g. "fitting" behavior
Ǝdobj: ReferentPhraseDirObject(dobj,entity2) Ʌ PartOf(dobj,m1) Ʌ // e.g. container object
MeaningUnit(m2) Ʌ PartOf(m2,s) Ʌ // clause that is a causal explanation for the situation
Ǝadv2: CauseExplanationIntroducerWord(adv2) Ʌ PartOf(adv2,m2) Ʌ // e.g. "because"
Ǝpron: PronounWord(pron) Ʌ PartOf(pron,m2) Ʌ // the unresolved "variable"
Ǝverbtobe: AuxiliaryVerbToBeWord(verbtobe) Ʌ PartOf(verbtobe,m2) Ʌ // e.g. "was"
Ǝcaufeat: ReferentPhraseCausalFeatureAttValue(caufeat, attval1) Ʌ PartOf(caufeat,m2) Ʌ // "too big"
// Higher-level Entity classes:
Ǝenclosable: EnclosableObjectFrameClass(enclosable) Ʌ
Ǝcontainer: ContainerObjectFrameClass(container) Ʌ
// Inherited Entity classes:
Ǝtrophy: InheritsPropertiesFrom(enclosable) Ʌ
Ǝsuitcase: InheritsPropertiesFrom(container) Ʌ
// (1) Behavior class for "an enclosable does not fit in a container if the enclosable is too big"
//
Ǝb1: CognitiveRepresentationOfBehaviorClass(b1) Ʌ // (@ t = any)
Ǝb1a: CogReprAntecedent(b1a) Ʌ PartOf(b1a, b1) Ʌ
Ǝreprentity1: Represents(repentity1, enclosable) Ʌ PartOf(repentity1,b1a) Ʌ
    CausalFeature(reprentity1,atttype-name, attval1)
Ǝreprentity2: Represents(repentity2, container) Ʌ PartOf(repentity2,b1a) Ʌ
Ǝrepaction: CogReprAction(repaction, action) Ʌ PartOf(repaction,b1) Ʌ
Ǝb1c: CogReprConsequent Ʌ PartOf(b1c, b1) Ʌ
// (the following is shorthand for the Consequent result states)
NotFittedInsideContainer(repentity1) Ʌ // Att-type = PassiveIsFittedInsideContainerState
NotIsFittedInto(repentity2) Ʌ         // Att-type = PassiveIsFittedIntoState
Ʌ
// (2) Behavior class for "an enclosable does not fit in a container if the container is too small"
//
Ǝb2: CognitiveRepresentationOfBehaviorClass(b2) Ʌ // (@ t = any)
Ǝb2a: CogReprAntecedent(b2a) Ʌ PartOf(b2a,b2) Ʌ
Ǝreprentity1: Represents(repentity1, enclosable) Ʌ PartOf(repentity1,b2a) Ʌ
Ǝreprentity2: Represents(repentity2, container) Ʌ PartOf(repentity2,b2a) Ʌ
    CausalFeature(reprentity2,atttype-name, attval2)
Ǝrepaction: CogReprAction(repaction, action) Ʌ PartOf(repaction,b2) Ʌ
Ǝb2c: CogReprConsequent Ʌ PartOf(b2c, b2) Ʌ
// (the following is shorthand for the Consequent result states)
NotFittedInsideContainer(repentity1) Ʌ // Att-type = PassiveIsFittedInsideContainerState
NotIsFittedInto(repentity2) Ʌ         // Att-type = PassiveIsFittedIntoState
Ʌ
// (Meta) The agent's cognitive explanation of the causal aspects of this situation:
Ǝce: RepresentationOfCausalExplanationObjectFrameClass(ce) Ʌ // (@ t = n-1)
Ǝce-cause: Repr-CauseEntity(cexplcause) Ʌ PartOf(ce-cause,ce) Ʌ
Ǝunknown-entity: Repr-AntecedentCausalAgent Ʌ PartOf(unknown-entity, ce-cause) Ʌ
( RepresentedClassName(unknown-entity, entity-class-name1) Ʌ // e.g. "TrophyClass"
  V // disjunction
  RepresentedClassName(unknown-entity, entity-class-name2) ) Ʌ // e.g. "SuitcaseClass"
CausalFeatureAttributeTypeName(unknown-entity,atttype-name) Ʌ // e.g. "FunctionalSize"
CausalFeatureAttributeValue(unknown-entity, attval1) Ʌ // e.g. "TooBig"
RepresentationalRelationship(unknown-entity, entity-class-name1) // (Rule Consequent)
The rule consequent expresses the fact that the unknown entity (within the mind of the cognitive agent) has a representational relationship with entity-class-name-1, which is an instance of the trophy class. What is not shown here (due to the complexities of specifying it with FOL) is the way in which the substitution takes place: the pronoun resolution algorithm uses the behavior class of the embedded situation (the "not fitting due to too big" behavior) in order to derive the higher class of the object that is associated with the causal feature.

1.3. Auto-Generated Behavior Classes

The behavior classes shown here were also generated from the same sentences (above), copied here for clarity:

Natural language input:

An enclosable object is an everyday object that fits in a container object. // (positive case)
If an enclosable object is too big then it does not fit in the container object.
If a container object is too small then an enclosable object does not fit in it.
Note that the first behavior class below is a positive "fits" behavior class that is shown for comparison purposes (it is not used by the trophy and suitcase schema).

1.3.2. Listing

BehaviorClass "FitsBehaviorClass" // (positive case)
(
  <BridgeObjectFrameClass ref = BehavioralStructuralParentClass />
  Dictionary ( English ( {
    "fit",     // (infinitive/base)
    "fitted",  // (simple past)
    "fitted",  // (past participle)
    "fits",    // (simple present, 3rd p.s.)
    "fitting"  // (present participle)
  } ););
  PriorStates (
    PopulatedObjectClass "AntecedentActor" (
      <ObjectFrameClass ref = EnclosableObjectObjectFrameClass />
      <BinderSourceFlag val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation var = a$ />
      <Attribute ref = RelativeTime var = t1$ />
      <Attribute ref = FittingState val = "NotFitting" />
    );
    PopulatedObjectClass "AntecedentActee" (
      <ObjectFrameClass ref = ContainerObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = t1$ />
      <Attribute ref = PassiveIsFittedState val = "NotFitted" />
    );
  );
  PostStates (
    PopulatedObjectClass "ConsequentActor" (
      <ObjectFrameClass ref = EnclosableObjectObjectFrameClass />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = FittingState val = "Fitting" />
    );
    PopulatedObjectClass "ConsequentActee" (
      <ObjectFrameClass ref = ContainerObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = PassiveIsFittedState val = "Fitted" />
    );
  );
// FitsBehaviorClass

BehaviorClass "NotFit_Big_BehaviorClass" (
  <CausalRule val = "true" />
  <BridgeObjectFrameClass ref = BehavioralStructuralParentClass />
  <Negation val = "true" />
  Dictionary ( English ( { "fit", "fit", "fitted", "fits", "fitting" } ););
  PriorStates (
    PopulatedObjectClass "AntecedentActor" (
      <ObjectFrameClass ref = EnclosableObjectObjectFrameClass />
      <BinderSourceFlag val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation var = a$ />
      <Attribute ref = RelativeTime var = t1$ />
      <Attribute ref = FittingState val = "NotFitting" />
      <Attribute ref = FunctionalAttributeType1 val = "TooBig" />  // (optional causal feature)
    );
    PopulatedObjectClass "AntecedentActee" (
      <ObjectFrameClass ref = ContainerObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = t1$ />
      <Attribute ref = PassiveIsFittedState val = "NotFitted" />
    );
  );
  PostStates (
    PopulatedObjectClass "ConsequentActor" (
      <ObjectFrameClass ref = EnclosableObjectObjectFrameClass />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = FittingState val = "Fitting" />
    );
    PopulatedObjectClass "ConsequentActee" (
      <ObjectFrameClass ref = ContainerObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = PassiveIsFittedState val = "Fitted" />
    );
  );
BehaviorClass "NotFit_Small_BehaviorClass" (
  <CausalRule val = "true" />
  <BridgeObjectFrameClass ref = BehavioralStructuralParentClass />
  <Negation val = "true" />
  Dictionary ( English ( { "fit", "fit", "fitted", "fits", "fitting" } ););
  PriorStates (
    PopulatedObjectClass "AntecedentActor" (
      <ObjectFrameClass ref = EnclosableObjectObjectFrameClass />
      <BinderSourceFlag val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation var = a$ />
      <Attribute ref = RelativeTime var = t1$ />
      <Attribute ref = FittingState val = "NotFitting" />
    );
    PopulatedObjectClass "AntecedentActee" (
      <ObjectFrameClass ref = ContainerObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = t1$ />
      <Attribute ref = PassiveIsFittedState val = "NotFitted" />
      <Attribute ref = FunctionalAttributeType2 val = "TooSmall" />  // (optional causal feature)
    );
  );
  PostStates (
    PopulatedObjectClass "ConsequentActor" (
      <ObjectFrameClass ref = EnclosableObjectObjectFrameClass />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = FittingState val = "Fitting" />
    );
    PopulatedObjectClass "ConsequentActee" (
      <ObjectFrameClass ref = ContainerObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = PassiveIsFittedState val = "Fitted" />
    );
  );
Ontology and KB for Schema: "Person Lifts Person"

The following classes were auto-generated using the Comprehendor NLU system's Ontology Builder sub-system. The natural language input that was used is as follows. (Note: although the examples use "he" and "him", he/she and him/her can be used here interchangeably.)

Natural language input:
If a person is too weak then he cannot lift another person.
If a person is too heavy then another person cannot lift him.

      <Attribute ref = PassiveIsLiftedState val = "NotLifted" />
    );
  );
  PostStates (
    PopulatedObjectClass "ConsequentActor" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = LiftingState val = "Lifting" />
    );
    PopulatedObjectClass "ConsequentActee" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = PassiveIsLiftedState val = "Lifted" />
    );
  );
BehaviorClass "NotLift_Heavy_BehaviorClass" (
  <CausalRule val = "true" />
  <BridgeObjectFrameClass ref = BehavioralStructuralParentClass />
  <Negation val = "true" />
  Dictionary ( English ( { "lift", "lifted", "lifted", "lifts", "lifting" } ););

  PriorStates (
    PopulatedObjectClass "AntecedentActor" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <BinderSourceFlag val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation var = a$ />
      <Attribute ref = RelativeTime var = t1$ />
      <Attribute ref = LiftingState val = "NotLifting" />
    );
    PopulatedObjectClass "AntecedentActee" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
Ontology and KB for Schema: "Receiving/Delivering and Paying"

The behavior classes that are needed for this schema involve nested behaviors. Comprehendor's OntologyBuilder does not yet generate nested behaviors in behavior classes. However, the following object frame class information items were auto-generated.
    PopulatedObjectClass "AntecedentActee" (
      <ObjectFrameClass ref = DeliverableObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = t1$ />
      <Attribute ref = PassiveIsReceivedState val = "NotReceived" />
    );
  );
  PostStates (
    PopulatedObjectClass "ConsequentActor" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = ReceivingState val = "Receiving" />
    );
    PopulatedObjectClass "ConsequentActee" (
      <ObjectFrameClass ref = DeliverableObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = PassiveIsReceivedState val = "Received" />
    );
  );
);
BehaviorClass "DeliverBehaviorClass" ( <BridgeObjectFrameClass ref = BehavioralStructuralParentClass /> Dictionary. English ( { "deliver", "delivered", "delivered", "delivers", "delivering" } ););BehaviorClass "DeliverBehaviorClass" ( <BridgeObjectFrameClass ref = BehavioralStructuralParentClass /> Dictionary ( English ( { "deliver", "delivered", "delivered", "delivers", "delivering" } ););
  PriorStates (
    PopulatedObjectClass "AntecedentActor" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <BinderSourceFlag val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation var = a$ />
      <Attribute ref = RelativeTime var = t1$ />
      <Attribute ref = DeliveringState val = "NotDelivering" />
    );
    PopulatedObjectClass "AntecedentActee" (
      <ObjectFrameClass ref = DeliverableObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = t1$ />
      <Attribute ref = PassiveIsDeliveredState val = "NotDelivered" />
    );
  );
  PostStates (
    PopulatedObjectClass "ConsequentActor" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = DeliveringState val = "Delivering" />
    );
    PopulatedObjectClass "ConsequentActee" (
      <ObjectFrameClass ref = DeliverableObjectObjectFrameClass />
      <PassiveParticipant val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation expr = (a$+1) />
      <Attribute ref = RelativeTime expr = (t1$+1) />
      <Attribute ref = PassiveIsDeliveredState val = "Delivered" />
    );
  );
. " Behaviorclass, Payafterreceivingbehaviorclass, <CausalRule val = "true" /> <BridgeObjectFrameClass ref = BehavioralStructuralParentClass /> Dictionary ( English ( { "pay", "payed", "paid", "pays", "paying" } ););BehaviorClass "PayAfterReceivingBehaviorClass" ( <CausalRule val = "true" /> <BridgeObjectFrameClass ref = BehavioralStructuralParentClass /> Dictionary ( English ( { "pay", "payed", "paid", "pays", "paying" } ););
  PriorStates (
    PopulatedObjectClass "AntecedentActor" (
      <ObjectFrameClass ref = PersonObjectFrameClass />
      <BinderSourceFlag val = "true" />
      <DimensionSystem ref = RelativePosition />
      <Attribute ref = RelativeLocation var = a$ />
      <Attribute ref = RelativeTime var = t1$ />
      <Attribute ref = PayingState val = "NotPaying" />
      <Attribute ref = UniqueIdentityAttributeType var = q$ />  // (identity)
    );
    BehaviorClassReference (
      <BehaviorClass ref = ReceiveBehaviorClass />  // -->> DEFINED-BEHAVIOR-CLASS
      <ParameterActor ref = PersonObjectFrameClass expr = q$ />  // (identity)
References

Ernest Davis. 2011. (b) Qualitative Spatial Reasoning in Interpreting Text and Narrative. Retrieved from http://www.cs.nyu.edu/davise/papers/cosit.pdf. Last accessed June 2014.

Glenn Hofford. 2014. (a) Introduction to ROSS: A New Representational Scheme. Retrieved from https://www.academia.edu/7145283/Introduction_to_ROSS_A_New_Representational_Scheme. Last accessed July 2014. (Also available from http://www.softwareengineeringconcepts.com.)

Glenn Hofford. 2014. (b) ROSS User's Guide and Reference Manual. Retrieved from https://www.academia.edu/9190207/ROSS_Users_Guide_and_Reference_Manual_Version_1.0_. Last accessed November 2014.

Glenn Hofford. 2014. (c) Online resource at http://www.softwareengineeringconcepts.com.

Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2012. "The Winograd Schema Challenge." KR-2012. Retrieved from http://www.cs.nyu.edu/davise/papers/WSKR2012.pdf.

Terry Winograd. 1971. "Procedures as a Representation for Data in a Computer Program for Understanding Natural Language." Dissertation, Department of Mathematics, Massachusetts Institute of Technology.

Terry Winograd. 1972. Understanding Natural Language. Academic Press, Orlando.
| [] |
[
"Temporal Pattern Attention for Multivariate Time Series Forecasting",
"Temporal Pattern Attention for Multivariate Time Series Forecasting"
] | [
"Shun-Yao Shih shunyaoshih@gmail.com \nNational Taiwan University\nNational Taiwan University\nNational Taiwan University\n\n",
"Fan-Keng Sun \nNational Taiwan University\nNational Taiwan University\nNational Taiwan University\n\n",
"Hung-Yi Lee hungyilee@ntu.edu.tw \nNational Taiwan University\nNational Taiwan University\nNational Taiwan University\n\n",
"Yao Shih \nNational Taiwan University\nNational Taiwan University\nNational Taiwan University\n\n",
"Fan-Keng Sun \nNational Taiwan University\nNational Taiwan University\nNational Taiwan University\n\n",
"Hung-Yi Lee \nNational Taiwan University\nNational Taiwan University\nNational Taiwan University\n\n"
] | [
"National Taiwan University\nNational Taiwan University\nNational Taiwan University\n",
"National Taiwan University\nNational Taiwan University\nNational Taiwan University\n",
"National Taiwan University\nNational Taiwan University\nNational Taiwan University\n",
"National Taiwan University\nNational Taiwan University\nNational Taiwan University\n",
"National Taiwan University\nNational Taiwan University\nNational Taiwan University\n",
"National Taiwan University\nNational Taiwan University\nNational Taiwan University\n"
] | [] | Forecasting of multivariate time series data, for instance the prediction of electricity consumption, solar power production, and polyphonic piano pieces, has numerous valuable applications. However, complex and non-linear interdependencies between time steps and series complicate this task. To obtain accurate prediction, it is crucial to model long-term dependency in time series data, which can be achieved by recurrent neural networks (RNNs) with an attention mechanism. The typical attention mechanism reviews the information at each previous time step and selects relevant information to help generate the outputs; however, it fails to capture temporal patterns across multiple time steps. In this paper, we propose using a set of filters to extract time-invariant temporal patterns, similar to transforming time series data into its "frequency domain". Then we propose a novel attention mechanism to select relevant time series, and use its frequency domain information for multivariate forecasting. We apply the proposed model on several real-world tasks and achieve state-of-the-art performance in all of these with a single exception. Our source code is available at https://github.com/gantheory/TPA-LSTM.* indicates equal contribution. | 10.1007/s10994-019-05815-0 | [
"https://arxiv.org/pdf/1809.04206v2.pdf"
] | 52,196,634 | 1809.04206 | e9c69aa24f14f53043bd254f8c6f2f3016466998 |
Temporal Pattern Attention for Multivariate Time Series Forecasting
27 Nov 2018
Shun-Yao Shih shunyaoshih@gmail.com
National Taiwan University
National Taiwan University
National Taiwan University
Fan-Keng Sun
National Taiwan University
National Taiwan University
National Taiwan University
Hung-Yi Lee hungyilee@ntu.edu.tw
National Taiwan University
National Taiwan University
National Taiwan University
Yao Shih
National Taiwan University
National Taiwan University
National Taiwan University
Fan-Keng Sun
National Taiwan University
National Taiwan University
National Taiwan University
Hung-Yi Lee
National Taiwan University
National Taiwan University
National Taiwan University
Temporal Pattern Attention for Multivariate Time Series Forecasting
27 Nov 2018Received: date / Accepted: dateNoname manuscript No. (will be inserted by the editor)
Abstract: Forecasting of multivariate time series data, for instance the prediction of electricity consumption, solar power production, and polyphonic piano pieces, has numerous valuable applications. However, complex and non-linear interdependencies between time steps and series complicate this task. To obtain accurate prediction, it is crucial to model long-term dependency in time series data, which can be achieved by recurrent neural networks (RNNs) with an attention mechanism. The typical attention mechanism reviews the information at each previous time step and selects relevant information to help generate the outputs; however, it fails to capture temporal patterns across multiple time steps. In this paper, we propose using a set of filters to extract time-invariant temporal patterns, similar to transforming time series data into its "frequency domain". Then we propose a novel attention mechanism to select relevant time series, and use its frequency domain information for multivariate forecasting. We apply the proposed model on several real-world tasks and achieve state-of-the-art performance in all of these with a single exception. Our source code is available at https://github.com/gantheory/TPA-LSTM.
Fig. 1: Historical prices of crude oil, gasoline, and lumber. Units are omitted and scales are normalized for simplicity.
Introduction
In everyday life, time series data are everywhere. We observe evolving variables generated from sensors over discrete time steps and organize them into time series data. For example, household electricity consumption, road occupancy rate, currency exchange rate, solar power production, and even music notes can all be seen as time series data. In most cases, the collected data are often multivariate time series (MTS) data, such as the electricity consumption of multiple clients, which are tracked by the local power company. There can exist complex dynamic interdependencies between different series that are significant but difficult to capture and analyze.
Analysts often seek to forecast the future based on historical data. The better the interdependencies among different series are modeled, the more accurate the forecasting can be. For instance, as shown in Figure 1, the price of crude oil heavily influences the price of gasoline, but has a smaller influence on the price of lumber. Thus, given the realization that gasoline is produced from crude oil and lumber is not, we can use the price of crude oil to predict the price of gasoline.
In machine learning, we want the model to automatically learn such interdependencies from data. Machine learning has been applied to time series analysis for both classification and forecasting [G. Zhang and Hu (1998); Zhang (2003); Lai et al. (2018); Qin et al. (2017)]. In classification, the machine learns to assign a label to a time series, for instance evaluating a patient's diagnostic categories by reading values from medical sensors. In forecasting, the machine predicts future time series based on past observed data. For example, precipitation in the next days, weeks, or months can be forecast according to historical measurements. The further ahead we attempt to forecast, the harder it is.
When it comes to MTS forecasting using deep learning, recurrent neural networks (RNNs) [David E. Rumelhart and Williams (1986); J. Werbos (1990); Elman (1990)] are often used. However, one disadvantage of using RNNs in time series analysis is their weakness in managing long-term dependencies, for instance yearly patterns in a daily recorded sequence [Kyunghyun Cho and Bengio (2014)]. The attention mechanism [Luong et al. (2015); Bahdanau et al. (2015)], originally utilized in encoder-decoder networks [Sutskever et al. (2014)], somewhat alleviates this problem, and thus boosts the effectiveness of RNNs [Lai et al. (2018)].
In this paper, we propose temporal pattern attention, a new attention mechanism for MTS forecasting, where we use the term "temporal pattern" to refer to any time-invariant pattern across multiple time steps. The typical attention mechanism identifies the time steps relevant to the prediction and extracts the information from these time steps, which poses obvious limitations for MTS prediction. Consider the example in Figure 1. To predict the value of gasoline, the machine must learn to focus on "crude oil" and ignore "lumber". In temporal pattern attention, instead of selecting the relevant time steps as in the typical attention mechanism, the machine learns to select the relevant time series.
In addition, time series data often entail noticeable periodic temporal patterns, which are critical for prediction. However, periodic patterns spanning multiple time steps are difficult for the typical attention mechanism to identify, as it usually focuses only on a few time steps. In temporal pattern attention, we introduce a convolutional neural network (CNN) [LeCun and Bengio (1995); A. Krizhevsky and Hinton (2012)] to extract temporal pattern information from each individual variable.
The main contributions of this paper are summarized as follows:
- We introduce a new attention concept in which we select the relevant variables as opposed to the relevant time steps. The method is simple and general to apply on RNNs.
- We use toy examples to verify that our attention mechanism enables the model to extract temporal patterns and focus on different time steps for different time series.
- Attested by experimental results on real-world data ranging from periodic and partially linear to non-periodic and non-linear tasks, we show that the proposed attention mechanism achieves state-of-the-art results across multiple datasets.
- The learned CNN filters in our attention mechanism demonstrate interesting and interpretable behavior.
The remainder of this paper is organized as follows. In Section 2 we review related work and in Section 3 we describe background knowledge. Then, in Section 4 we describe the proposed attention mechanism, which we analyze on toy examples in Section 5, after which we present and analyze the experimental results in Section 6. We conclude in Section 7.

Related Work

The most well-known model for linear univariate time series forecasting is the autoregressive integrated moving average (ARIMA) [G. E. Box and Ljung (2015)], which encompasses other autoregressive time series models, including autoregression (AR), moving average (MA), and autoregressive moving average (ARMA). Additionally, linear support vector regression (SVR) [Cao and Tay (2003); Kim (2003)] treats the forecasting problem as a typical regression problem with time-varying parameters. However, these models are mostly limited to linear univariate time series and do not scale well to MTS. To forecast MTS data, vector autoregression (VAR), a generalization of AR-based models, was proposed. VAR is probably the most well-known model in MTS forecasting. Nevertheless, neither AR-based nor VAR-based models capture non-linearity. For that reason, substantial effort has been put into non-linear models for time series forecasting based on kernel methods [Chen et al. (2008)], ensembles [Bouchachia and Bouchachia (2008)], or Gaussian processes [Frigola and Rasmussen (2014)]. Still, these approaches apply predetermined non-linearities and may fail to recognize different forms of non-linearity for different MTS.
Recently, deep neural networks have received a great amount of attention due to their ability to capture non-linear interdependencies. Long short-term memory (LSTM) [Hochreiter and Schmidhuber (1997)], a variant of recurrent neural network, has shown promising results in several NLP tasks and has also been employed for MTS forecasting. Work in this area began with using naive RNNs [J. Connor and Martin (1991)], improved with hybrid models that combined ARIMA and multilayer perceptrons [G. Zhang and Hu (1998); Zhang (2003); Jain and Kumar (2007)], and then most recently progressed to dynamic Boltzmann machines with RNNs [Dasgupta and Osogami (2017)]. Although these models can be applied to MTS, they mainly target univariate or bivariate time series.
To the best of our knowledge, the long- and short-term time-series network (LSTNet) [Lai et al. (2018)] is the first model designed specifically for MTS forecasting with up to hundreds of time series. LSTNet uses CNNs to capture short-term patterns, and LSTM or GRU for memorizing relatively long-term patterns. In practice, however, LSTM and GRU cannot memorize very long-term interdependencies due to training instability and the gradient vanishing problem. To address this, LSTNet adds either a recurrent-skip layer or a typical attention mechanism. Also part of the overall model is traditional autoregression, which helps to mitigate the scale insensitivity of neural networks. Nonetheless, LSTNet has two major shortcomings when compared to our proposed attention mechanism: (1) the skip length of the recurrent-skip layer must be manually tuned, whereas the proposed approach learns the periodic patterns by itself; and (2) the LSTNet model is specifically designed for MTS data with strong periodic patterns, whereas the proposed attention mechanism, as shown in our experiments, is simple and adaptable to various datasets, even non-periodic and non-linear ones.
Preliminaries
In this section, we briefly describe two essential modules related to our proposed model: the RNN module, and the typical attention mechanism.
Recurrent Neural Networks
Given a sequence of information $\{x_1, x_2, \ldots, x_t\}$, where $x_i \in \mathbb{R}^n$, an RNN generally defines a recurrent function, $F$, and calculates $h_t \in \mathbb{R}^m$ for each time step $t$ as

$$h_t = F(h_{t-1}, x_t) \tag{1}$$

where the implementation of the function $F$ depends on what kind of RNN cell is used. Long short-term memory (LSTM) [Hochreiter and Schmidhuber (1997)] cells are widely used, which have a slightly different recurrent function:

$$h_t, c_t = F(h_{t-1}, c_{t-1}, x_t), \tag{2}$$

which is defined by the following equations:

$$i_t = \mathrm{sigmoid}(W_{xi} x_t + W_{hi} h_{t-1}) \tag{3}$$
$$f_t = \mathrm{sigmoid}(W_{xf} x_t + W_{hf} h_{t-1}) \tag{4}$$
$$o_t = \mathrm{sigmoid}(W_{xo} x_t + W_{ho} h_{t-1}) \tag{5}$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_{xg} x_t + W_{hg} h_{t-1}) \tag{6}$$
$$h_t = o_t \odot \tanh(c_t) \tag{7}$$

where $i_t, f_t, o_t \in \mathbb{R}^m$; $W_{xi}, W_{xf}, W_{xo}, W_{xg} \in \mathbb{R}^{m \times n}$; $W_{hi}, W_{hf}, W_{ho}, W_{hg} \in \mathbb{R}^{m \times m}$; and $\odot$ denotes element-wise multiplication.
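For concreteness, here is a minimal NumPy sketch of one LSTM step implementing Equations (2)-(7); the dimensions and the random weight initialization are placeholders for illustration only.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(h_prev, c_prev, x_t, W):
    # Equations (3)-(7): gates computed from the input and the previous state.
    i_t = sigmoid(W["xi"] @ x_t + W["hi"] @ h_prev)   # input gate
    f_t = sigmoid(W["xf"] @ x_t + W["hf"] @ h_prev)   # forget gate
    o_t = sigmoid(W["xo"] @ x_t + W["ho"] @ h_prev)   # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W["xg"] @ x_t + W["hg"] @ h_prev)
    h_t = o_t * np.tanh(c_t)                          # element-wise products
    return h_t, c_t

n, m = 4, 8                       # input and hidden dimensions (toy values)
rng = np.random.default_rng(0)
W = {k: rng.normal(size=(m, n)) for k in ("xi", "xf", "xo", "xg")}
W.update({k: rng.normal(size=(m, m)) for k in ("hi", "hf", "ho", "hg")})
h, c = np.zeros(m), np.zeros(m)
for x_t in rng.normal(size=(10, n)):  # run the cell over a toy input sequence
    h, c = lstm_step(h, c, x_t, W)
```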
Typical Attention Mechanism
In the typical attention mechanism [Luong et al. (2015); Bahdanau et al. (2015)] in an RNN, given the previous states $H = \{h_1, h_2, \ldots, h_{t-1}\}$, a context vector $v_t$ is extracted from the previous states. $v_t$ is a weighted sum of the columns $h_i$ in $H$, which represents the information relevant to the current time step. $v_t$ is further integrated with the present state $h_t$ to yield the prediction.

Assume a scoring function $f : \mathbb{R}^m \times \mathbb{R}^m \to \mathbb{R}$ which computes the relevance between its input vectors. Formally, we have the following formulas to calculate the context vector $v_t$:

$$\alpha_i = \frac{\exp(f(h_i, h_t))}{\sum_{j=1}^{t-1} \exp(f(h_j, h_t))} \tag{8}$$

$$v_t = \sum_{i=1}^{t-1} \alpha_i h_i. \tag{9}$$
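As a quick illustration, here is a minimal NumPy sketch of Equations (8)-(9); the plain dot product used as the scoring function f is one possible choice, since the definition above leaves f abstract.

```python
import numpy as np

def typical_attention(H, h_t, f):
    """Context vector of Equations (8)-(9): softmax over the scores of the
    previous column states h_1, ..., h_{t-1} against the current state h_t."""
    scores = np.array([f(h_i, h_t) for h_i in H.T])  # one score per time step
    alpha = np.exp(scores - scores.max())            # numerically stable softmax
    alpha /= alpha.sum()                             # Equation (8)
    return H @ alpha                                 # Equation (9): weighted column sum

m, t = 8, 16
rng = np.random.default_rng(0)
H = rng.normal(size=(m, t - 1))        # previous hidden states as columns
h_t = rng.normal(size=m)
dot = lambda a, b: a @ b               # a simple dot-product scoring function f
v_t = typical_attention(H, h_t, dot)   # context vector of shape (m,)
```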
Temporal Pattern Attention
While previous work focuses mainly on changing the network architecture of attention-based models via different settings to improve performance on various tasks, we believe there is a critical defect in applying typical attention mechanisms on RNNs for MTS forecasting. The typical attention mechanism selects information relevant to the current time step, and the context vector $v_t$ is the weighted sum of the column vectors of previous RNN hidden states, $H = \{h_1, h_2, \ldots, h_{t-1}\}$.

This design lends itself to tasks in which each time step contains a single piece of information, for example, an NLP task in which each time step corresponds to a single word. If there are multiple variables in each time step, it fails to ignore variables which are noisy in terms of forecasting utility. Moreover, since the typical attention mechanism averages the information across multiple time steps, it fails to detect temporal patterns useful for forecasting. The overview of the proposed model is shown in Figure 2. In the proposed approach, given previous RNN hidden states $H \in \mathbb{R}^{m \times (t-1)}$, the proposed attention mechanism basically attends to its row vectors. The attention weights on rows select those variables that are helpful for forecasting. Since the context vector $v_t$ is now the weighted sum of the row vectors containing the information across multiple time steps, it captures temporal information.
Problem Formulation
In MTS forecasting, given an MTS, $X = \{x_1, x_2, \ldots, x_{t-1}\}$, where $x_i \in \mathbb{R}^n$ represents the observed value at time $i$, the task is to predict the value of $x_{t-1+\Delta}$, where $\Delta$ is a fixed horizon with respect to different tasks. We denote the corresponding prediction as $y_{t-1+\Delta}$, and the ground-truth value as $\hat{y}_{t-1+\Delta} = x_{t-1+\Delta}$. Moreover, for every task, we use only $\{x_{t-w}, x_{t-w+1}, \ldots, x_{t-1}\}$ to predict $x_{t-1+\Delta}$, where $w$ is the window size. This is a common practice [Lai et al. (2018); Qin et al. (2017)], because the assumption is that there is no useful information before the window and the input is thus fixed.
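A small sketch of this windowing convention follows; the array shapes are illustrative.

```python
import numpy as np

def make_samples(X, w, delta):
    """Slice an MTS X of shape (T, n) into (window, target) pairs:
    inputs x_{t-w}, ..., x_{t-1} and target x_{t-1+delta}."""
    inputs, targets = [], []
    for t in range(w, X.shape[0] - delta + 1):
        inputs.append(X[t - w:t])          # the fixed-size window
        targets.append(X[t - 1 + delta])   # the value at the horizon
    return np.stack(inputs), np.stack(targets)

X = np.random.default_rng(0).normal(size=(100, 3))  # toy MTS: T=100 steps, n=3 series
inp, tgt = make_samples(X, w=24, delta=3)
assert inp.shape == (74, 24, 3) and tgt.shape == (74, 3)
```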
Temporal Pattern Detection using CNN
CNN's success lies in no small part in its ability to capture various important signal patterns; as such, we use a CNN to enhance the learning ability of the model by applying CNN filters on the row vectors of $H$. Specifically, we have $k$ filters $C_i \in \mathbb{R}^{1 \times T}$, where $T$ is the maximum length we are paying attention to. If unspecified, we assume $T = w$. Convolutional operations yield $H^C \in \mathbb{R}^{n \times k}$, where $H^C_{i,j}$ represents the convolutional value of the $i$-th row vector and the $j$-th filter. Formally, this operation is given by

$$H^C_{i,j} = \sum_{l=1}^{w} H_{i,(t-w-1+l)} \times C_{j,T-w+l}. \tag{10}$$
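A simplified sketch of Equation (10) under the T = w assumption, where H is restricted to the w most recent hidden states so that each filter spans the whole attended window:

```python
import numpy as np

def temporal_conv(H, C):
    """Equation (10): H is the (n, w) row-wise hidden-state history and C is a
    bank of k 1-D filters of shape (k, T); returns H^C of shape (n, k)."""
    n, w = H.shape
    k, T = C.shape
    assert T == w, "this sketch assumes T = w, as the text does by default"
    HC = np.empty((n, k))
    for i in range(n):            # each variable (row of H)
        for j in range(k):        # each filter
            HC[i, j] = np.sum(H[i] * C[j])  # full-window correlation
    return HC

rng = np.random.default_rng(0)
H = rng.normal(size=(6, 64))      # n = 6 variables, window w = 64
C = rng.normal(size=(32, 64))     # k = 32 filters, as in the experiments
HC = temporal_conv(H, C)          # shape (6, 32)
```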
Proposed Attention Mechanism
We calculate $v_t$ as a weighted sum of the row vectors of $H^C$. Defined below is the scoring function $f : \mathbb{R}^k \times \mathbb{R}^m \to \mathbb{R}$ used to evaluate relevance:

$$f(H^C_i, h_t) = (H^C_i)^\top W_a h_t, \tag{11}$$

where $H^C_i$ is the $i$-th row of $H^C$, and $W_a \in \mathbb{R}^{k \times m}$. The attention weight $\alpha_i$ is obtained as

$$\alpha_i = \mathrm{sigmoid}(f(H^C_i, h_t)). \tag{12}$$

Note that we use the sigmoid activation function instead of softmax, as we expect more than one variable to be useful for forecasting. Completing the process, the row vectors of $H^C$ are weighted by $\alpha_i$ to obtain the context vector $v_t \in \mathbb{R}^k$:

$$v_t = \sum_{i=1}^{n} \alpha_i H^C_i. \tag{13}$$

Then we integrate $v_t$ and $h_t$ to yield the final prediction:

$$h'_t = W_h h_t + W_v v_t, \tag{14}$$
$$y_{t-1+\Delta} = W_{h'} h'_t, \tag{15}$$

where $h_t, h'_t \in \mathbb{R}^m$, $W_h \in \mathbb{R}^{m \times m}$, $W_v \in \mathbb{R}^{m \times k}$, $W_{h'} \in \mathbb{R}^{n \times m}$, and $y_{t-1+\Delta} \in \mathbb{R}^n$.
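Putting Equations (11)-(15) together, here is a minimal NumPy sketch of the attention step; all weights are randomly initialized placeholders, whereas in practice they are trained jointly with the RNN.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def temporal_pattern_attention(HC, h_t, Wa, Wh, Wv, Wout):
    """Equations (11)-(15): score each row of H^C against h_t, weight the rows
    with sigmoid attention, and combine context and state for the output."""
    scores = HC @ (Wa @ h_t)        # (11): one score per variable (row of H^C)
    alpha = sigmoid(scores)         # (12): sigmoid, not softmax
    v_t = alpha @ HC                # (13): weighted sum of the rows, shape (k,)
    h_prime = Wh @ h_t + Wv @ v_t   # (14)
    return Wout @ h_prime           # (15): prediction y_{t-1+Delta}, shape (n,)

n, m, k = 6, 8, 32
rng = np.random.default_rng(0)
HC = rng.normal(size=(n, k))        # output of the CNN stage
h_t = rng.normal(size=m)            # current RNN hidden state
Wa = rng.normal(size=(k, m))
Wh = rng.normal(size=(m, m))
Wv = rng.normal(size=(m, k))
Wout = rng.normal(size=(n, m))
y = temporal_pattern_attention(HC, h_t, Wa, Wh, Wv, Wout)
```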
Analysis of Proposed Attention on Toy Examples
To illustrate the failure of traditional attention mechanisms and the influence of interdependencies, we study the performance of different attention mechanisms on two artificially constructed toy examples. In the first toy example, the $t$-th time step of the $i$-th time series is defined as $\sin(\frac{2\pi i t}{64})$; that is, each time series is a sine wave with a different period. Notice that any two time series are mutually independent in the first toy example, so there are no interdependencies.

The second toy example adds interdependencies to the first by mixing the time series, and thus the $t$-th time step of the $i$-th time series is formulated as

$$\sin\left(\frac{2\pi i t}{64}\right) + \frac{1}{D-1} \sum_{j=1, j \neq i}^{D} \sin\left(\frac{2\pi j t}{64}\right), \tag{16}$$

where $D$ is the number of time series. Both toy examples are visualized in Fig. 3 for $D = 6$.

The results on the two toy examples are shown in Fig. 4. All models in the following analyses are trained with window size $w = 64$, horizon $\Delta = 1$, and a similar amount of parameters.
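The two constructions can be reproduced with a few lines of NumPy; this is a sketch in which D and T are free parameters.

```python
import numpy as np

def toy_series(D, T, mixed=False):
    """First toy example: series i is sin(2*pi*i*t/64). Second toy example
    (Equation 16): each series additionally mixes in the average of all the
    other series, introducing interdependencies."""
    t = np.arange(T)
    base = np.stack([np.sin(2 * np.pi * i * t / 64) for i in range(1, D + 1)])
    if not mixed:
        return base
    mixed_series = np.empty_like(base)
    for i in range(D):
        others = np.delete(base, i, axis=0)              # all series j != i
        mixed_series[i] = base[i] + others.mean(axis=0)  # 1/(D-1) * sum of others
    return mixed_series

independent = toy_series(D=6, T=256)              # no interdependencies
interdependent = toy_series(D=6, T=256, mixed=True)
```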
Failure of traditional attention mechanisms
Intuitively, for the first toy example, the model can accurately predict the next value by memorizing the value that appears exactly one period before. However, we know that different time series have different periods, which means to have a good prediction, the model should be able to look back different numbers of time steps for different series. From this point, it is clear that the failure of traditional attention mechanisms comes from extracting only one previous time step while discounting the information in other time steps. On the other hand, our attention mechanism attends on the features extracted from row vectors of RNN hidden states by CNN filters, which enables the model to select relevant information across multiple time steps.
The aforementioned explanation is verified by the left plot in Figure 4, where we observe that the performance of the LSTM with Luong attention is poor when $D \gg 1$, compared to the others. Notice that all models have a similar amount of parameters, which implies that the LSTM without attention has a larger hidden size when compared to the LSTM with Luong attention. Consequently, the LSTM without attention outperforms the LSTM with Luong attention when $D \gg 1$, because the larger hidden size helps the model to make predictions while the Luong attention is nearly useless. On the contrary, our attention is useful, so the LSTM with our attention is better than the LSTM without attention on average, even though its hidden size is smaller.
Influence of interdependencies
When there are interdependencies in MTS data, it is desirable to leverage the interdependencies to further improve forecasting accuracy. The right plot in Figure 4 shows that both the LSTM with Luong attention and the LSTM without attention do not benefit from the added interdependencies, since the loss values remain the same. On the other hand, the loss of the LSTM with the proposed attention is lower when there are interdependencies, which suggests that our attention successfully utilized the interdependencies to facilitate MTS forecasting.
Experiments and Analysis
In this section, we first describe the datasets upon which we conducted our experiments. Next, we present our experimental results and a visualization of the prediction against LSTNet. Then, we discuss the ablation study. Finally, we analyze in what sense the CNN filters resemble the bases in DFT.
Datasets
To evaluate the effectiveness and generalization ability of the proposed attention mechanism, we used two dissimilar types of datasets: typical MTS datasets and polyphonic music datasets.
The typical MTS datasets are published by [Lai et al. (2018)]; there are four datasets:

- Solar Energy: the solar power production data from photovoltaic plants in Alabama State in 2006.
- Traffic: two years (2015-2016) of data provided by the California Department of Transportation that describes the road occupancy rate (between 0 and 1) on San Francisco Bay area freeways.
- Electricity: a record of the electricity consumption of 321 clients in kWh.
- Exchange Rate: the exchange rates of eight foreign countries (Australia, Britain, Canada, China, Japan, New Zealand, Singapore, and Switzerland) from 1990 to 2016.

These datasets are real-world data that contain both linear and non-linear interdependencies. Moreover, the Solar Energy, Traffic, and Electricity datasets exhibit strong periodic patterns indicating daily or weekly human activities.
According to the authors of LSTNet, all datasets have been split into training (60%), validation (20%), and testing (20%) sets in chronological order. In contrast, the polyphonic music datasets introduced below are much more complicated, in the sense that no apparent linearity or repetitive patterns exist:
- MuseData [Nicolas Boulanger-Lewandowski and Vincent (2012)]: a collection of musical pieces from various classical music composers in MIDI format.
- LPD-5-Cleansed [Hao-Wen Dong and Yang (2018); Raffel (2016)]: 21,425 multi-track piano-rolls that contain drums, piano, guitar, bass, and strings.
To train models on these datasets, we consider each played note as 1 and 0 otherwise (i.e., a musical rest), and set one beat as one time step, as shown in Table 1. Given the played notes of 4 bars consisting of 16 beats, the task is to predict whether each pitch at the next time step is played or not. For the training, validation, and testing sets, we follow the original MuseData separation, which is divided into 524 training pieces, 135 validation pieces, and 124 testing pieces. LPD-5-Cleansed, however, was not split in previous work [Hao-Wen Dong and Yang (2018); Raffel (2016)]; thus we randomly split it into training (80%), validation (10%), and testing (10%) sets. The LPD-5-Cleansed dataset is much larger than the others, so we decided to use a smaller validation and testing set. The statistics of both the typical MTS datasets and the polyphonic music datasets are summarized in Table 1.
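A small sketch of this preprocessing, assuming a piano-roll array with one row per beat and one column per MIDI pitch:

```python
import numpy as np

def music_samples(piano_roll, context=16):
    """Binarize a piano-roll of shape (T, 128) (e.g. velocity per pitch per
    beat) and slice it into (16-beat context, next-beat target) pairs."""
    notes = (piano_roll > 0).astype(np.float32)   # played note -> 1, rest -> 0
    X = np.stack([notes[t - context:t] for t in range(context, notes.shape[0])])
    y = notes[context:]                            # which pitches sound next
    return X, y

roll = np.random.default_rng(0).integers(0, 2, size=(64, 128))  # toy piano-roll
X, y = music_samples(roll)
assert X.shape == (48, 16, 128) and y.shape == (48, 128)
```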
Methods for Comparison
We compared the proposed model with the following methods on the typical MTS datasets:
- AR: standard autoregression model.
- LRidge: VAR model with L2-regularization: the most popular model for MTS forecasting.
- LSVR: VAR model with SVR objective function [V. Vapnik (1997)].
- GP: Gaussian process model [Frigola-Alcade (2015); S. Roberts and Aigrain (2011)].
- LSTNet-Skip: LSTNet with recurrent-skip layer.
- LSTNet-Attn: LSTNet with attention layer.

AR, LRidge, LSVR, and GP are traditional baseline methods, whereas LSTNet-Skip and LSTNet-Attn are state-of-the-art methods based on deep neural networks. However, as both the traditional baseline methods and LSTNet are ill-suited to polyphonic music datasets due to their non-linearity and the lack of periodicity, we use LSTM and LSTM with Luong attention as the baseline models to evaluate the proposed model on polyphonic music datasets:

- LSTM: RNN cells as introduced in Section 3.
- LSTM with Luong attention: LSTM with an attention mechanism whose scoring function is $f(h_i, h_t) = h_i^\top W h_t$, where $W \in \mathbb{R}^{m \times m}$ [Luong et al. (2015)].
Model Setup and Parameter Settings
For all experiments, we used LSTM units in our RNN models, and fixed the number of CNN filters at 32. Also, inspired by LSTNet, we included an autoregression component in our model when training and testing on typical MTS datasets.
For typical MTS datasets, we conducted a grid search over tunable parameters, as done with LSTNet. Specifically, on Solar Energy, Traffic, and Electricity, the range for the window size $w$ was {24, 48, 96, 120, 144, 168}, the range for the number of hidden units $m$ was {25, 45, 70}, and the range for the step of the exponential learning rate decay with a rate of 0.995 was {200, 300, 500, 1000}. On Exchange Rate, these three parameters were fixed at 30, 6, and 120, respectively. Two types of data normalization were also viewed as part of the grid search: one normalized each time series by its own maximum value, and the other normalized every time series by the maximum value over the whole dataset. Lastly, we used the absolute loss function and Adam with a $10^{-3}$ learning rate on Solar Energy, Traffic, and Electricity, and a $3 \cdot 10^{-3}$ learning rate on Exchange Rate. For the other compared methods mentioned in the previous subsection, the parameters were identical to the numbers reported in the LSTNet paper [Lai et al. (2018)].
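The grid search itself is straightforward; the sketch below iterates over the stated ranges, with train_and_validate standing in as a hypothetical placeholder for a full training run (it is not part of the released code).

```python
import random
from itertools import product

def train_and_validate(**config):
    # Hypothetical stand-in for a full training run; returns a validation loss.
    return random.random()

windows = [24, 48, 96, 120, 144, 168]
hidden_units = [25, 45, 70]
decay_steps = [200, 300, 500, 1000]
normalizations = ["per-series max", "global max"]

best_score, best_cfg = float("inf"), None
for w, m, step, norm in product(windows, hidden_units, decay_steps, normalizations):
    score = train_and_validate(w=w, m=m, decay_step=step, normalization=norm)
    if score < best_score:
        best_score = score
        best_cfg = dict(w=w, m=m, decay_step=step, normalization=norm)
print(best_cfg)
```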
For the models used for the polyphonic music datasets, including the baselines and proposed models in the following subsections, we used 3 layers for all RNNs, as done in [Chuan and Herremans (2018)], and fixed the number of trainable parameters to around $5 \cdot 10^6$ by adjusting the number of LSTM units, in order to fairly compare different models. In addition, we used the Adam optimizer with a $10^{-5}$ learning rate and a cross-entropy loss function.
Evaluation Metrics
On typical MTS datasets, since we compared the proposed model with LSTNet, we followed the same evaluation metrics: RAE, RSE, and CORR. The first metric is the relative absolute error (RAE), which is defined as
$$\mathrm{RAE} = \frac{\sum_{t=t_0}^{t_1} \sum_{i=1}^{n} |y_{t,i} - \hat{y}_{t,i}|}{\sum_{t=t_0}^{t_1} \sum_{i=1}^{n} |\hat{y}_{t,i} - \overline{\hat{y}_{t_0:t_1,1:n}}|}. \tag{17}$$

The next metric is the root relative squared error (RSE):

$$\mathrm{RSE} = \frac{\sqrt{\sum_{t=t_0}^{t_1} \sum_{i=1}^{n} (y_{t,i} - \hat{y}_{t,i})^2}}{\sqrt{\sum_{t=t_0}^{t_1} \sum_{i=1}^{n} (\hat{y}_{t,i} - \overline{\hat{y}_{t_0:t_1,1:n}})^2}}, \tag{18}$$

and finally the third metric is the empirical correlation coefficient (CORR):

$$\mathrm{CORR} = \frac{1}{n} \sum_{i=1}^{n} \frac{\sum_{t=t_0}^{t_1} (y_{t,i} - \overline{y_{t_0:t_1,i}})(\hat{y}_{t,i} - \overline{\hat{y}_{t_0:t_1,i}})}{\sqrt{\sum_{t=t_0}^{t_1} (y_{t,i} - \overline{y_{t_0:t_1,i}})^2 \sum_{t=t_0}^{t_1} (\hat{y}_{t,i} - \overline{\hat{y}_{t_0:t_1,i}})^2}}, \tag{19}$$
where $y, \hat{y}$ are defined in Section 4.1, $\hat{y}_t, \forall t \in [t_0, t_1]$ is the label of the testing data, and $\overline{y}$ denotes the mean of the set $y$. RAE and RSE both disregard data scale and are normalized versions of the mean absolute error (MAE) and the root mean square error (RMSE), respectively. For RAE and RSE, the lower the better, whereas for CORR, the higher the better. To decide which model is better on polyphonic music datasets, we use validation loss (negative log-likelihood), precision, recall, and F1 score as measurements, which are widely used in work on polyphonic music generation [Nicolas Boulanger-Lewandowski and Vincent (2012); Chuan and Herremans (2018)].
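For reference, the three metrics above can be written compactly in NumPy; this is a sketch in which y is the prediction and y_hat the ground truth, following the notation above.

```python
import numpy as np

def rae(y, y_hat):
    """Relative absolute error, Equation (17); y and y_hat have shape (T, n)."""
    return np.abs(y - y_hat).sum() / np.abs(y_hat - y_hat.mean()).sum()

def rse(y, y_hat):
    """Root relative squared error, Equation (18)."""
    return np.sqrt(((y - y_hat) ** 2).sum()) / np.sqrt(((y_hat - y_hat.mean()) ** 2).sum())

def corr(y, y_hat):
    """Empirical correlation coefficient, Equation (19), averaged over series."""
    yc = y - y.mean(axis=0)            # center each series over time
    hc = y_hat - y_hat.mean(axis=0)
    num = (yc * hc).sum(axis=0)
    den = np.sqrt((yc ** 2).sum(axis=0) * (hc ** 2).sum(axis=0))
    return (num / den).mean()

rng = np.random.default_rng(0)
y_true = rng.normal(size=(50, 3))
y_pred = y_true + 0.1 * rng.normal(size=(50, 3))
print(rae(y_pred, y_true), rse(y_pred, y_true), corr(y_pred, y_true))
```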
Results on Typical MTS Datasets
On typical MTS datasets, we chose the best model on the validation set using RAE/RSE/CORR as the metric for the testing set. The numerical results are tabulated in Table 2, where the first two tables use the RAE metric, followed by two tables with the RSE metric, and another two tables using the CORR metric. The tables show that the proposed model outperforms all other methods on all datasets, horizons, and metrics, with a single exception. Generally speaking, the larger the D in Table 1, the better our model performs. Also, our models are able to deal with a wide range of dataset sizes, from the smallest 534 KB Exchange Rate dataset to the largest 172 MB Solar Energy dataset. In these results, the proposed model consistently demonstrates its superiority for MTS forecasting.

Table 2: Results on typical MTS datasets using RAE, RSE, and CORR as metrics. Best performance in boldface; second best performance is underlined. We report the mean and standard deviation of our model over ten runs. All numbers besides the results of our model are taken from the LSTNet paper [Lai et al. (2018)].
In the comparison to LSTNet-Skip and LSTNet-Attn, the previous state-of-the-art methods, the proposed model exhibits superior performance, especially on Traffic and Electricity, which contain the largest number of time series. Moreover, on Exchange Rate, where no repetitive pattern exists, the proposed model is still the best overall; the performance of LSTNet-Skip and LSTNet-Attn falls behind that of traditional methods, including AR, LRidge, LSVR, and GP. The proposed model is outperformed by LRidge on Exchange Rate with the 6-day horizon, because linear models are sufficient for this dataset, and deep learning is redundant. In Figure 5 we also visualize and compare the predictions of the proposed model and LSTNet-Skip.
In summary, the proposed model achieves state-of-the-art performance on both periodic and non-periodic MTS datasets.
Results on Polyphonic Music Datasets
In this subsection, to further verify the efficacy and generalization ability of the proposed model, we describe experiments conducted on the polyphonic music datasets; the results are shown in Figure 6 and Table 3. We compared three RNN models: LSTM, LSTM with Luong attention, and LSTM with the proposed attention mechanism. Figure 6 shows the validation loss across training epochs, and in Table 3, we use the models with the lowest validation loss to calculate precision, recall, and F1 score on the testing set.

Table 3: Precision, recall, and F1 score of different models on the polyphonic music datasets.
From the results, we first verify our claim that the typical attention mechanism does not work on such tasks, as under similar hyperparameters and trainable weights, LSTM and the proposed model outperform such attention mechanisms. In addition, the proposed model also learns more effectively compared to LSTM throughout the learning process and yields better performance in terms of precision, recall, and F1 score.

Fig. 7: Magnitude comparison of (1) DFT of CNN filters trained on Traffic with a 3-hour horizon, and (2) every window of the Traffic dataset. To make the figure more intuitive, the unit of the horizontal axis is the period.
Analysis of CNN Filters
DFT is a variant of the Fourier transform (FT) which handles equally-spaced samples of a signal in time. In the field of time series analysis, there is a wide body of work that utilizes FT or DFT to reveal important characteristics in time series [N. E. Huang and Liu (1998); Bloomfield (1976)]. In our case, since the MTS data is also equally-spaced and discrete, we can apply DFT to analyze it. However, in MTS data there is more than one time series, so we naturally average the magnitude of the frequency components of every time series and arrive at a single frequency-domain representation. We denote this the average discrete Fourier transform (avg-DFT). The single frequency-domain representation reveals the prevailing frequency components of the MTS data. For instance, it is reasonable to assume a notable 24-hour oscillation in Figure 5, which is verified by the avg-DFT of the Traffic dataset shown in Figure 7.
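A sketch of avg-DFT on a toy signal with a 24-hour period follows; the helper names are ours, not from the released code.

```python
import numpy as np

def avg_dft(series):
    """Average discrete Fourier transform: take the magnitude spectrum of each
    series (or each CNN filter) along its last axis and average them into one
    frequency-domain representation, as used for Figure 7."""
    spectra = np.abs(np.fft.rfft(series, axis=-1))
    return spectra.mean(axis=0)

rng = np.random.default_rng(0)
t = np.arange(168)                                    # one week of hourly steps
data = np.stack([np.sin(2 * np.pi * t / 24 + p) for p in rng.normal(size=5)])
spectrum = avg_dft(data)
freqs = np.fft.rfftfreq(t.size, d=1.0)                # cycles per hour
peak_period = 1.0 / freqs[spectrum[1:].argmax() + 1]  # skip the DC component
print(peak_period)                                    # ~24 hours for this toy signal
```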
Ablation Study
In order to verify that the above improvement comes from each added component rather than a specific set of hyperparameters, we conducted an ablation study on the Solar Energy, Traffic, Electricity, and MuseData datasets. There were two main settings: one controlling how we attend to hidden states, H, of RNN and the other controlling how we integrate the scoring function f into the proposed model, or even disable the function. First, in the proposed method, we let the model attend to values of various filters on each position (H C i ); we can also consider attending to values of the same filters at various positions ((H C ) i ) or row vectors of H (H i ). These three different approaches correspond to the column headers in Table 4: "Position", "Filter", and "Without CNN". Second, whereas in the typical attention mechanism, softmax is usually used on the output value of scoring function f to extract the most relevant information, we use sigmoid as our activation function. Therefore, we compare these two different functions. Another possible structure for forecasting is to concatenate all previous hidden states and let the model automatically learn which values are important. Taking these two groups of settings into consideration, we trained models with all combinations of possible structures on these four datasets.
The MuseData results show that the model with sigmoid activation and attention on H C i (position) is clearly the best, which suggests that the proposed model is reasonably effective for forecasting. No matter which proposed component is removed from the model, performance drops. For example, using softmax instead of sigmoid raises the negative log-likelihood from 0.04878 to 0.04931; we obtain a even worse model with a negative log-likelihood of 0.4987 if we do not use CNN filters. In addition, we note no significant improvement between the proposed model and that model using softmax on the first three datasets in Table 4: Solar Energy, Traffic, and Electricity. This is not surprising, given our motivation for using sigmoid, as explained in Section 4.3. Originally, we expected CNN filters to find basic patterns and expected the sigmoid function to help the model to combine these patterns into one that helps. However, due to the strongly periodic nature of these three datasets, it is possible that using a small number of basic patterns is sufficient for good prediction. Overall, however, the proposed model is more general and yields stable and competitive results across different datasets.
Conclusions
In this paper, we focus on MTS forecasting and propose a novel temporal pattern attention mechanism which removes the limitation of typical attention mechanisms on such tasks. We allow the attention dimension to be feature-wise in order for the model to learn interdependencies among multiple variables not only within the same time step but also across all previous times and series. Our experiments on both toy examples and real-world datasets strongly support this idea and show that the proposed model achieves state-of-the-art results. In addition, the visualization of the filters verifies our motivation in a way that is more understandable to humans.
Fig. 2: Proposed
Fig. 3: Visualization of the first toy example without interdependencies (left) and the second toy example with interdependencies (right) for D = 6, which means that there are 6 time series in each example.
Fig. 4: Mean absolute loss in log10 of the first toy example without interdependencies (left) and the second toy example with interdependencies (right). The baseline indicates the loss if all predicted values are zero.
Typical MTS datasets are published by [Lai et al.(2018)Lai, Chang, Yang, and Liu]; there are four datasets:

- Solar Energy 2: the solar power production data from photovoltaic plants in Alabama State in 2006.
- Traffic 3: two years (2015-2016) of data provided by the California Department of Transportation that describes the road occupancy rate (between 0 and 1) on San Francisco Bay area freeways.
- Electricity 4: a record of the electricity consumption of 321 clients in kWh.
- Exchange Rate: the exchange rates of eight foreign countries (Australia, Britain, Canada, China, Japan, New Zealand, Singapore, and Switzerland) from 1990 to 2016.
- LRidge: VAR model with L2-regularization: the most popular model for MTS forecasting (a minimal sketch is given below).
- LSVR: VAR model with SVR objective function [V. Vapnik(1997)].
- GP: Gaussian process model [Frigola-Alcade(2015), S. Roberts and Aigrain(2011)].
- LSTNet-Skip: LSTNet with recurrent-skip layer.
- LSTNet-Attn: LSTNet with attention layer.

AR, LRidge, LSVR, and GP are traditional baseline methods, whereas LSTNet-Skip and LSTNet-Attn are state-of-the-art methods based on deep neural networks.
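For reference, the LRidge baseline in the list above admits a compact closed-form implementation; the following numpy sketch fits a one-step vector autoregression with an L2 penalty. The function names, lag order, and penalty strength are illustrative assumptions, not the evaluation code.

```python
import numpy as np

def fit_var_ridge(X, order=3, lam=1.0):
    """Fit y_t = W [y_{t-1}; ...; y_{t-order}; 1] with an L2 penalty (LRidge).

    X: (T, D) multivariate series. Returns weights of shape (order*D + 1, D).
    """
    T, D = X.shape
    lags = np.hstack([X[order - k - 1:T - k - 1] for k in range(order)])
    Z = np.hstack([lags, np.ones((T - order, 1))])   # design matrix with bias
    Y = X[order:]
    # Closed-form ridge solution: (Z'Z + lam*I)^{-1} Z'Y
    return np.linalg.solve(Z.T @ Z + lam * np.eye(Z.shape[1]), Z.T @ Y)

def predict_next(X, W, order=3):
    """One-step-ahead forecast for all D series from the last observations."""
    z = np.hstack([X[-k - 1] for k in range(order)] + [np.ones(1)])
    return z @ W
```

Calling predict_next(X, fit_var_ridge(X)) then gives the one-step-ahead forecast for all series at once.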
Fig. 5: Prediction results for proposed model and LSTNet-Skip on Traffic testing set with 3-hour horizon. Proposed model clearly yields better forecasts around the flat line after the peak and in the valley.
Fig. 6: Validation loss under different training epochs on MuseData (left) and LPD-5-Cleansed (right).
Fig. 8: Two different CNN filters trained on Traffic with a 3-hour horizon, which detect different periods of temporal patterns.
Table 1: Statistics of all datasets, where L is the length of the time series, D is the number of time series, S is the sampling spacing, and B is the size of the dataset in bytes. MuseData and LPD-5-Cleansed both have various-length time series since the length of music pieces varies.

Dataset          L                D    S           B
Solar Energy     52,560           137  10 minutes  172 M
Traffic          17,544           862  1 hour      130 M
Electricity      26,304           321  1 hour      91 M
Exchange Rate    7,588            8    1 day       534 K
MuseData         216-102,552      128  1 beat      4.9 M
LPD-5-Cleansed   1,072-1,917,952  128  1 beat      1.7 G
Table 4: Ablation study. The evaluation measure for Solar Energy, Traffic, and Electricity is RSE, and negative log-likelihood for MuseData. On each corpus, bold text represents the best and underlined text represents the second best.
Source: https://www.eia.gov and https://www.investing.com
2 http://www.nrel.gov/grid/solar-power-data.html
3 http://pems.dot.ca.gov
4 https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014
A. Krizhevsky and Hinton (2012). Krizhevsky A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, pp 1097-1105.
Bahdanau et al. (2015). Bahdanau D, Cho K, Bengio Y (2015) Neural machine translation by jointly learning to align and translate. ICLR.
Bloomfield (1976). Bloomfield P (1976) Fourier Analysis of Time Series: An Introduction. John Wiley.
Bouchachia and Bouchachia (2008). Bouchachia A, Bouchachia S (2008) Ensemble learning for time series prediction. Proceedings of the 1st International Workshop on Nonlinear Dynamics and Synchronization.
Cao and Tay (2003). Cao LJ, Tay FEH (2003) Support vector machine with adaptive parameters in financial time series forecasting. IEEE Transactions on Neural Networks, pp 1506-1518.
Chen et al. (2008). Chen S, Wang XX, Harris CJ (2008) NARX-based nonlinear system identification using orthogonal least squares basis hunting. IEEE Transactions on Control Systems, pp 78-84.
Chuan and Herremans (2018). Chuan CH, Herremans D (2018) Modeling temporal tonal relations in polyphonic music through deep networks with a novel image-based representation. URL https://www.aaai.org/ocs/index.php/AAAI/AAAI18/paper/view/16679
Dasgupta and Osogami (2017). Dasgupta S, Osogami T (2017) Nonlinear dynamic Boltzmann machines for time-series prediction.
David E. Rumelhart and Williams (1986). Rumelhart DE, Hinton GE, Williams RJ (1986) Learning representations by back-propagating errors. Nature, pp 533-536.
Elman (1990). Elman JL (1990) Finding structure in time. Cognitive Science, pp 179-211.
Frigola and Rasmussen (2014). Frigola R, Rasmussen CE (2014) Integrated pre-processing for Bayesian nonlinear system identification with Gaussian processes. IEEE Conference on Decision and Control, pp 552-560.
Frigola-Alcade (2015). Frigola-Alcade R (2015) Bayesian time series learning with Gaussian processes. PhD thesis, University of Cambridge.
G. E. Box and Ljung (2015). Box GE, Jenkins GM, Ljung GM (2015) Time Series Analysis: Forecasting and Control. John Wiley & Sons.
G. Zhang and Hu (1998). Zhang G, Patuwo BE, Hu MY (1998) Forecasting with artificial neural networks: The state of the art. International Journal of Forecasting, pp 35-62.
Hao-Wen Dong and Yang (2018). Dong HW, Hsiao WY, Yang LC, Yang YH (2018) MuseGAN: Multi-track sequential generative adversarial networks for symbolic music generation and accompaniment.
Hochreiter and Schmidhuber (1997). Hochreiter S, Schmidhuber J (1997) Long short-term memory. Neural Computation 9(8):1735-1780. DOI 10.1162/neco.1997.9.8.1735, URL https://doi.org/10.1162/neco.1997.9.8.1735
J. Connor and Martin (1991). Connor J, Atlas LE, Martin DR (1991) Recurrent networks and NARMA modeling. Advances in Neural Information Processing Systems, pp 301-308.
Jain and Kumar (2007). Jain A, Kumar AM (2007) Hybrid neural network models for hydrologic time series forecasting. Applied Soft Computing 7(2):585-592.
J. Werbos (1990). Werbos PJ (1990) Backpropagation through time: what it does and how to do it. Proceedings of the IEEE, pp 1550-1560.
Kim (2003). Kim KJ (2003) Financial time series forecasting using support vector machines. Neurocomputing 55(1):307-319.
Kyunghyun Cho and Bengio (2014). Cho K, van Merrienboer B, Bahdanau D, Bengio Y (2014) On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
Lai et al. (2018). Lai G, Chang WC, Yang Y, Liu H (2018) Modeling long- and short-term temporal patterns with deep neural networks. SIGIR, pp 95-104.
LeCun and Bengio (1995). LeCun Y, Bengio Y (1995) Convolutional networks for images, speech, and time series. The Handbook of Brain Theory and Neural Networks.
Luong et al. (2015). Luong T, Pham H, Manning CD (2015) Effective approaches to attention-based neural machine translation. Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp 1412-1421.
N.E. Huang and Liu (1998). Huang NE, Shen Z, Long SR, Wu MC, Shih HH, Zheng Q, Yen NC, Tung CC, Liu HH (1998) The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc Roy Soc London A 454:903-995.
Nicolas Boulanger-Lewandowski and Vincent (2012). Boulanger-Lewandowski N, Bengio Y, Vincent P (2012) Modeling temporal dependencies in high-dimensional sequences: Application to polyphonic music generation and transcription.
Qin et al. (2017). Qin Y, Song D, Cheng H, Cheng W, Jiang G, Cottrell GW (2017) A dual-stage attention-based recurrent neural network for time series prediction. In: IJCAI'17, pp 2627-2633. URL http://dl.acm.org/citation.cfm?id=3172077.3172254
Raffel (2016). Raffel C (2016) Learning-based methods for comparing sequences, with applications to audio-to-MIDI alignment and matching. PhD thesis.
Rippel et al. (2015). Rippel O, Snoek J, Adams RP (2015) Spectral representations for convolutional neural networks. NIPS, pp 2449-2457.
S. Roberts and Aigrain (2011). Roberts S, Osborne M, Ebden M, Reece S, Gibson N, Aigrain S (2011) Gaussian processes for time-series modelling. Phil Trans R Soc A.
Sutskever et al. (2014). Sutskever I, Vinyals O, Le QV (2014) Sequence to sequence learning with neural networks. Advances in Neural Information Processing Systems, pp 3104-3112.
V. Vapnik (1997). Golowich SE, Smola A, Vapnik V (1997) Support vector method for function approximation, regression estimation, and signal processing. Advances in Neural Information Processing Systems, pp 281-287.
Zhang (2003). Zhang GP (2003) Time series forecasting using a hybrid ARIMA and neural network model. Neurocomputing, pp 159-175.
| [
"https://github.com/gantheory/TPA-LSTM.*"
] |
[
"Evaluating Scoped Meaning Representations",
"Evaluating Scoped Meaning Representations"
] | [
"Rik Van Noord r.i.k.van.noord@rug.nl \nCLCG\nUniversity of Groningen\n\n",
"Lasha Abzianidze l.abzianidze@rug.nl \nCLCG\nUniversity of Groningen\n\n",
"Hessel Haagsma hessel.haagsma@rug.nl \nCLCG\nUniversity of Groningen\n\n",
"Johan Bos johan.bos@rug.nl \nCLCG\nUniversity of Groningen\n\n"
] | [
"CLCG\nUniversity of Groningen\n",
"CLCG\nUniversity of Groningen\n",
"CLCG\nUniversity of Groningen\n",
"CLCG\nUniversity of Groningen\n"
] | [] | Semantic parsing offers many opportunities to improve natural language understanding. We present a semantically annotated parallel corpus for English, German, Italian, and Dutch where sentences are aligned with scoped meaning representations in order to capture the semantics of negation, modals, quantification, and presupposition triggers. The semantic formalism is based on Discourse Representation Theory, but concepts are represented by WordNet synsets and thematic roles by VerbNet relations. Translating scoped meaning representations to sets of clauses enables us to compare them for the purpose of semantic parser evaluation and checking translations. This is done by computing precision and recall on matching clauses, in a similar way as is done for Abstract Meaning Representations. We show that our matching tool for evaluating scoped meaning representations is both accurate and efficient. Applying this matching tool to three baseline semantic parsers yields F-scores between 43% and 54%. A pilot study is performed to automatically find changes in meaning by comparing meaning representations of translations. This comparison turns out to be an additional way of (i) finding annotation mistakes and (ii) finding instances where our semantic analysis needs to be improved. | null | [
"https://www.aclweb.org/anthology/L18-1267.pdf"
] | 3,533,173 | 1802.08599 | 2f79a2e2bd41438497663307010f1cccf3b8f4da |
Evaluating Scoped Meaning Representations
Rik Van Noord r.i.k.van.noord@rug.nl
CLCG
University of Groningen
Lasha Abzianidze l.abzianidze@rug.nl
CLCG
University of Groningen
Hessel Haagsma hessel.haagsma@rug.nl
CLCG
University of Groningen
Johan Bos johan.bos@rug.nl
CLCG
University of Groningen
parallel corpus; semantic annotation; discourse representation structure; evaluation; semantic scope
Semantic parsing offers many opportunities to improve natural language understanding. We present a semantically annotated parallel corpus for English, German, Italian, and Dutch where sentences are aligned with scoped meaning representations in order to capture the semantics of negation, modals, quantification, and presupposition triggers. The semantic formalism is based on Discourse Representation Theory, but concepts are represented by WordNet synsets and thematic roles by VerbNet relations. Translating scoped meaning representations to sets of clauses enables us to compare them for the purpose of semantic parser evaluation and checking translations. This is done by computing precision and recall on matching clauses, in a similar way as is done for Abstract Meaning Representations. We show that our matching tool for evaluating scoped meaning representations is both accurate and efficient. Applying this matching tool to three baseline semantic parsers yields F-scores between 43% and 54%. A pilot study is performed to automatically find changes in meaning by comparing meaning representations of translations. This comparison turns out to be an additional way of (i) finding annotation mistakes and (ii) finding instances where our semantic analysis needs to be improved.
Introduction
Semantic parsing is the task of assigning meaning representations to natural language expressions. Informally speaking, a meaning representation describes who did what to whom, when, and where, and to what extent this is the case or not. The availability of open-domain, wide-coverage semantic parsers has the potential to add new functionality, such as detecting contradictions, verifying translations, and getting more accurate search results. Current research on open-domain semantic parsing focuses on supervised learning methods, using large semantically annotated corpora as training data. However, there are not many annotated corpora available. We present a parallel corpus annotated with formal meaning representations for English, Dutch, German, and Italian, and a way to evaluate the quality of machine-generated meaning representations by comparing them to gold standard annotations. Our work shows many similarities with recent annotation and parsing efforts around Abstract Meaning Representations (AMR; Banarescu et al., 2013), in that we abstract away from syntax, use first-order meaning representations, and use an adapted version of SMATCH for evaluation. However, we deviate from AMR on several points: meanings are represented by scoped meaning representations (arriving at a more linguistically motivated treatment of modals, negation, presupposition, and quantification), and the nonlogical symbols that we use are grounded in WordNet (concepts) and VerbNet (thematic roles), rather than PropBank (Palmer et al., 2005). We also provide a syntactic analysis in the annotated corpus, in order to derive the semantic analyses in a compositional way. We make the following contributions:
• A meaning representation with explicit scopes that combines WordNet and VerbNet with elements of formal logic (Section 2).
• A gold standard annotated parallel corpus of formal meaning representations for four languages (Section 3).
• A tool that compares two scoped meaning representations for the purpose of evaluation (Section 4 and Section 5).
Scoped Meaning Representations
Discourse Representation Structures
The backbone of the meaning representations in our annotated corpus is formed by the Discourse Representation Structures (DRS) of Discourse Representation Theory (Kamp and Reyle, 1993). Our version of DRS integrates WordNet senses (Fellbaum, 1998), adopts a neo-Davidsonian analysis of events employing VerbNet roles (Bonial et al., 2011), and includes an extensive set of comparison operators. More formally, a DRS is an ordered pair of a set of variables (discourse referents) and a set of conditions. There are basic and complex conditions. Terms are either variables or constants, where the latter ones are used to account for indexicals (Bos, 2017). Basic conditions are defined as follows:
• If W is a symbol denoting a WordNet concept and x is a term, then W(x) is a basic condition;
• If V is a symbol denoting a thematic role and x and y are terms, then V(x,y) is a basic condition;
• If x and y are terms, then x=y, x≠y, x∼y, x<y, x≤y, x≺y, and x y are basic conditions formed with comparison operators.
WordNet concepts are represented as word.POS.SenseNum, denoting a unique synset within WordNet. Thematic roles, including the VerbNet roles, always have two arguments and start with an uppercase character. Complex conditions introduce scopes in the meaning representation. They are defined using logical operators as follows:
Figure 1: Examples of PMB documents with their scoped meaning representations and the corresponding clausal forms. The first two structures are basic DRSs, while the last one is a segmented DRS (its labelled sub-DRSs k1 and k2 are linked by the discourse relation CONTINUATION(k1, k2)).
• If B is a DRS, then ¬B, ♦B, and □B are complex conditions;
• If x is a variable, and B is a DRS, then x:B is a complex condition;
• If B and B' are DRSs, then B⇒B' and B∨B' are complex conditions.
Besides basic DRSs, we also have segmented DRSs, following Asher (1993) and Asher and Lascarides (2003). Hence, DRSs are formally defined as follows:
• If D is a (possibly empty) set of discourse referents, and C a (possibly empty) set of DRS-conditions, then <D,C> is a (basic) DRS;
• If B is a (basic) DRS, and B' a DRS, then B↓B' is a (segmented) DRS;
• If U is a set of labelled DRSs, and R a set of discourse relations, then <U,R> is a (segmented) DRS.
DRSs can be visualized in different ways. While the compact linear format saves space, the box notation increases readability. In this paper we use the latter notation. Examples of DRSs in the box notation are presented in Figure 1. However, for evaluation and comparison purposes, we convert a DRS into a flat clausal form, i.e. a set of clauses. This is carried out by using the labels for DRSs introduced in Venhuizen (2015) and Venhuizen et al. (2018), and breaking down the recursive structure of a DRS by assigning each condition the label of the DRS in which it appears. Let t, t', and t'' be meta-variables ranging over DRSs or terms. Let C be a set of WordNet concepts, T a set of thematic roles, and O the set of DRS operators (REF, NOT, POS, NEC, EQU, NEQ, APX, LES, LEQ, TPR, TAB, IMP, DIS, PRP, DRS). The resulting clauses are then of the form t R t' or t R t' t'' where R ∈ C ∪ T ∪ O. The result of translating DRSs to sets of clauses is shown in Figure 1. In a clausal form, it is assumed that different variables are represented with different variable names and vice versa. Due to this, before translating a DRS to a clausal form, different discourse referents in the DRS must be given different variable names. This assumption significantly simplifies the matching process between clausal forms (Section 4) and makes it possible to recover the original box notation of a DRS from its clausal form.
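To illustrate the translation just described, here is a toy Python sketch that flattens a heavily simplified DRS into labelled clauses; the dictionary encoding and the fact that only the NOT operator is handled are illustrative assumptions, not the PMB/Boxer implementation.

```python
import itertools

_fresh = itertools.count()

def drs_to_clauses(drs, label=None):
    """Flatten a toy DRS into labelled clauses (cf. Figure 1).

    A DRS is a dict {'refs': [...], 'conds': [...]}; a condition is either
    (symbol, args) for a basic condition or ('NOT', sub_drs) for a complex one.
    """
    if label is None:
        label = "b%d" % next(_fresh)
    clauses = [(label, "REF", ref) for ref in drs["refs"]]
    for cond in drs["conds"]:
        if cond[0] == "NOT":               # complex condition opens a new box
            sub = "b%d" % next(_fresh)
            clauses.append((label, "NOT", sub))
            clauses += drs_to_clauses(cond[1], sub)
        else:                              # basic condition: concept or role
            symbol, args = cond
            clauses.append((label, symbol) + tuple(args))
    return clauses

# "He did not smile.": a NOT-box nested inside the outer box.
drs = {"refs": ["x1"],
       "conds": [("male n.02", ("x1",)),
                 ("NOT", {"refs": ["e1"],
                          "conds": [("smile v.01", ("e1",)),
                                    ("Agent", ("e1", "x1"))]})]}
for clause in drs_to_clauses(drs):
    print(*clause)
```

Running the sketch prints clauses such as b0 NOT b1 and b1 smile v.01 e1, mirroring the labelled clausal forms shown in Figure 1.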
Comparing DRSs to AMRs
Since DRSs in a clausal form come close to the triple notation of AMRs, and both aim to model the meaning of natural language expressions, it is instructive to compare these two meaning representations. The main difference between AMRs and DRSs is that the latter have explicit scopes (boxes) and scopal operators such as negation. Due to the presence of scope in DRSs, their clauses are more complex than AMR triples. The length of DRS clauses varies from three to four, in contrast to the constant length of AMR triples. Additionally, DRS clauses contain two different types of variables, for scopes and discourse referents, whereas AMR triples have just one type. Unlike AMRs, DRSs model tense. In general, the tense-related information is encoded in a clausal form with three additional clauses, which express a WordNet concept, a semantic role, and a comparison operator. In order to give an intuition about the diversity of clauses in DRSs, Table 1 shows the distribution of various types of clauses in a corpus of DRSs (see Section 3). Since every logical operator carries a scope, their number represents a lower bound on the number of scopes in the meaning representations. In addition to logical operators, scopes are introduced by presupposition triggers like proper names or pronouns.
To make a meaningful comparison between AMRs and DRSs in terms of size, we compare the DRSs of 250,000 English sentences from the Parallel Meaning Bank (PMB; Abzianidze et al., 2017) to AMRs of the same sentences, produced by the state-of-the-art AMR parser from van Noord and Bos (2017). Statistics of the comparison are shown in Figure 2. On average, DRSs are about twice as large as AMRs, in terms of the number of clauses as well as the number of unique variables. This is obviously due to the explicit presence of scope in the meaning representation. However, for both meaning representations the number of clauses and variables increases linearly with sentence length.
The Parallel Meaning Bank
The scoped meaning representations, integrating word senses, thematic roles, and the list of operators, form the final product of our semantically annotated corpus: the Parallel Meaning Bank. The PMB is a semantically annotated corpus of English texts aligned with translations in Dutch, German and Italian. It uses the same framework as the Groningen Meaning Bank (Bos et al., 2017), but aims to abstract away from language-specific annotation models. There are five annotation layers present in the PMB: segmentation of words, multi-word expressions and sentences (Evang et al., 2013), semantic tagging (Bjerva et al., 2016; Abzianidze and Bos, 2017), syntactic analysis based on CCG (Lewis and Steedman, 2014), word senses based on WordNet (Fellbaum, 1998), and thematic role labelling (Bos et al., 2012). The semantic analysis for English is projected on the other languages, to save manual annotation effort (Evang and Bos, 2016). All the information provided by these layers is combined into a single meaning representation using the semantic parser Boxer (Bos, 2015), in the form of Discourse Representation Structures. Note that the goal is to produce annotations that capture the most probable interpretation of a sentence; no ambiguities or under-specification techniques are employed. At each step in this pipeline, a single component produces the automatic annotation for all four languages, using language-specific models. Human annotators can correct machine output by adding 'Bits of Wisdom' (Basile et al., 2012). These corrections serve as data for training better models, and create a gold standard annotated subset of the data. Annotation quality is defined per layer and language, at three levels: bronze (fully automatic), silver (automatic with some manual corrections), and gold (fully manually checked and corrected). If all layers are marked as gold, it follows that the resulting DRS can be considered gold standard, too. The first public release 1 of the PMB contains gold standard scoped meaning representations for over 3,000 sentences in total (see Table 2). The release includes mainly relatively short sentences involving several semantic scope phenomena. A detailed distribution of clause types in the dataset is given in Table 1. A larger amount of texts and more complex linguistic phenomena will be included in future releases.
In addition to the released data, the PMB documents are publicly accessible through a web interface, called the PMB explorer. 2

Figure 3: The edit mode of the PMB explorer: semantic tag (sem) and symbol (sym) layers of the document are bronze and therefore editable, while the word sense (sns), semantic role (rol) and CCG category (cat) layers are gold and uneditable.
In the explorer, visitors can view natural language texts with several layers of annotations and compositionally derived meaning representations and, after registration, edit the annotations. It is also possible to use a word or phrase search to find certain words or constructions with their semantic analyses. Figure 3 shows the PMB explorer with the semantic analysis of a sentence in the edit mode.
Matching Scoped Representations
Evaluation by Matching
In the context of the Parallel Meaning Bank there are two main reasons to verify whether two scoped meaning representations capture the same meaning or not: (1) to be able to evaluate semantic parsers that produce scoped meaning representations by comparing gold-standard DRSs to system output; and (2) to check whether translations are meaning-preserving; a discrepancy in meaning between source and target could indicate a mistranslation. The ideal way to compare two meaning representations would be one based on inference. This can be implemented by translating DRSs to first-order formulas and using an off-the-shelf theorem prover to find out whether the two meanings are logically equivalent (Blackburn and Bos, 2005). This method can compare meaning representations that have different syntactic structures but still are equivalent in meaning. The disadvantage of this approach is that it yields just a binary answer: if a proof is found the meanings are the same, otherwise they are not. An alternative way of comparing meaning representations is comparing the corresponding clausal forms by computing precision and recall over matched clauses (Allen et al., 2008). The advantage of this approach is that it returns a score between 0 and 1, preferring meaning representations that better approximate the gold standard over those that are completely different. Since the variables of different clausal forms are independent from each other, the comparison of two clausal forms boils down to finding a (partial) one-to-one variable mapping that maximizes the intersection of the clausal forms. For example, the maximal matching for the clausal forms in Figure 4 is achieved by the following partial mapping from the variables of the left form into the variables of the right one: {k0 →b0, e1 →v1}. For AMRs, finding a maximal matching is done using a hill-climbing algorithm called SMATCH (Cai and Knight, 2013). This algorithm is based on a simple principle: it checks if a single change in the current mapping results in a better matching mapping. If this is the case, it continues with the new mapping. Otherwise, the algorithm stops and has arrived at the final mapping. This means that it can easily get stuck in local optima. To avoid this, SMATCH does a predefined number of restarts of this process, where each restart starts with a new and random initial mapping. The first restart always uses a 'smart' initial mapping, based on matching concepts. Our evaluation system, called COUNTER, 3 is a modified version of SMATCH. Even though clausal forms do not form a graph and clauses consist of either three or four components, the principle behind the variable matching is the same. The actual implementation differs, mainly because SMATCH was not designed to handle clauses with three variables, e.g. k0 Agent e1 x1. In contrast to SMATCH, COUNTER takes a set of clauses directly as input. COUNTER also uses two smart initial mappings, based on either role-clauses, like k0 Agent e1 x1, or concept-clauses, like k0 smile v.01 e1. Also specific to this method is the treatment of REF-clauses in the matching process. Before matching two DRSs, redundant REF-clauses are removed. A REF-clause b1 REF x1 is redundant if its discourse referent x1 occurs in some basic condition of the same DRS b1. Figure 4 shows some examples of redundant REF-clauses. Not removing these redundant clauses would lead to inflated matching scores, since for each matched variable the corresponding REF-clause would also match. Comparison of the clausal forms in Figure 4 demonstrates this fact. Note that not all REF-clauses are redundant: if a discourse referent is declared outside the scope of negation or another scope operator, the REF-clause is kept. This is very infrequent in our data: only a single REF-clause was preserved in 2,049 examples.
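A minimal sketch of the restart-based hill climbing described above is given below; it is illustrative only, since the real COUNTER additionally uses the smart role- and concept-based initial mappings and a more efficient search over mapping changes.

```python
import random

def f_score(gold, system, mapping):
    """F-score after renaming system variables according to `mapping`."""
    renamed = {tuple(mapping.get(t, t) for t in c) for c in system}
    m = len(renamed & set(gold))
    if m == 0:
        return 0.0
    p, r = m / len(system), m / len(gold)
    return 2 * p * r / (p + r)

def hill_climb(gold, system, gold_vars, sys_vars, restarts=20, seed=0):
    """Restart-based hill climbing over partial one-to-one variable mappings."""
    rng, best = random.Random(seed), 0.0
    for _ in range(restarts):
        images = rng.sample(gold_vars, k=min(len(sys_vars), len(gold_vars)))
        mapping = dict(zip(sys_vars, images))      # random initial mapping
        improved = True
        while improved:
            improved, score = False, f_score(gold, system, mapping)
            for v in sys_vars:                     # try remapping one variable
                for g in gold_vars:
                    if g in mapping.values():
                        continue                   # keep the mapping one-to-one
                    trial = dict(mapping)
                    trial[v] = g
                    if f_score(gold, system, trial) > score:
                        mapping, improved = trial, True
                        score = f_score(gold, system, mapping)
        best = max(best, score)
    return best
```

Here clauses are tuples of strings, so constants such as "now" or "australia" pass through the renaming untouched; only the variables listed in sys_vars are remapped.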
01/3445: He smiled. (the SPAR DRS) — box-notation DRS: male.n.02(x1), smile.v.01(e1), Time(e1, t1), Agent(e1, x1), time.n.08(t1), t1 ≺ now.
00/3514: She fled Australia. — box-notation DRS: female.n.02(x1), flee.v.01(v1), Time(v1, t1), Source(v1, x2), Theme(v1, x1), time.n.08(t1), t1 ≺ now, country.n.02(x2), Name(x2, australia).
Evaluating Matching
As we showed in Figure 2, DRSs are about twice as large as AMRs. This increase in size might be problematic, since it increases the average runtime for comparing DRSs. Moreover, if there are more variables, more restarts might be needed to ensure a reliable score, again increasing runtime. Therefore, our goal is that COUNTER gets close to optimal performance in reasonable time. Since we want to be sure that this also holds for longer sentences, we use a balanced data set. We take 1,000 DRSs produced by the semantic parser Boxer for each sentence length from 2 to 20 (punctuation excluded), resulting in a set of 19,000 DRSs. To test COUNTER in a realistic setting, we cannot compare the DRSs to themselves or to a DRS of the translation, since those are too similar. Therefore, the 19,000 English sentences are parsed by an existing AMR parser and subsequently converted into DRSs by a rule-based system, AMR2DRS, as motivated by Bos (2016). An example of translating an AMR to a clausal form of a DRS is shown in Figure 5. We convert AMR relations to DRS roles by employing a manually created translation dictionary, including rules for semantic roles (e.g. :ARG0 → Agent and :ARG1 → Patient) and pronouns (e.g. she → female.n.02). Since AMRs do not contain tense information, past tense clauses 4 are produced for the first verb in the AMR (see the four tense-related clauses in Figure 5). Also, since AMRs do not use WordNet synsets, all concepts get a default first sense, except for concepts that are added by concept-specific rules, such as female.n.02 and time.n.08. We compare the sets of DRSs using different numbers of restarts to find the best trade-off between speed and accuracy. The results are shown in Table 3. The optimal scores are obtained using a Prolog script that performs an exhaustive search for the optimal mapping. As expected, increasing the number of restarts benefits performance. Cai and Knight (2013) consider four restarts the optimal trade-off between accuracy and speed, showing no improvement in F-score when using more than ten restarts. 5 Contrary to SMATCH, performance for COUNTER still increases with more than 4 restarts. In our case, it is a bit harder to select an optimal number of restarts, since this number depends on the length of the sentence, as shown in Figure 6. We see that for long sentences, 5 and 10 restarts are not sufficient to get close to the optimal, while for short sentences 5 restarts might be considered enough. In general, the best trade-off between speed and accuracy is approximately 20 restarts.

Figure 6: Comparison of the differences to the optimal F-score per sentence length for different numbers of restarts.
COUNTER in Action
Semantic Parsing
The first purpose of COUNTER is to evaluate semantic parsers for DRSs. Since this is a new task, there are no existing systems that are able to do this. Therefore, we show the results of three baseline systems: PMB PIPELINE, SPAR, and AMR2DRS (Subsection 4.2). 6 The PMB PIPELINE produces a DRS via the pipeline of the tools used for automatic annotation of the PMB. 7 This means that it has no access to manual corrections, and hence it uses the most frequent word senses and default VerbNet roles. SPAR is a trivial semantic 'parser' which always outputs the DRS that is most similar to all other DRSs in the most recent PMB release (the left-hand DRS in Figure 4). The results of the three baseline parsers are shown in Table 4. The surprisingly high score of SPAR is explained by the fact that the first PMB release mainly contains relatively short sentences with little structural diversity. The average number of clauses per clausal form (excluding redundant REF-clauses) is 8.7, where a substantial share (approximately 3) comes from tense-related clauses. Due to this fact, guessing temporal clauses for short sentences has a big impact on F-score. This is illustrated by the comparison of the clausal forms in Figure 4, where matching only temporal clauses results in an F-score of 40%. AMR2DRS outperforms SPAR by a considerable margin, but is still far from optimal. This is also the case for PMB PIPELINE, which shows that, within the PMB, manual annotation is still required to obtain gold standard meaning representations.
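To make the arithmetic behind these percentages explicit: over clause sets, F = 2PR/(P + R) with P = m/|system| and R = m/|gold| simplifies to 2m/(|gold| + |system|) for m matched clauses. Plugging in the clause counts of Figure 4 reproduces both reported numbers (our own sanity check):

```python
def clause_f(matched, n_gold, n_system):
    # 2PR/(P+R) with P = matched/n_system, R = matched/n_gold simplifies to:
    return 2 * matched / (n_gold + n_system)

# With redundant REF-clauses: 9 SPAR clauses vs. 13 gold clauses, 6 matched.
print(round(clause_f(6, 13, 9), 3))   # 0.545 -> the reported 54.5%
# Without them: 6 vs. 9 clauses, and only the 3 temporal clauses match.
print(round(clause_f(3, 9, 6), 3))    # 0.4   -> the reported 40%
```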
Comparing Translations
The second purpose of COUNTER is checking whether translations are meaning-preserving. As a pilot study, we compare the gold standard meaning representations of German, Italian and Dutch translations in the release to their English counterparts. The results are shown in Table 5. Manual analysis of these discrepancies showed that there are several different causes for a discrepancy to arise. In most of the cases (38%), a human annotation error was made. In 34% of cases, a definite description was used in one language but not in the other. Examples are 'has long hair' with the Italian translation 'ha i capelli lunghi', and 'escape from prison' with the Dutch translation 'vluchtte uit de gevangenis'. In 15% of cases proper names were translated (e.g. 'United States' and 'Stati Uniti'). This is not accounted for, since we do not currently make use of grounding proper names to a unique identifier, for instance by wikification (Cucerzan, 2007), or by using a language-independent transliteration of names. In 13% of cases the translation was either non-literal or incorrect. Examples are 'Tom lacks experience' with the Dutch translation 'Tom heeft geen ervaring' (lit. 'Tom has no experience'), 'can't use chopsticks' with the German 'kann nicht mit Stäbchen essen' (lit. 'cannot eat with sticks'), and 'remove the dishes from the table' with the Dutch translation 'ruimde de tafel af' (lit. 'uncluttered the table'). The mapping of clausal forms involving non-literal translations is illustrated in Figure 7. This preliminary analysis shows that this comparison of meaning representations provides an additional method for detecting mistakes in annotation. It also showed that there are cases where our semantic analysis needs to be revised and improved.
She removed the dishes from the table. / Ze ruimde de tafel af.
English DRS: female.n.02(x1), remove.v.01(e1), Time(e1, t1), Source(e1, x3), Theme(e1, x2), Agent(e1, x1), time.n.08(t1), t1 ≺ now, dish.n.01(x2), table.n.03(x3).
Dutch DRS: female.n.02(x1), unclutter.v.01(e1), Time(e1, t1), Source(e1, x2), Agent(e1, x1), time.n.08(t1), t1 ≺ now, table.n.03(x2).
Conclusions and Future Work
Large semantically annotated corpora are rare. Within the Parallel Meaning Bank project, we are creating a large, open-domain corpus annotated with formal meaning representations. We take advantage of parallel corpora, enabling the production of meaning representations for several languages at the same time. Currently, these are languages similar to English: two Germanic languages (Dutch and German) and one Romance language (Italian). Ideally, future work would include more non-Germanic languages. The DRSs that we present are meaning representations with substantial expressive power. They deal with negation, universal quantification, modals, tense, and presupposition. As a consequence, semantic parsing for DRSs is a challenging task. Compared to Abstract Meaning Representations, the number of clauses and variables in a DRS is about two times larger on average. Moreover, compared to AMRs, DRSs rarely contain clauses with single variables. All nonlogical symbols used in DRSs are grounded in WordNet and VerbNet (with a few extensions). This makes evaluation using matching computationally challenging, in particular for long sentences, but our matching system COUNTER achieves a reasonable trade-off between speed and accuracy. Several extensions to the annotation scheme are possible. Currently, the DRSs for the non-English languages contain references to synsets of the English WordNet. Conceptually, there is nothing wrong with this (as synsets can be viewed as identifiers for concepts that are language-independent), but for practical reasons it makes more sense to provide links to synsets of the original language (Hamp and Feldweg, 1997; Postma et al., 2016; Roventini et al., 2000; Pianta et al., 2002). In addition, we consider implementing semantic grounding such as wikification in the Parallel Meaning Bank. As for other future work, we plan to include a more fine-grained matching regarding WordNet synsets, since the current evaluation of concepts is purely string-based, with only identical strings resulting in a matching clause. For many synsets, however, it is possible to refer to them with more than one word.POS.SenseNum triple, and this should be accounted for (e.g. fox.n.02 and dodger.n.01 both refer to the same synset). In a similar vein, we plan to experiment with including WordNet concept similarity techniques in COUNTER to compute semantic distances between synsets, in case they do not fully match. Finally, we would like to stimulate research on semantic parsing with scoped meaning representations. Not only are we planning to extend the coverage of phenomena and the number of texts with gold-standard meaning representations for the four languages, we also aim to organize a shared task on DRS parsing for English, German, Dutch and Italian in the near future.
Figure 2: Comparison of the number of triples/clauses and variables between AMRs and DRSs for sentences of different length.
Figure 5: A clausal form obtained from an automatically generated AMR of the document 14/0849.
Figure 7: English and Dutch non-literal translations of the document 14/0849. Their clausal forms match each other (excl. redundant REF-clauses) with an F-score of 77.8%. This matching is achieved by the mapping of variables {b5 →b4, b4 →b2}.
Table 2: Statistics of the first PMB release.
Figure 4: The SPAR DRS (Section 5.1) matches the DRS of the 00/3514 PMB document with an F-score of 54.5%. If redundant REF-clauses are ignored, the F-score drops to 40%. These results are achieved with the help of the mapping {k0 →b0, e1 →v1}.

Clausal form of the SPAR DRS (01/3445):
b1 REF x1
b1 male n.02 x1
b3 REF t1
b3 TPR t1 "now"
b3 time n.08 t1
k0 Agent e1 x1
k0 REF e1
k0 Time e1 t1
k0 smile v.01 e1

Clausal form of 00/3514:
b1 REF x1
b1 female n.02 x1
b3 REF t1
b3 TPR t1 "now"
b3 time n.08 t1
b0 Theme v1 x1
b0 Source v1 x2
b0 REF v1
b0 Time v1 t1
b0 flee v.01 v1
b2 REF x2
b2 Name x2 "australia"
b2 country n.02 x2
She removed the dishes from the table.

(r / remove-01
  :ARG0 (s / she)
  :ARG1 (d / dish)
  :ARG2 (t / table))
⇒
b0 REF x1
b0 remove v.01 x1
b4 REF x5
b4 TPR x5 "now"
b4 time n.08 x5
b0 Time x1 x5
b0 Agent x1 x2
b1 REF x2
b1 female n.02 x2
b0 Patient x1 x3
b2 REF x3
b2 dish n.01 x3
b0 Theme x1 x4
b3 REF x4
b3 table n.01 x4
Table 3: Results of comparing 19,000 Boxer-produced DRSs to DRSs produced by AMR2DRS, for different numbers of restarts. For three or more restarts, we always use the smart role and concept mapping.
Table 4: Comparison of three baseline DRS parsers to the gold-standard data set.
Table 5: Comparing meaning representations of English texts to those of German, Italian and Dutch translations.
Clausal forms for Figure 7 —
English (She removed the dishes from the table.):
b1 REF x1
b1 female n.02 x1
b5 REF t1
b5 TPR t1 "now"
b5 time n.08 t1
k0 Agent e1 x1
k0 REF e1
k0 Theme e1 x2
k0 Time e1 t1
k0 remove v.01 e1
b2 REF x2
b2 dish n.01 x2
k0 Source e1 x3
b4 REF x3
b4 table n.03 x3
Dutch (Ze ruimde de tafel af.):
b1 REF x1
b1 female n.02 x1
b4 REF t1
b4 TPR t1 "now"
b4 time n.08 t1
k0 Agent e1 x1
k0 REF e1
k0 Source e1 x2
k0 Time e1 t1
k0 unclutter v.01 e1
b2 REF x2
b2 table n.03 x2
1 http://pmb.let.rug.nl/data.php
2 http://pmb.let.rug.nl/explorer
3 http://github.com/RikVN/DRS_parsing/
4 Past tense was chosen because it is the most frequent tense in the data set.
5 However, we found that, in practice, SMATCH still improves when using more restarts. Parsing the development set of the AMR dataset LDC2016E25 with the baseline parser of van Noord and Bos (2017) yields an F-score of 55.0 for 10 restarts, but 55.4 for 100 restarts.
6 SPAR and AMR2DRS are available at: https://github.
Abzianidze, L. and Bos, J. (2017). Towards universal semantic tagging. In Proceedings of the 12th International Conference on Computational Semantics (IWCS 2017) - Short Papers, Montpellier, France, September. Association for Computational Linguistics.
Abzianidze, L., Bjerva, J., Evang, K., Haagsma, H., van Noord, R., Ludmann, P., Nguyen, D.-D., and Bos, J. (2017). The Parallel Meaning Bank: Towards a multilingual corpus of translations annotated with compositional meaning representations. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 242-247, Valencia, Spain, April. Association for Computational Linguistics.
Allen, J. F., Swift, M., and de Beaumont, W. (2008). Deep Semantic Analysis of Text. In Johan Bos et al., editors, Semantics in Text Processing. STEP 2008 Conference Proceedings, volume 1 of Research in Computational Semantics, pages 343-354. College Publications.
Asher, N. (1993). Reference to Abstract Objects in Discourse. Kluwer Academic Publishers.
Asher, N. and Lascarides, A. (2003). Logics of Conversation. Studies in Natural Language Processing. Cambridge University Press.
Banarescu, L., Bonial, C., Cai, S., Georgescu, M., Griffitt, K., Hermjakob, U., Knight, K., Koehn, P., Palmer, M., and Schneider, N. (2013). Abstract Meaning Representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 178-186, Sofia, Bulgaria.
Basile, V., Bos, J., Evang, K., and Venhuizen, N. (2012). A platform for collaborative semantic annotation. In Proceedings of the Demonstrations at the 13th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2012), pages 92-96, Avignon, France.
Bjerva, J., Plank, B., and Bos, J. (2016). Semantic tagging with deep residual networks. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 3531-3541, Osaka, Japan.
Blackburn, P. and Bos, J. (2005). Representation and Inference for Natural Language. A First Course in Computational Semantics. CSLI.
Bonial, C., Corvey, W. J., Palmer, M., Petukhova, V., and Bunt, H. (2011). A hierarchical unification of LIRICS and VerbNet semantic roles. In Proceedings of the 5th IEEE International Conference on Semantic Computing (ICSC 2011), pages 483-489.
Bos, J., Evang, K., and Nissim, M. (2012). Annotating semantic roles in a lexicalised grammar environment. In Proceedings of the Eighth Joint ACL-ISO Workshop on Interoperable Semantic Annotation (ISA-8), pages 9-12, Pisa, Italy.
Bos, J., Basile, V., Evang, K., Venhuizen, N., and Bjerva, J. (2017). The Groningen Meaning Bank. In Nancy Ide et al., editors, Handbook of Linguistic Annotation, volume 2, pages 463-496. Springer.
Bos, J. (2015). Open-domain semantic parsing with Boxer. In Beáta Megyesi, editor, Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 301-304.
Bos, J. (2016). Expressive power of Abstract Meaning Representations. Computational Linguistics, 42(3):527-535.
Bos, J. (2017). Indexicals and compositionality: Inside-out or outside-in? In Proceedings of the 12th International Conference on Computational Semantics (IWCS 2017) - Short Papers, Montpellier, France, September. Association for Computational Linguistics.
Cai, S. and Knight, K. (2013). Smatch: an evaluation metric for semantic feature structures. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 748-752, Sofia, Bulgaria, August. Association for Computational Linguistics.
Cucerzan, S. (2007). Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 708-716, Prague, Czech Republic. Association for Computational Linguistics.
Evang, K. and Bos, J. (2016). Cross-lingual learning of an open-domain semantic parser. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 579-588, Osaka, Japan.
Evang, K., Basile, V., Chrupała, G., and Bos, J. (2013). Elephant: Sequence labeling for word and sentence segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1422-1426, Seattle, Washington, USA.
Evang, K. (2016). Cross-lingual Semantic Parsing with Categorial Grammars. Ph.D. thesis, University of Groningen.
Fellbaum, C., editor (1998). WordNet. An Electronic Lexical Database. The MIT Press, Cambridge, MA, USA.
Hamp, B. and Feldweg, H. (1997). GermaNet - a lexical-semantic net for German. In Proceedings of the ACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 9-15.
Kamp, H. and Reyle, U. (1993). From Discourse to Logic; An Introduction to Modeltheoretic Semantics of Natural Language, Formal Logic and DRT. Kluwer, Dordrecht.
Lewis, M. and Steedman, M. (2014). A* CCG parsing with a supertag-factored model. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 990-1000, Doha, Qatar.
Palmer, M., Gildea, D., and Kingsbury, P. (2005). The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1).
Pianta, E., Bentivogli, L., and Girardi, C. (2002). MultiWordNet: developing an aligned multilingual database. In Proceedings of the First International Conference on Global WordNet, pages 293-302.
Postma, M., van Miltenburg, E., Segers, R., Schoen, A., and Vossen, P. (2016). Open Dutch WordNet. In Proceedings of the Eighth Global Wordnet Conference, Bucharest, Romania.
Roventini, A., Alonge, A., Calzolari, N., Magnini, B., and Bertagna, F. (2000). ItalWordNet: a large semantic database for Italian. In Proceedings of the International Conference on Language Resources and Evaluation (LREC 2000), pages 783-790.
van Noord, R. and Bos, J. (2017). Neural semantic parsing by character-based translation: Experiments with Abstract Meaning Representations. Computational Linguistics in the Netherlands Journal, 7:93-108.
Venhuizen, N. J., Bos, J., Hendriks, P., and Brouwer, H. (2018). Discourse semantics with information structure. Journal of Semantics.
Venhuizen, N. J. (2015). Projection in Discourse: A data-driven formal semantic analysis. Ph.D. thesis, University of Groningen.
| [
"http://github.com/RikVN/DRS_parsing/"
] |
[
"STREAMING SMALL-FOOTPRINT KEYWORD SPOTTING USING SEQUENCE-TO-SEQUENCE MODELS",
"STREAMING SMALL-FOOTPRINT KEYWORD SPOTTING USING SEQUENCE-TO-SEQUENCE MODELS"
] | [
"Yanzhang He yanzhanghe@google.com \nGoogle Inc\nMountain ViewCAU.S.A\n",
"Rohit Prabhavalkar prabhavalkar@google.com \nGoogle Inc\nMountain ViewCAU.S.A\n",
"Kanishka Rao kanishkarao@google.com \nGoogle Inc\nMountain ViewCAU.S.A\n",
"Wei Li mweili@google.com \nGoogle Inc\nMountain ViewCAU.S.A\n",
"Anton Bakhtin bakhtin@google.com \nGoogle Inc\nMountain ViewCAU.S.A\n",
"Ian Mcgraw imcgraw@google.com \nGoogle Inc\nMountain ViewCAU.S.A\n"
] | [
"Google Inc\nMountain ViewCAU.S.A",
"Google Inc\nMountain ViewCAU.S.A",
"Google Inc\nMountain ViewCAU.S.A",
"Google Inc\nMountain ViewCAU.S.A",
"Google Inc\nMountain ViewCAU.S.A",
"Google Inc\nMountain ViewCAU.S.A"
] | [] | We develop streaming keyword spotting systems using a recurrent neural network transducer (RNN-T) model: an all-neural, end-toend trained, sequence-to-sequence model which jointly learns acoustic and language model components. Our models are trained to predict either phonemes or graphemes as subword units, thus allowing us to detect arbitrary keyword phrases, without any out-ofvocabulary words. In order to adapt the models to the requirements of keyword spotting, we propose a novel technique which biases the RNN-T system towards a specific keyword of interest.Our systems are compared against a strong sequence-trained, connectionist temporal classification (CTC) based "keyword-filler" baseline, which is augmented with a separate phoneme language model. Overall, our RNN-T system with the proposed biasing technique significantly improves performance over the baseline system. | 10.1109/asru.2017.8268974 | [
"https://arxiv.org/pdf/1710.09617v1.pdf"
] | 2,160,694 | 1710.09617 | 181d9d1bc48628496f28c56a2dd41cd6748a04bd |
STREAMING SMALL-FOOTPRINT KEYWORD SPOTTING USING SEQUENCE-TO-SEQUENCE MODELS
Yanzhang He yanzhanghe@google.com
Google Inc
Mountain ViewCAU.S.A
Rohit Prabhavalkar prabhavalkar@google.com
Google Inc
Mountain ViewCAU.S.A
Kanishka Rao kanishkarao@google.com
Google Inc
Mountain ViewCAU.S.A
Wei Li mweili@google.com
Google Inc
Mountain ViewCAU.S.A
Anton Bakhtin bakhtin@google.com
Google Inc
Mountain ViewCAU.S.A
Ian Mcgraw imcgraw@google.com
Google Inc
Mountain ViewCAU.S.A
STREAMING SMALL-FOOTPRINT KEYWORD SPOTTING USING SEQUENCE-TO-SEQUENCE MODELS
Index Terms— Keyword spotting, sequence-to-sequence models, recurrent neural network transducer, attention, embedded speech recognition
We develop streaming keyword spotting systems using a recurrent neural network transducer (RNN-T) model: an all-neural, end-to-end trained, sequence-to-sequence model which jointly learns acoustic and language model components. Our models are trained to predict either phonemes or graphemes as subword units, thus allowing us to detect arbitrary keyword phrases, without any out-of-vocabulary words. In order to adapt the models to the requirements of keyword spotting, we propose a novel technique which biases the RNN-T system towards a specific keyword of interest. Our systems are compared against a strong sequence-trained, connectionist temporal classification (CTC) based "keyword-filler" baseline, which is augmented with a separate phoneme language model. Overall, our RNN-T system with the proposed biasing technique significantly improves performance over the baseline system.
INTRODUCTION
Keyword spotting (KWS), sometimes also referred to as spoken term detection, is the task of detecting specific words, or multi-word phrases in speech utterances. Many previous works consider the problem of developing "offline" (i.e., non-streaming) KWS technologies. In this setting, the dominant paradigm consists of recognizing the entire speech corpus using a large vocabulary continuous speech recognizer (LVCSR) to build word or sub-word lattices, which can then be indexed to perform efficient search, e.g., [1,2,3].
In contrast to the methods described above, there is growing interest in building "online" (i.e., streaming) KWS systems which can be deployed on mobile devices which are significantly limited in terms of memory and computational capabilities. In such applications, when deployed for inference, the KWS system must continuously process incoming audio, and only trigger when a specific keyword is uttered. In order to simplify the problem further, most previous works assume that the model will only be required to detect a small number of possible keywords, thus allowing the development of keyword-specific models. Many previous works propose to train neural networks to identify word targets in individual keywords: for example, using feed-forward deep neural networks [4,5,6], convolutional networks [7] or recurrent neural networks [8,9,10]. Such systems assume the availability of a large number of examples of the keywords of interest in order to train models robustly. Prominent examples of such technologies include speech-enabled assistants such as "Okay/Hey Google" on Google Home [11], "Alexa" on the Amazon Echo, and "Hey Siri" on Apple devices. There has also been some prior work which has explored building low-footprint KWS systems which can detect arbitrary keywords in the incoming speech: for example, using structured support vector machines [12,13], and techniques based on matching incoming audio to example templates of the keyword (Query-by-Example) [14,15].
Recently, end-to-end trained, sequence-to-sequence models have become popular for speech recognition. Examples of such models include the recurrent neural network transducer (RNN-T) [16,17], the recurrent neural aligner [18], connectionist temporal classification (CTC) [19] with grapheme [20,21], syllable [22] or word targets [23], and attention-based models [24,25,26]. Such models combine the acoustic and language model components of a traditional speech recognition system into a single, jointly trained model. In recent work, we have shown that RNN-T and attention-based models, trained on ∼12,500 hours of transcribed speech data to directly predict grapheme sequences without a separate language model, perform competitively on dictation test sets when compared against a state-of-the-art, discriminatively sequence-trained, context-dependent phone-based recognizer, augmented with a large language model [27]. We have also shown that sequence-to-sequence models trained to predict phoneme-based targets can be effective when used in a second pass rescoring framework [28].
There has been some recent work which has explored sequence-to-sequence models in the context of KWS. Zhuang et al. [29] use a long short-term memory (LSTM) [30] network with CTC to train a KWS system that generates phoneme lattices for efficient search. Rosenberg et al. [31] apply attention-based models to compute n-best lists of recognition results which are then indexed for efficient search; performance, however, was found to be worse than a traditional lattice-based KWS approach. Audhkhasi et al. [32] train an end-to-end system to predict whether a given keyword (represented as a grapheme string) is present in the speech utterance, without explicitly decoding utterances into output phoneme or word strings.
In the present work, we explore the use of sequence-to-sequence models, specifically RNN-T, to build a streaming KWS system which can be used to detect arbitrary keywords. Unlike a number of previous works which have only examined sequence-to-sequence models in the context of graphemes, we train RNN-T systems to predict graphemes as well as phonemes as sub-word units. Additionally, we propose a novel technique to bias the search towards a specific keyword of interest using an attention mechanism (described in more detail in Section 2.3). We find that an RNN-T system trained to predict phonemes, when augmented with an additional "end-of-word" symbol (see Section 3.2), strongly outperforms a strong keyword-filler baseline derived from a sequence-trained CTC-based recognizer [33]. Overall, our best performing system achieves a false reject (FR) rate of 8.9% at 0.05 false alarms (FA) per hour, compared to the baseline which achieves 14.5% at the same FA threshold, which corresponds to a 39% reduction in the FR rate.
The organization of the rest of the paper is as follows. In Section 2 we describe various modeling strategies used in this paper. Section 3 describes our baseline approaches for keyword spotting. We present our experimental setup in Section 4, and discuss our results in Section 5, before concluding in Section 6.
MODELING STRATEGIES
In subsequent sections, we denote a sequence of parameterized acoustic features as x = [x_1, · · · , x_T], where x_t ∈ R^d and T denotes the number of acoustic frames in the utterance. We denote the corresponding sequence of output targets (e.g., graphemes or phonemes) for the utterance as y = [y_1, · · · , y_L], where y_i ∈ Y. In the context of ASR, the input sequence is typically much longer than the target label sequence, i.e., T > L.
Connectionist Temporal Classification
CTC [19] is a technique for modeling a conditional probability distribution over sequence data, P(y|x), when frame-level alignments of the target label sequence are unknown. CTC augments the set of output targets with an additional symbol, referred to as the blank symbol, denoted <b>. We denote by B(x, y) the set of all label sequences ŷ = [ŷ_1, · · · , ŷ_T] of length |x| = T, such that ŷ_t ∈ {Y ∪ <b>} for 1 ≤ t ≤ T, which are equivalent to y after first removing consecutive identical symbols, and then removing any blank symbols: e.g., x x <b> <b> y <b> → x y.
CTC models the output probability of the target sequence, y, conditioned on the input, x, by marginalizing over all possible frame-level alignments, where each output label is assumed to be independent of the other labels, conditioned on x:
P (y|x) = ŷ∈B(x,y) P (ŷ|x) = ŷ∈B(x,y) T t=1 P (ŷt|x1, · · · , xt) (1)
The conditional probability, P(ŷ_t|x_1, · · · , x_t), can be computed using a recurrent neural network (which we refer to as the encoder network), as illustrated in Figure 1(a.). As shown in the figure, the encoder maps each input frame, x_t, into a higher-level representation, h^{enc}_t, followed by a softmax layer which converts h^{enc}_t into a probability distribution P(ŷ_t|x_1, · · · , x_t) over the output labels in {Y ∪ <b>}. The model can be trained using stochastic gradient descent to optimize likelihood over the training set, given paired input and target sequences (x, y). The gradients required for this process can be computed using the forward-backward algorithm [19].
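To make the role of B(x, y) concrete, the following minimal Python sketch (an illustration we add here, not code from the paper) implements the collapse that maps a frame-level CTC alignment back to its output label sequence:

    def collapse_alignment(frame_labels, blank="<b>"):
        # Merge consecutive identical symbols, then drop blanks,
        # e.g. ["x", "x", "<b>", "<b>", "y", "<b>"] -> ["x", "y"].
        merged = [lab for i, lab in enumerate(frame_labels)
                  if i == 0 or lab != frame_labels[i - 1]]
        return [lab for lab in merged if lab != blank]

B(x, y) is then exactly the set of length-T alignments that collapse to y under this map.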
RNN Transducer
Although CTC has been used successfully in many previous works in the context of ASR (e.g., [34,35,23]), it makes a strong conditional independence assumption since it assumes that outputs at each step are independent of the history of previous predictions. The RNN-T model improves the CTC approach by augmenting it with an additional prediction network [16,17], which is explicitly conditioned on the history of previous outputs, as illustrated in Figure 1(b.). The RNN-T model may be viewed as a type of sequence-to-sequence model architecture [24,25], where the encoder (referred to as a transcription network in [16]) corresponds to the RNN acoustic model in a traditional recognizer, and the prediction network (together with the joint network) corresponds to the decoder. The decoder network may be viewed as an RNN language model which attempts to predict the current label given the history of labels. We note that unlike most attention-based models that have been explored in the past (e.g., [24,25]), output targets can be extracted from the RNN-T in a streaming fashion, since the model does not have to examine the entire encoded utterance in order to compute an output target label.
The prediction network is provided with the previous non-blank label, y_u ∈ Y, as input, and produces a single output vector, denoted p_u. The prediction network is fed a special symbol at the start of decoding, y_0 = <sos>, which denotes the start of the sentence.
The joint network consists of a set of feed-forward layers which compute logits z_{t,u} for every input frame t and label u, using additional parameters A, B, b, D, d, as follows:

h^{joint}_{t,u} = \tanh(A h^{enc}_t + B p_u + b)    (2)
z_{t,u} = D h^{joint}_{t,u} + d    (3)
These logits are passed to a final softmax layer which computes probabilities over targets in {Y ∪ <b>} (these equations correspond to Eqs. 15-18 in [17]). The model can be trained to optimize likelihood over the training set, by marginalizing over all possible alignments (i.e., B(x, y)) similar to CTC, using stochastic gradient descent, where the required gradients are computed using the dynamic programming algorithm described in [16,17].
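As a sketch of Eqs. (2)-(3), the joint network can be written in a few lines of NumPy; the shapes and the softmax helper below are illustrative assumptions on our part, not an implementation from the paper:

    import numpy as np

    def joint_network(h_enc_t, p_u, A, B, b, D, d):
        # Eq. (2): combine the encoder output at frame t with the
        # prediction network output after u labels.
        h_joint = np.tanh(A @ h_enc_t + B @ p_u + b)
        # Eq. (3): project to logits over the targets in {Y ∪ <b>}.
        return D @ h_joint + d

    def softmax(logits):
        e = np.exp(logits - logits.max())
        return e / e.sum()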
Biasing the RNN-Transducer with the keyword of interest using the attention mechanism
Previous works that have examined the use of sequence-to-sequence models for KWS (e.g., [31]) have typically only done so indirectly; the model is trained for ASR, and used to generate n-best lists which can be indexed for efficient search. A notable exception is work by Audhkhasi et al. [32], where the model is trained directly for the KWS task, which is similar to the query-by-example approach that has been investigated previously [14].
With the goal of improving KWS performance, we extend the RNN-T system described in Section 2.2 with an attention-based keyword biasing mechanism in the prediction network to make the model aware of the keyword of interest during the search process. This model can be thought of as a variant of the RNN-T model augmented with attention, proposed in our previous work [27], wherein we replace the prediction network with an attention-based decoder that computes attention over the targets in the keyword phrase. The intuition is that during inference, when the suffix of the current predicted label sequence is close to the prefix of the keyword, the attention vector is activated in the corresponding position within the keyword. This, in turn, generates a context vector to bias the network prediction towards the remaining part of the keyword. Critically, since the keyword phrase only consists of a small number of targets, the use of attention over the keyword does not introduce any latency or significant computational overhead during inference. This model is depicted in Figure 1(c.).
Specifically, at each step, the prediction network receives, in addition to the previous non-blank label y_{u−1}, a context vector, c_u, which is computed using dot-product attention [24] over the keyword targets (phoneme targets, in our experiments). We denote the sequence of phoneme targets in the keyword phrase to be detected as k = [k_1, · · · , k_M, k_{M+1}], where M is the number of targets in the keyword phrase, and k_{M+1} is a special target that corresponds to "not applicable", denoted <n/a>. (We also experimented with excluding this symbol, using only the targets in the keyword, and found that the overall performance was similar; in this work, we only present results with the <n/a> keyword target.) The keyword encoder takes as input the phoneme sequence, and outputs a matrix
k^{enc} = [k^{enc}_1, · · · , k^{enc}_M, k^{enc}_{M+1}], where k^{enc}_i is a one-hot embedding vector of k_i, and k^{enc}_{M+1} is a zero vector. If we denote the state of the prediction network after predicting u − 1 labels as h^{att}_{u−1}, the context vector c_u is computed as follows:
\beta_{j,u} = \langle \phi(k^{enc}_j), \psi(h^{att}_{u-1}) \rangle, \quad \text{for each } 1 \le j \le M+1    (4)
\alpha_{j,u} = \frac{e^{\beta_{j,u}}}{\sum_{j'=1}^{M+1} e^{\beta_{j',u}}}    (5)
c_u = \sum_{j=1}^{M+1} \alpha_{j,u} \, k^{enc}_j    (6)

where φ(·) and ψ(·) represent linear embeddings, and ⟨·, ·⟩ represents the dot product between two vectors. Thus, the prediction network produces an output p_u conditioned on both the previously predicted labels and the keyword of interest.
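A vectorized sketch of Eqs. (4)-(6) is given below; the matrix shapes and the names phi and psi are our illustrative assumptions (linear embedding matrices), not code from the paper:

    import numpy as np

    def keyword_attention(k_enc, h_att, phi, psi):
        # k_enc: (M+1, E) rows k_enc_1..k_enc_{M+1}; the last row is the
        # zero vector for the <n/a> target. h_att: decoder state after
        # u-1 predictions. phi: (P, E) and psi: (P, H) linear embeddings.
        beta = (k_enc @ phi.T) @ (psi @ h_att)        # Eq. (4), shape (M+1,)
        alpha = np.exp(beta - beta.max())
        alpha = alpha / alpha.sum()                   # Eq. (5)
        return alpha @ k_enc                          # Eq. (6): context c_u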
Unlike the RNN-T model, which can be trained given pairs of input and output sequences (x, y), in order to train the RNN-T model with keyword biasing we also need to associate a keyword phrase, k, with each training instance. We create examples where the keyword k is present in x, as well as examples where the keyword is absent from x, as follows: with probability p_kw we uniformly sample one of the words in x as the keyword k, and with probability 1 − p_kw we uniformly sample a word which is not in x as the keyword k. If we select one of the words in x as the target, we modify the target labels y by inserting a special symbol <eokw> after the occurrence of the keyword. For example, when training with phoneme targets, for the utterance the cat sat (which corresponds to the phoneme sequence [D V <eow> k { t <eow> s { t <eow>]; we use X-SAMPA to denote phonemes throughout the paper), if we sampled k = cat as the keyword, then we would modify the target labels as y = [D V <eow> k { t <eow> <eokw> s { t <eow>]. Note that the <eow> token marks the end of each word token (see Section 3.2). The intuition behind adding the <eokw> at the end of the keyword phrase in the transcript is that it might serve as a marker that the model should attend to the targets in the keyword phrase. As a final note, the training and inference algorithms for this model are similar to those of the standard RNN-T model.
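The construction of such biasing training pairs can be sketched as follows; lexicon (word → phoneme list) and vocab are hypothetical helpers we introduce for illustration, and we insert <eokw> only after the sampled word's first occurrence:

    import random

    def make_biasing_example(words, lexicon, vocab, p_kw=0.5):
        # With probability p_kw, sample the keyword from the utterance
        # (positive example); otherwise sample a word not in it (negative).
        if random.random() < p_kw:
            keyword = random.choice(words)
        else:
            keyword = random.choice([w for w in vocab if w not in words])
        targets, marked = [], False
        for w in words:
            targets += lexicon[w] + ["<eow>"]
            if w == keyword and not marked:
                targets.append("<eokw>")   # mark the end of the keyword
                marked = True
        k = lexicon[keyword] + ["<n/a>"]   # keyword targets k_1..k_{M+1}
        return k, targets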
BASELINE SYSTEMS
We present two baseline approaches for the task of streaming KWS. First, we adapt an embedded LVCSR system, designed for efficient real-time recognition on a wide variety of smartphones, developed in our previous work [33]. Second, we explore "keyword-filler" models [36] using the acoustic model component of the LVCSR system. These approaches are described in the following sections.
LVCSR with CTC
Our first approach directly uses an embedded LVCSR system developed in our previous work [33] to recognize input utterances; this is followed by a simple confidence estimation scheme in order to detect a particular keyword of interest. In particular, we recognize the input utterance, x, and create an n-best list of hypotheses, denoted as W. Note that the output vocabulary of the system is limited to 64K words, which results in a significant number of out-of-vocabulary words during the search process. In previous works, e.g., [37], the KWS confidence metric is defined as a likelihood ratio of the keyword model to a background model. Similar to these approaches, we define a simple confidence metric based on the n-best list, as follows.
Given an utterance x, we identify the highest probability hypothesis in W containing k, P(w^+|x), and the highest probability hypothesis in W which does not contain k, P(w^−|x), setting these to 0 if no such hypothesis exists in the n-best list. We can then compute a confidence metric C(x) ∈ [0, 1] as:

C(x) = \frac{P(w^+|x)}{P(w^+|x) + P(w^-|x)}    (7)
Thus, in the case where all n-best entries contain the keyword, the confidence score is set to one; when none of the entries contain the keyword, the score is set to zero. This same confidence metric is used for all systems, including the RNN-T systems presented in this paper.
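A direct transcription of Equation 7, assuming an n-best list represented as (word sequence, probability) pairs, could read as follows; the empty-list convention is an assumption we make for the sketch:

    def confidence(nbest, keyword):
        # Highest-probability hypotheses containing / not containing the
        # keyword; each is 0 if no such hypothesis is in the n-best list.
        p_pos = max((p for words, p in nbest if keyword in words), default=0.0)
        p_neg = max((p for words, p in nbest if keyword not in words), default=0.0)
        if p_pos + p_neg == 0.0:
            return 0.0   # empty n-best list; an assumed convention
        return p_pos / (p_pos + p_neg)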
Keyword-Filler Models with CTC
An alternative approach to KWS is through the use of "keyword-filler" models [36], which corresponds to constructing a decoder graph with two basic paths: the first is a path through the keyword(s), and the second is a path through a filler (background) model that models all non-keyword speech. We use this approach to create our next set of keyword spotters. Instead of defining a single decoder graph with keyword and filler paths, we find it advantageous to use two decoders on separate graphs, as depicted in Figure 2. This effectively corresponds to using two beams during decoding: one for the filler model (Figure 2 (a)), and one for the keyword paths (Figure 2 (b)). The scores of the most likely paths from each of these graphs can be used to estimate P(w^−|x) and P(w^+|x), respectively, which can then be used to generate a confidence score using Equation 7.
The simplest example of a filler model is a phone loop. However, we remove all paths from the filler model which contain the keyword's phones, so that any path containing the keyword must pass through the keyword model.
In previous work it has been shown that constraining filler models yields accuracy improvements [38,39,40]. We therefore explore two variants along these lines. In the first, we replace the simple phone loops with unweighted word loops (using the 64k word vocabulary from [33]), thus adding in word-level constraints. In the second, we apply an n-gram phone LM, trained on automatically generated phonetic transcriptions of the same utterances that are used to train the word-level LM in [33]; the number of parameters in the phone LM is trained to match the number of parameters of the word LM in [33]. In this case, we compose the LM with both the filler and keyword graphs.
In preliminary experiments, we found that a source of false positives during KWS with phoneme based models was when a part of a word's phonetic transcription matched that of the keyword. For example, the keyword Erica (E r\ @ k @) is incorrectly detected in utterances containing the word America (@ m E r\ @ k @); Marilyn (m E r\ @ l @ n) is incorrectly detected in utterances containing the word Maryland (m E r\ @ l @ n d). We therefore expanded the phoneme LM by inserting a special symbol <eow> at the end of each word's pronunciation when creating training data, e.g., the cat sat → D V <eow> k { t <eow> s { t <eow>. The <eow> token is the analog of the space symbol which delimits words in their graphemic representation; from the long context along with the <eow> symbol, the phone LM is expected to implicitly model word-level dependencies and learn the correct segmentation of a phone sequence into words. During search, we only consider keywords in between two end-of-word markers, or between a start-of-sentence marker and an end-of-word marker, in the hypotheses. For instance, Erica would not be falsely triggered in the phrase In America (I n <eow> @ m E r\ @ k @ <eow>), but will correctly trigger when the utterance contains Call Erica (k O l <eow> E r\ @ k @ <eow>).
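The word-boundary constraint on keyword matches can be sketched as a scan over a phoneme hypothesis; this is our illustrative rendering of the rule described above, not the decoder's actual search code:

    def contains_keyword(hyp, kw_phones):
        # Accept a match only if it is flanked by <eow> markers (or the
        # start of the hypothesis), so Erica is not found inside America.
        n = len(kw_phones)
        for i in range(len(hyp) - n + 1):
            at_start = (i == 0) or (hyp[i - 1] == "<eow>")
            at_end = (i + n < len(hyp)) and (hyp[i + n] == "<eow>")
            if at_start and at_end and hyp[i:i + n] == kw_phones:
                return True
        return False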
The idea of using an end-of-word symbol has also been explored in [29]; however, the authors added it to the transcript for training the CTC acoustic model instead. We believe it is more explicit and effective to use the symbol for LM training, in which the label dependencies are modeled directly, whereas in CTC the output targets are conditionally independent of each other. As is shown in the results below, we also use the end-of-word symbol for training RNN-T models, where the AM and LM are jointly trained, and find it useful.
EXPERIMENTAL DETAILS
Data and Evaluation Metric
Our models are trained on a set of ∼22M hand-transcribed anonymized utterances extracted from Google voice-search traffic, which corresponds to ∼18,000 hours of training data. In order to improve system robustness to noise and reverberation, multi-condition training (MTR) data are generated: training utterances are artificially distorted using a room simulator, by adding in noise samples extracted from YouTube videos and environmental recordings of daily events. To further improve robustness to variation in signal loudness, we perform multi-loudness training by scaling the loudness of each training utterance to a randomly selected level.
We construct separate development and test sets to measure KWS performance. As keyword phrases we consider personal names which contain three or more syllables (e.g., Olivia or Erica). The development set consists of 328 keywords, each of which is contained in ∼75 positive utterances, collected from multiple speakers, of the form "keyword, query" (e.g., Olivia, how tall is the Eiffel tower?). A set of ∼37K negative utterances (∼50 hours in total), collected as queries without a keyword, is shared across keywords to form the full development set. Each keyword is evaluated separately on a set consisting of its own positive utterances and the shared negative utterances. A test set is created similarly, with 228 keywords each contained in ∼500 positive utterances, and a set of ∼20K negative utterances (∼60 hours in total) shared across keywords, which consist of hand-transcribed anonymized utterances extracted from Google traffic from the domains of open-ended dictation and voice-search queries.

We evaluate performance in terms of the receiver operating characteristic (ROC) curve [41], which is constructed by sweeping a threshold over all possible confidence values and plotting false reject (FR) rates against false alarm (FA) rates. Our goal is to achieve low FR rates while maintaining extremely low FA rates (e.g., no more than 0.1 false alarms per hour of audio).
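For one keyword, the ROC curve described above can be computed by sweeping a threshold over the scored positive and negative utterances; a minimal sketch of this bookkeeping, with names of our own choosing:

    import numpy as np

    def roc_points(pos_scores, neg_scores, neg_hours):
        # Returns (false alarms per hour, false reject rate) pairs,
        # one per candidate threshold.
        pos, neg = np.asarray(pos_scores), np.asarray(neg_scores)
        points = []
        for thr in np.unique(np.concatenate([pos, neg])):
            fr = float(np.mean(pos < thr))            # rejected positives
            fa = float(np.sum(neg >= thr)) / neg_hours
            points.append((fa, fr))
        return points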
Following [42], we employ a score normalization approach to map system confidence score at the utterance level for a keyword to the probability of false alarm (pFA) for that keyword, which allows us to use a single consistent score for all keywords and set the decision threshold reliably. A confidence-score-to-pFA mapping is estimated from the development set, and applied to both the development and the test sets. All ROC curve results in this work are plotted after the score normalization.
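One simple way to realize such a mapping, given per-keyword confidence scores on the development set's negative utterances, is an empirical estimate of the false-alarm probability; this sketch is our assumption of how the mapping of [42] could be implemented, not the paper's own code:

    import numpy as np

    def score_to_pfa(neg_dev_scores):
        # Estimate pFA(c) = fraction of negative dev utterances whose
        # confidence is at least c, from held-out negatives.
        neg = np.sort(np.asarray(neg_dev_scores))
        def pfa(score):
            return 1.0 - np.searchsorted(neg, score, side="left") / len(neg)
        return pfa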
Model Details
The input acoustic signal is represented with 80-dimensional log-mel filterbank energies, computed with a 25ms window and a 10ms frame-shift. Following previous work [34], we stack three consecutive frames and present only every third stacked frame as input to the encoder. The same acoustic frontend is used for all experiments described in this work. The CTC acoustic model (AM) consists of 5 layers of 500 LSTM cells that predict context-independent phonemes as output targets. The system is heavily compressed, both by quantization [43] and by the application of low-rank projection layers with 200 units between consecutive LSTM layers [44]. The AM consists of 4.6 million parameters in total. The model is first trained to optimize the CTC objective function [19] until convergence. Once CTC training is complete, the model is discriminatively sequence-trained to optimize expected word errors by minimizing the word-level, edit-based, minimum Bayes risk (EMBR) criterion proposed recently by Shannon [45].
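The frame stacking and subsampling used by this frontend can be written as follows; a sketch, with the 3x factors matching the description above:

    import numpy as np

    def stack_and_subsample(feats, stack=3, stride=3):
        # feats: (T, 80) log-mel features. Stack 3 consecutive frames
        # into (T-2, 240), then keep every third stacked frame.
        feats = np.asarray(feats)
        T = len(feats)
        stacked = np.concatenate(
            [feats[i:T - stack + 1 + i] for i in range(stack)], axis=1)
        return stacked[::stride]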
The encoder networks used in all RNN transducer models are identical in size and configuration to the encoder used in the CTC model (without the softmax output layer). During the training of an RNN transducer, the weights of the encoder are initialized from a pre-trained CTC model, since this was found to significantly speed up convergence, following which the weights are trained jointly with the rest of the network. For the RNN-T model that is trained to directly output grapheme targets, the CTC model used for initialization is also trained to predict graphemes. The grapheme inventory includes the 26 lower-case letters (a-z), the numerals (0-9), a label representing 'space' (<space>), and punctuation symbols (e.g., the apostrophe symbol ('), hyphen (-), etc.).
The prediction network used in the RNN transducer models, both with and without attention, consists of a single layer of 500 LSTM cells with coupled input and forget gate (CIFG) [46], and the joint network consists of a single feed-forward layer of 500 units with a tanh activation function, as described in Section 2.2. The decoder network (including prediction network and the joint network) has 1.5 million parameters in total.
The RNN transducer models are decoded using a beam-search algorithm [16], where at most the 50 highest-scoring candidates are retained at every step during decoding. In general, the output posterior distribution of sequence-to-sequence models like RNN-T is peaky (i.e., low entropy); such over-confidence is typically suboptimal for keyword spotting, since diversity in hypotheses is critical to reduce the number of false rejects. We find that smoothing the output posteriors with a temperature τ, i.e., mapping each posterior to its τ-th root and renormalizing, can help improve KWS performance significantly. The optimal temperature value is determined by tuning on the development set; we set τ = 2.0 for all RNN-T models without attention, and τ = 2.2 for the ones with attention. However, smoothing the output posteriors of the CTC acoustic model does not help, possibly because it does not combine well with the LM.
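Temperature smoothing of the output posteriors, as described above, amounts to taking the τ-th root and renormalizing; a minimal sketch:

    import numpy as np

    def smooth_posteriors(posteriors, tau=2.0):
        # Flatten the peaky distribution over {Y ∪ <b>} before beam search.
        smoothed = np.asarray(posteriors) ** (1.0 / tau)
        return smoothed / smoothed.sum(axis=-1, keepdims=True)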
The language model (LM) used in the keyword-filler model with CTC is trained to predict phoneme targets on the same ∼22M utterances used for training the RNN-T models. The LM is pruned to ∼1.5 million 6-grams using entropy pruning, similar to the number of parameters in the decoder of our RNN-T models. We choose n = 6, which is optimized on the development set.
The LM for our embedded LVCSR system is a standard word-level 5-gram, which is trained on a larger corpus of ∼100M automatically-transcribed anonymized utterances extracted from Google voice-search traffic. This LM is also pruned to ∼1.5 million n-grams using entropy pruning. The vocabulary is limited to 64K words, allowing us to shrink the data structures used to maintain the LM [33]. Note that the fixed vocabulary results in out-of-vocabulary keywords on the development and test sets. Utterances are decoded with a heavily pruned version of the LM in the first pass, while rescoring with the full LM on-the-fly, thus allowing us to reduce the size of the decoder graph used in the first pass.
All models are trained using asynchronous stochastic gradient descent [47], and are implemented in TensorFlow [48].
RESULTS
Baselines
The performance of our CTC-trained "keyword-filler" baseline models is shown in Figure 3. As can be seen, we find that a phoneme language model is important for a keyword-filler system, even with a strong CTC model, at extremely low FA rates (≤ 0.05 FAs per hour). The effect of the language model can be seen by comparing the different levels of constraints added in the keyword-filler graphs.
CTC with unweighted phoneme loops allows for arbitrary phoneme paths in the graph to be treated as equally likely, thus entirely relying on the CTC model to recognize the keyword from the background, which performs the worst. Adding word constraints in the graph, albeit without weights, helps to improve performance since it eliminates many confusable paths that correspond to invalid words. Note that in this case, we can add the keyword phrase into the vocabulary for the search since the word loop filler models are unweighted.
The addition of a phoneme language model without the <eow> token helps to recognize phoneme sequences in context, but does not account for word constraints. As described in Section 3.2, this model has an increased number of false triggers (e.g., the keyword Erica is detected incorrectly in utterances containing the word America). The addition of <eow> to the phoneme language models, however, significantly improves performance over the other baseline systems.
For reference, a KWS system constructed from the embedded LVCSR system, as described in Section 3.1, achieves an FR rate of 29.8% at 0.05 FAs per hour on the test set for only in-vocabulary keywords (196 out of total 228 keywords), while the best CTC system above achieves 13.4% on the same set.
RNN-T Models with Grapheme and Phoneme Targets
Compared to CTC with a phoneme n-gram LM, an RNN-T model with phoneme targets jointly trains an acoustic model component and a language model component in a single all-neural system. As can be seen from Figure 4, an RNN-T phoneme model (with <eow>) outperforms the best CTC baseline. If the <eow> token is not used, however, the RNN-T phoneme system has significantly higher false alarms, as explained in Section 3.2.
The RNN-T system trained to predict grapheme targets performs worse than the one trained with phoneme targets. We conducted an analysis to determine the cause of this performance degradation and found that it is partly due to variant orthographic representations of some of the keyword phrases: e.g., the keyword kathryn is encountered very rarely in the training data, and as a result the RNN-T model typically recognizes these examples as catherine, which is more common in the training data. We therefore considered a variant system where we replace each keyword with its most frequent orthographic representation (as determined by its unigram probability) during the search. This technique significantly improves false reject rates for the RNN-T grapheme system from 15.5% to 14.0% at 0.05 FAs per hour; however, this system was still worse than the RNN-T phoneme model, which achieves an FR rate of 11.1% at 0.05 FAs per hour.
RNN-T with Keyword Biasing
We train an RNN-T phoneme system with <eow> and <eokw> labels, setting p_kw = 0.5, determined by tuning on the development set. As is shown in Figure 6, adding attention-based keyword biasing to an RNN-T phoneme system improves the overall performance significantly. The final results are reported on the test set, where CTC, RNN-T phoneme, and RNN-T phoneme with biasing achieve 14.5%, 11.1%, and 8.9% false reject rates, respectively, at 0.05 FAs per hour.
We also plot a histogram of the FR rates across keywords at a threshold corresponding to 0.05 FAs per hour for the RNN-T phoneme system with keyword biasing in Figure 7. As can be seen in the figure, most of the keywords have low FR rates in the 0-15% range, with only a few outliers.
Finally, in Figure 5 we plot representative examples of the attention weights α_{j,u} computed by the attention model during inference on a positive (Figure 5(a)) and a negative (Figure 5(b)) utterance extracted from the training data. These plots were generated by feeding as input the expected target label sequence (i.e., the labels are not determined by a beam-search decoding).
As can be seen in the figure, when decoding the positive utterance, the attention weights are concentrated on the first target.

Fig. 7: Histogram of keyword-specific false reject rates for the RNN-T phoneme system with keyword biasing at 0.05 FAs per hour, plotted for the keywords on the test set.
When the model begins to predict the phonemes corresponding to the keyword (sounds (s aU n d z)), the attention weights are focused on consecutive keyword targets, as revealed by the prominent diagonal pattern (although, admittedly, the model also appears to attend to other keyword targets during this process). We also note the prominent attention weight assigned to the <n/a> label after the keyword has been detected.
In the case of the negative utterance, however, the attention does not evolve diagonally across the labels, but is instead spread across the second keyword target (i.e., the initial part of the hotword) and the <n/a> label.
CONCLUSIONS
In this work, we developed streaming keyword spotting systems using a recurrent neural network transducer, a sequence-to-sequence model that jointly trains acoustic and language model components. We proposed a novel technique which biases the RNN-T system towards a specific keyword of interest based on an attention mechanism over the keyword. In experimental evaluations, we find that our RNN-T system trained with phoneme targets performs significantly better on keyword spotting than a strong CTC-based keyword-filler baseline which is augmented with a phoneme n-gram LM. We also find that the proposed biasing technique provides further gains over the vanilla RNN-T model.
Fig. 1: A schematic representation of the models used in this work.

Fig. 2: Two decoder graphs representing the building blocks of our baseline CTC-based keyword spotters.

Fig. 3: Comparison among multiple CTC baseline systems on the test set.

Fig. 4: A comparison of the various RNN-T systems against the best performing CTC baseline on the test set.

Fig. 5: Attention matrices for two representative utterances computed by the RNN-T phoneme system with keyword biasing. The Y-axis corresponds to targets k_1, · · · , k_{M+1} in the keyword k. The X-axis corresponds to the expected sequence of phoneme targets given the utterance transcript. The entry at row j and column u corresponds to α_{j,u} in Equation 5, with the values in each column summing to 1. Brighter colors correspond to values closer to 1, while darker colors correspond to values closer to 0.

Fig. 6: A comparison of the RNN-T phoneme model with keyword biasing against the best CTC baseline and the RNN-T phoneme system without biasing on the test set. All systems use the <eow> token.
(a) Attention matrix of a positive utterance for the keyword "sounds", with the transcript "sounds good". (b) Attention matrix of a negative utterance for the keyword "afternoon", with the transcript "you're welcome you know".
REFERENCES

[1] J. G. Fiscus, J. Ajot, J. S. Garofolo, and G. Doddingtion, "Results of the 2006 spoken term detection evaluation," in Proc. of Special Interest Group on Information Retrieval (SIGIR), 2007, pp. 51-57.
[2] D. R. H. Miller, M. Kleber, C.-L. Kao, O. Kimball, T. Colthurst, S. A. Lowe, R. M. Schwartz, and H. Gish, "Rapid and accurate spoken term detection," in Proc. of Interspeech, 2007.
[3] D. Vergyri, I. Shafran, A. Stolcke, R. R. Gadde, M. Akbacak, B. Roark, and W. Wang, "The SRI/OGI 2006 spoken term detection system," in Proc. of Interspeech, 2007.
[4] G. Chen, C. Parada, and G. Heigold, "Small footprint keyword spotting using deep neural networks," in Proc. of ICASSP, 2014.
[5] R. Prabhavalkar, R. Alvarez, C. Parada, P. Nakkiran, and T. N. Sainath, "Automatic gain control and multi-style training for robust small-footprint keyword spotting with deep neural networks," in Proc. of ICASSP, 2015.
[6] G. Tucker, M. Wu, M. Sun, S. Panchapagesan, G. Fu, and S. Vitaladevuni, "Model compression applied to small-footprint keyword spotting," in Proc. of Interspeech, 2016.
[7] T. N. Sainath and C. Parada, "Convolutional neural networks for small footprint keyword spotting," in Proc. of Interspeech, 2015.
[8] M. Sun, A. Raju, G. Tucker, S. Panchapagesan, G. Fu, A. Mandal, S. Matsoukas, N. Strom, and S. Vitaladevuni, "Max-pooling loss training of long short-term memory networks for small-footprint keyword spotting," in Proc. of IEEE Spoken Language Technology Workshop (SLT), 2016, pp. 474-480.
[9] S. Ö. Arık, M. Kliegl, R. Child, J. Hestness, A. Gibiansky, C. Fougner, R. Prenger, and A. Coates, "Convolutional recurrent neural networks for small-footprint keyword spotting," ArXiv e-prints, March 2017.
[10] S. Fernández, A. Graves, and J. Schmidhuber, "An application of recurrent neural networks to discriminative keyword spotting," in Proc. of International Conference on Artificial Neural Networks (ICANN), 2007, pp. 220-229.
[11] B. Li, T. N. Sainath, J. Caroselli, A. Narayanan, M. Bacchiani, A. Misra, I. Shafran, H. Sak, G. Pundak, K. Chin, K. Sim, R. J. Weiss, K. W. Wilson, E. Variani, C. Kim, O. Siohan, M. Weintraub, E. McDermott, R. Rose, and M. Shannon, "Acoustic modeling for Google Home," in Proc. of Interspeech, 2017.
[12] J. Keshet, D. Grangier, and S. Bengio, "Discriminative keyword spotting," Speech Communication, vol. 51, no. 4, pp. 317-329, 2009.
[13] R. Prabhavalkar, K. Livescu, E. Fosler-Lussier, and J. Keshet, "Discriminative articulatory models for spoken term detection in low-resource conversational settings," in Proc. of ICASSP, 2013, pp. 8287-8291.
[14] T. J. Hazen, W. Shen, and C. M. White, "Query-by-example spoken term detection using phonetic posteriorgram templates," in Proc. of ASRU, 2009, pp. 421-426.
[15] G. Chen, C. Parada, and T. N. Sainath, "Query-by-example keyword spotting using long short-term memory networks," in Proc. of ICASSP, 2015.
[16] A. Graves, "Sequence transduction with recurrent neural networks," in Proc. of ICASSP, 2012.
[17] A. Graves, A.-R. Mohamed, and G. E. Hinton, "Speech recognition with deep recurrent neural networks," in Proc. of International Conference on Machine Learning: Representation Learning Workshop, 2013.
[18] H. Sak, M. Shannon, K. Rao, and F. Beaufays, "Recurrent neural aligner: An encoder-decoder neural network model for sequence-to-sequence mapping," in Proc. of Interspeech, 2017.
[19] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labeling unsegmented sequence data with recurrent neural networks," in Proc. of the International Conference on Machine Learning (ICML), 2006.
[20] K. Hwang, M. Lee, and W. Sung, "Online keyword spotting with a character-level recurrent neural network," ArXiv e-prints, December 2015.
[21] C. Lengerich and A. Hannun, "An end-to-end architecture for keyword spotting and voice activity detection," ArXiv e-prints, November 2016.
[22] Y. Bai, J. Yi, H. Ni, Z. Wen, B. Liu, Y. Li, and J. Tao, "End-to-end keywords spotting based on connectionist temporal classification for Mandarin," in Proc. of International Symposium on Chinese Spoken Language Processing (ISCSLP), 2016, pp. 1-5.
[23] H. Soltau, H. Liao, and H. Sak, "Neural speech recognizer: Acoustic-to-word LSTM model for large vocabulary speech recognition," ArXiv e-prints, October 2016.
[24] W. Chan, N. Jaitly, Q. Le, and O. Vinyals, "Listen, attend and spell: A neural network for large vocabulary conversational speech recognition," in Proc. of ICASSP, 2016, pp. 4960-4964.
[25] D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, and Y. Bengio, "End-to-end attention-based large vocabulary speech recognition," in Proc. of ICASSP, 2016, pp. 4945-4949.
[26] L. Lu, X. Zhang, and S. Renals, "On training the recurrent neural network encoder-decoder for large vocabulary end-to-end speech recognition," in Proc. of ICASSP, 2016.
[27] R. Prabhavalkar, K. Rao, T. N. Sainath, B. Li, L. Johnson, and N. Jaitly, "A comparison of sequence-to-sequence models for speech recognition," in Proc. of Interspeech, 2017.
[28] R. Prabhavalkar, T. N. Sainath, B. Li, K. Rao, and N. Jaitly, "An analysis of "attention" in sequence-to-sequence models," in Proc. of Interspeech, 2017.
[29] Y. Zhuang, X. Chang, Y. Qian, and K. Yu, "Unrestricted vocabulary keyword spotting using LSTM-CTC," in Proc. of Interspeech, 2016, pp. 938-942.
[30] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, November 1997.
[31] A. Rosenberg, K. Audhkhasi, A. Sethy, B. Ramabhadran, and M. Picheny, "End-to-end speech recognition and keyword search on low-resource languages," in Proc. of ICASSP, 2017, pp. 5280-5284.
[32] K. Audhkhasi, A. Rosenberg, A. Sethy, B. Ramabhadran, and B. Kingsbury, "End-to-end ASR-free keyword search from speech," in Proc. of ICASSP, 2017, pp. 4840-4844.
[33] I. McGraw, R. Prabhavalkar, R. Alvarez, M. Gonzalez Arenas, K. Rao, D. Rybach, O. Alsharif, H. Sak, A. Gruenstein, F. Beaufays, and C. Parada, "Personalized speech recognition on mobile devices," in Proc. of ICASSP, 2016.
[34] H. Sak, A. W. Senior, K. Rao, and F. Beaufays, "Fast and accurate recurrent neural network acoustic models for speech recognition," in Proc. of Interspeech, 2015.
[35] Y. Miao, M. Gowayyed, and F. Metze, "EESEN: End-to-end speech recognition using deep RNN models and WFST-based decoding," in Proc. of ASRU, 2015, pp. 167-174.
[36] I. Szöke, P. Schwarz, P. Matějka, L. Burget, M. Karafiát, and J. Černocký, "Phoneme based acoustics keyword spotting in informal continuous speech," pp. 302-309, Springer Berlin Heidelberg, Berlin, Heidelberg, 2005.
[37] R. C. Rose and D. B. Paul, "A hidden Markov model based keyword recognition system," in Proc. of ICASSP, 1990, pp. 129-132.
[38] G. Wang and K. C. Sim, "Context dependent acoustic keyword spotting using deep neural network," in Proc. of APSIPA, 2013, pp. 1-10.
[39] I.-F. Chen, C. Ni, B. P. Lim, N. F. Chen, and C.-H. Lee, "A keyword-aware grammar framework for LVCSR-based spoken keyword search," in Proc. of ICASSP, 2015, pp. 5196-5200.
[40] M. Weintraub, "Keyword-spotting using SRI's DECIPHER large-vocabulary speech-recognition system," in Proc. of ICASSP, 1993, pp. 463-466.
[41] C. Cortes and M. Mohri, "Confidence intervals for the area under the ROC curve," in Proc. of NIPS, 2004.
[42] B. Zhang, R. Schwartz, S. Tsakalidis, L. Nguyen, and S. Matsoukas, "White listing and score normalization for keyword spotting of noisy speech," in Proc. of Interspeech, 2012.
[43] R. Alvarez, R. Prabhavalkar, and A. Bakhtin, "On the efficient representation and execution of deep acoustic models," in Proc. of Interspeech, 2016.
[44] R. Prabhavalkar, O. Alsharif, A. Bruguier, and I. McGraw, "On the compression of recurrent neural networks with an application to LVCSR acoustic modeling for embedded speech recognition," in Proc. of ICASSP, 2016, pp. 5970-5974.
[45] M. Shannon, "Optimizing expected word error rate via sampling for speech recognition," in Proc. of Interspeech, 2017.
[46] K. Greff, R. K. Srivastava, J. Koutník, B. R. Steunebrink, and J. Schmidhuber, "LSTM: A search space odyssey," IEEE Transactions on Neural Networks and Learning Systems, 2016.
[47] J. Dean, G. S. Corrado, R. Monga, K. Chen, M. Devin, Q. V. Le, M. Z. Mao, M. Ranzato, A. Senior, P. Tucker, K. Yang, and A. Y. Ng, "Large scale distributed deep networks," in Proc. of NIPS, 2012, pp. 1223-1231.
[48] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," available online: http://download.tensorflow.org/paper/whitepaper2015.pdf, 2015.
| [] |
[
"Quantum Interference in Cognition: Structural Aspects of the Brain",
"Quantum Interference in Cognition: Structural Aspects of the Brain"
] | [
"Diederik Aerts \nCenter Leo Apostel for Interdisciplinary Studies\nBrussels Free University\nKrijgskundestraat 331160BrusselsBelgium\n",
"Sandro Sozzo ssozzo@vub.ac.be \nCenter Leo Apostel for Interdisciplinary Studies\nBrussels Free University\nKrijgskundestraat 331160BrusselsBelgium\n"
] | [
"Center Leo Apostel for Interdisciplinary Studies\nBrussels Free University\nKrijgskundestraat 331160BrusselsBelgium",
"Center Leo Apostel for Interdisciplinary Studies\nBrussels Free University\nKrijgskundestraat 331160BrusselsBelgium"
] | [] | We identify the presence of typically quantum effects, namely superposition and interference, in what happens when human concepts are combined, and provide a quantum model in complex Hilbert space that represents faithfully experimental data measuring the situation of combining concepts. Our model shows how 'interference of concepts' explains the effects of underextension and overextension when two concepts combine to the disjunction of these two concepts. This result supports our earlier hypothesis that human thought has a superposed two-layered structure, one layer consisting of classical logical thought and a superposed layer consisting of quantum conceptual thought. Possible connections with recent findings of a grid-structure for the brain are analyzed, and influences on the mind/brain relation, and consequences on applied disciplines, such as artificial intelligence and quantum computation, are considered. | null | [
"https://arxiv.org/pdf/1204.4914v1.pdf"
] | 9,719,308 | 1204.4914 | 75f514e735e3141e4a833d2742f2a44481e15167 |
Quantum Interference in Cognition: Structural Aspects of the Brain
Diederik Aerts
Center Leo Apostel for Interdisciplinary Studies
Brussels Free University
Krijgskundestraat 33, 1160 Brussels, Belgium
Sandro Sozzo ssozzo@vub.ac.be
Center Leo Apostel for Interdisciplinary Studies
Brussels Free University
Krijgskundestraat 33, 1160 Brussels, Belgium
Quantum Interference in Cognition: Structural Aspects of the Brain
concept theory, quantum cognition, cognitive processes, interference, brain structure
We identify the presence of typically quantum effects, namely superposition and interference, in what happens when human concepts are combined, and provide a quantum model in complex Hilbert space that faithfully represents experimental data measuring the situation of combining concepts. Our model shows how 'interference of concepts' explains the effects of underextension and overextension when two concepts combine into the disjunction of these two concepts. This result supports our earlier hypothesis that human thought has a superposed two-layered structure, one layer consisting of classical logical thought and a superposed layer consisting of quantum conceptual thought. Possible connections with recent findings of a grid-structure for the brain are analyzed, and influences on the mind/brain relation, as well as consequences for applied disciplines such as artificial intelligence and quantum computation, are considered.
Introduction
In recent years it has become clear that quantum structures do not only appear in situations of the micro world, but that situations of the macro world also exhibit quantum behavior [1]-[18]. Mainly in domains such as cognitive science (decision theory, concept theory), biology (evolution theory, ecology, population dynamics) and computer science (semantic theories, information retrieval, artificial intelligence), aspects have been identified where the application of classical structures is problematic while the application of quantum structures is promising. The aspects of these domains where classical theories fail, and quantum structures are successful, reveal quite systematically four specific and very characteristic quantum effects, namely interference, contextuality, emergence and entanglement. Sometimes it has been possible to use the full quantum apparatus of linear operators in complex Hilbert space to model these effects as they appear in these situations. However, in quite a few cases a mathematical formalism more general than standard quantum mechanics in complex Hilbert space is needed. We introduced in [19] a general modeling scheme for contextual emergent entangled interfering entities. In the present article we instead focus on the identification of quantum superposition and interference in cognition, to explain 'how' and 'why' interference models the well-documented effects of overextension and underextension when concepts combine in disjunction [20]. Possible connections with some recent and interesting research on the structure of the brain, and technological applications to symbolic artificial intelligence and computation, are also presented.
Interference effects have been studied in great detail and are very common for quantum entities, the famous 'double-slit situation' being an archetypal example [21]-[26]. Also for concepts we have studied some effects related to the phenomenon of interference in earlier work [10, 19], [27]-[29]. In the present article, we concentrate on the situation where two concepts, more specifically the concepts Fruits and Vegetables, are combined by using the logical connective 'or' into a new concept, 'Fruits or Vegetables'. Such disjunctive combinations of concepts have been studied intensively by James Hampton [20]. Hampton collected experimental data from subjects asked to estimate the typicality of a collection of exemplars with respect to Fruits and with respect to Vegetables. He then asked the subjects to also estimate the typicality of the same exemplars with respect to the combination 'Fruits or Vegetables'. By using the data of these experiments we identify interference between the concepts Fruits and Vegetables, and explain how this interference accounts for the effects of underextension and overextension identified by Hampton. In Sec. 2 we consider the set of data collected by Hampton, and work out a quantum description modeling these data. In Sec. 3 we illustrate the phenomenon of interference as it appears in the considered conceptual combination, and in Sec. 4 we present an explanation for the occurrence of this quantum effect by comparing it with the interference typical of the two-slit experiment. This modeling suggests the hypothesis, put forward in Sec. 5, that a quantum conceptual layer is present in human thought which is superposed onto the usually assumed classical logical thought, the former being responsible for deviations from classically expected behavior in cognition. Finally, we present in Sec. 6 a suggestion inspired by recent research in which a grid pattern, rather than a neural network pattern, is identified in the structure of the brain [30]. More specifically, we put forward the hypothesis, albeit speculative, that the interference we identify between concepts, and the complex Hilbert space that we use to structurally model this interference, might contain elements that have isomorphic counterparts in the dynamics of the brain. Aspects of the impact of this hypothesis on the modeling and formalizing of natural and artificial knowledge, as well as its implications for artificial intelligence, robotics and quantum computation, are also examined.
Fruits interfering with Vegetables
Let us consider the two concepts Fruits and Vegetables, and their combination 'Fruits or Vegetables', and work out a quantum model for the data collected by J. Hampton for this situation [20, 27]. The concepts Fruits and Vegetables are two exemplars of the concept Food, and we consider a collection of exemplars of Food, more specifically those listed in Tab. 1. We then consider the following experimental situation, in which subjects are asked to respond to three elements. Question A: 'Choose one of the exemplars from the list of Tab. 1 that you find a good example of Fruits'. Question B: 'Choose one of the exemplars from the list of Tab. 1 that you find a good example of Vegetables'. Question A or B: 'Choose one of the exemplars from the list of Tab. 1 that you find a good example of Fruits or Vegetables'. Then we calculate the relative frequencies $\mu(A)_k$, $\mu(B)_k$ and $\mu(A \text{ or } B)_k$, i.e. the number of times that exemplar $k$ is chosen divided by the total number of choices made in response to the three questions A, B and A or B, respectively, and interpret these as estimates for the probabilities that exemplar $k$ is chosen for questions A, B and A or B, respectively. These relative frequencies are given in Tab. 1. For example, for Question A, from 10,000 subjects, 359 chose Almond, hence $\mu(A)_1 = 0.0359$, 425 chose Acorn, hence $\mu(A)_2 = 0.0425$, 372 chose Peanut, hence $\mu(A)_3 = 0.0372$, ..., and 127 chose Black Pepper, hence $\mu(A)_{24} = 0.0127$. Analogously, for Question B, from 10,000 subjects, 133 chose Almond, hence $\mu(B)_1 = 0.0133$, 108 chose Acorn, hence $\mu(B)_2 = 0.0108$, 220 chose Peanut, hence $\mu(B)_3 = 0.0220$, ..., and 294 chose Black Pepper, hence $\mu(B)_{24} = 0.0294$; and for Question A or B, 269 chose Almond, hence $\mu(A \text{ or } B)_1 = 0.0269$, 249 chose Acorn, hence $\mu(A \text{ or } B)_2 = 0.0249$, 269 chose Peanut, hence $\mu(A \text{ or } B)_3 = 0.0269$, ..., and 222 chose Black Pepper, hence $\mu(A \text{ or } B)_{24} = 0.0222$.
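As a small illustration of how these estimates are obtained, the following sketch (ours, not part of the original study) computes the relative frequencies just quoted; the subject counts are the illustrative ones from the text.

```python
# Relative-frequency estimates of mu(A)_k, mu(B)_k and mu(A or B)_k from raw
# choice counts; the counts are the illustrative figures quoted above.
n_subjects = 10_000

mu_A_almond = 359 / n_subjects        # Question A: 0.0359
mu_B_almond = 133 / n_subjects        # Question B: 0.0133
mu_AorB_almond = 269 / n_subjects     # Question "A or B": 0.0269
```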
Let us now explicitly construct a quantum mechanical model in complex Hilbert space for the pair of concepts Fruits and Vegetables and their disjunction 'Fruits or Vegetables', and show that quantum interference models the experimental data gathered in [20]. We represent the measurement of 'a good example of' by means of a self-adjoint operator with spectral decomposition $\{M_k \mid k = 1, \ldots, 24\}$, where each $M_k$ is an orthogonal projection of the Hilbert space $\mathcal{H}$ corresponding to item $k$ from the list of items in Tab. 1. The concepts Fruits, Vegetables and 'Fruits or Vegetables' are represented by unit vectors $|A\rangle$, $|B\rangle$ and $\frac{1}{\sqrt{2}}(|A\rangle + |B\rangle)$ of $\mathcal{H}$, where $|A\rangle$ and $|B\rangle$ are orthogonal. Following standard quantum rules we have $\mu(A)_k = \langle A|M_k|A\rangle$ and $\mu(B)_k = \langle B|M_k|B\rangle$, hence

$$\mu(A \text{ or } B)_k = \frac{1}{2}\langle A+B|M_k|A+B\rangle = \frac{1}{2}\big(\mu(A)_k + \mu(B)_k\big) + \Re\langle A|M_k|B\rangle \qquad (1)$$
where $\Re\langle A|M_k|B\rangle$ is the interference term. Let us introduce $|e_k\rangle$, the unit vector on $M_k|A\rangle$, and $|f_k\rangle$, the unit vector on $M_k|B\rangle$, and put $\langle e_k|f_k\rangle = c_k e^{i\gamma_k}$. Then we have $|A\rangle = \sum_{k=1}^{24} a_k e^{i\alpha_k}|e_k\rangle$ and $|B\rangle = \sum_{k=1}^{24} b_k e^{i\beta_k}|f_k\rangle$, which gives

$$\langle A|B\rangle = \Big(\sum_{k=1}^{24} a_k e^{-i\alpha_k}\langle e_k|\Big)\Big(\sum_{l=1}^{24} b_l e^{i\beta_l}|f_l\rangle\Big) = \sum_{k=1}^{24} a_k b_k c_k e^{i\phi_k} \qquad (2)$$
where we put $\phi_k = \beta_k - \alpha_k + \gamma_k$. Further we have $\mu(A)_k = a_k^2$, $\mu(B)_k = b_k^2$ and $\langle A|M_k|B\rangle = a_k b_k c_k e^{i\phi_k}$, which gives, by using (1),

$$\mu(A \text{ or } B)_k = \frac{1}{2}\big(\mu(A)_k + \mu(B)_k\big) + c_k\sqrt{\mu(A)_k\,\mu(B)_k}\,\cos\phi_k \qquad (3)$$
We choose $\phi_k$ such that

$$\cos\phi_k = \frac{2\mu(A \text{ or } B)_k - \mu(A)_k - \mu(B)_k}{2c_k\sqrt{\mu(A)_k\,\mu(B)_k}} \qquad (4)$$
and hence (3) is satisfied. We now have to determine $c_k$ in such a way that $\langle A|B\rangle = 0$. Recall that from $\sum_{k=1}^{24}\mu(A \text{ or } B)_k = 1$ and (3), and with the choice of $\cos\phi_k$ that we made in (4), it follows that $\sum_{k=1}^{24} c_k\sqrt{\mu(A)_k\,\mu(B)_k}\,\cos\phi_k = 0$. Taking into account (2), which gives $\langle A|B\rangle = \sum_{k=1}^{24} a_k b_k c_k(\cos\phi_k + i\sin\phi_k)$, and making use of $\sin\phi_k = \pm\sqrt{1-\cos^2\phi_k}$, we have

$$\langle A|B\rangle = 0 \iff \sum_{k=1}^{24} c_k\sqrt{\mu(A)_k\,\mu(B)_k}\,(\cos\phi_k + i\sin\phi_k) = 0 \iff \sum_{k=1}^{24} c_k\sqrt{\mu(A)_k\,\mu(B)_k}\,\sin\phi_k = 0 \iff \sum_{k=1}^{24} \pm\sqrt{c_k^2\,\mu(A)_k\,\mu(B)_k - \Big(\mu(A \text{ or } B)_k - \frac{\mu(A)_k + \mu(B)_k}{2}\Big)^2} = 0 \qquad (5)$$
We introduce the following quantities

$$\lambda_k = \pm\sqrt{\mu(A)_k\,\mu(B)_k - \Big(\mu(A \text{ or } B)_k - \frac{\mu(A)_k + \mu(B)_k}{2}\Big)^2} \qquad (6)$$
and choose $m$, the index for which $|\lambda_m|$ is the biggest of the $|\lambda_k|$'s. Then we take $c_k = 1$ for $k \neq m$. We now explain the algorithm that we use to choose a plus or minus sign for each $\lambda_k$ as defined in (6), with the aim of being able to determine $c_m$ such that (5) is satisfied. We start by choosing a plus sign for $\lambda_m$. Then we choose a minus sign in (6) for the $\lambda_k$ for which $|\lambda_k|$ is the second biggest; let us call the index of this term $m_2$. This means that $0 \le \lambda_m + \lambda_{m_2}$. For the $\lambda_k$ for which $|\lambda_k|$ is the third biggest (let us call the index of this term $m_3$) we choose a minus sign in case $0 \le \lambda_m + \lambda_{m_2} - |\lambda_{m_3}|$, and otherwise we choose a plus sign; in both cases we have $0 \le \lambda_m + \lambda_{m_2} + \lambda_{m_3}$. We continue this way of choosing, always considering the next biggest $|\lambda_k|$, and hence arrive at a global choice of signs for all of the $\lambda_k$, such that $0 \le \lambda_m + \sum_{k \neq m}\lambda_k$. Then we determine $c_m$ such that (5) is satisfied, or more specifically such that

$$c_m = \frac{\sqrt{\big(-\sum_{k \neq m}\lambda_k\big)^2 + \Big(\mu(A \text{ or } B)_m - \frac{\mu(A)_m + \mu(B)_m}{2}\Big)^2}}{\sqrt{\mu(A)_m\,\mu(B)_m}} \qquad (7)$$
We choose the sign for $\phi_k$ as defined in (4) equal to the sign of $\lambda_k$. The result of the specific solution that we have constructed is that we can take $M_k(\mathcal{H})$ to be rays of dimension 1 for $k \neq m$, and $M_m(\mathcal{H})$ to be a plane. This means that we can make our solution still more explicit. Indeed, we take $\mathcal{H} = \mathbb{C}^{25}$, the canonical 25-dimensional complex Hilbert space, and make the following choices:

$$|A\rangle = \big(\sqrt{\mu(A)_1},\ \ldots,\ \sqrt{\mu(A)_m},\ \ldots,\ \sqrt{\mu(A)_{24}},\ 0\big) \qquad (8)$$

$$|B\rangle = \big(e^{i\beta_1}\sqrt{\mu(B)_1},\ \ldots,\ c_m e^{i\beta_m}\sqrt{\mu(B)_m},\ \ldots,\ e^{i\beta_{24}}\sqrt{\mu(B)_{24}},\ \sqrt{\mu(B)_m(1 - c_m^2)}\big) \qquad (9)$$

$$\beta_m = \arccos\Big(\frac{2\mu(A \text{ or } B)_m - \mu(A)_m - \mu(B)_m}{2c_m\sqrt{\mu(A)_m\,\mu(B)_m}}\Big) \qquad (10)$$

$$\beta_k = \pm\arccos\Big(\frac{2\mu(A \text{ or } B)_k - \mu(A)_k - \mu(B)_k}{2\sqrt{\mu(A)_k\,\mu(B)_k}}\Big) \qquad (11)$$
where the plus or minus sign in (11) is chosen following the algorithm we introduced for choosing the plus and minus signs for the $\lambda_k$ in (6). Let us construct this quantum model for the data given in Tab. 1. The exemplar which gives rise to the biggest value of $|\lambda_k|$ is Tomato, and hence we choose a plus sign and get $\lambda_{19} = 0.0768$. The exemplar giving rise to the second biggest value of $|\lambda_k|$ is Pumpkin, and hence we choose a minus sign and get $\lambda_{20} = -0.0733$. Next comes Yam, and since $\lambda_{19} + \lambda_{20} - 0.0615 < 0$, we choose a plus sign for $\lambda_{18}$. Next is Green Pepper, and we look at $0 \le \lambda_{19} + \lambda_{20} + \lambda_{18} - 0.0503$, which means that we can choose a minus sign for $\lambda_{17}$. The fifth exemplar in the row is Apple. We have $\lambda_{19} + \lambda_{20} + \lambda_{18} + \lambda_{17} - 0.0428 < 0$, which means that we need to choose a plus sign for $\lambda_8$. Next comes Broccoli, and verifying shows that we can choose a minus sign for $\lambda_{21}$. We determine in an analogous way the signs for the exemplars Raisin (plus sign), Elderberry (minus sign), Olive (plus sign), Peanut (minus sign), Chili Pepper (minus sign), Coconut (plus sign), Watercress (minus sign), Lentils (plus sign), Rice (minus sign), Almond (plus sign), Acorn (minus sign), Black Pepper (plus sign), Mustard (minus sign), Wheat (plus sign), Parsley (minus sign), Root Ginger (plus sign), Garlic (minus sign), and finally Mushroom (plus sign). In Tab. 1 we give the values of $\lambda_k$ calculated following this algorithm, and from (7) it follows that $c_{19} = 0.7997$. Making use of (8), (9), (10) and (11), and the values of the angles given in Tab. 1, we put forward the following explicit representation of the vector $|B\rangle$ in $\mathbb{C}^{25}$ representing the concept Vegetables (the vector $|A\rangle$ follows directly from (8) and the values $\mu(A)_k$ in Tab. 1):
$$|B\rangle = (0.1154e^{i83.8854^\circ},\ 0.1040e^{-i94.5520^\circ},\ 0.1484e^{-i95.3620^\circ},\ 0.1640e^{i91.8715^\circ},\ 0.1120e^{i57.9533^\circ},\ 0.1302e^{i95.8648^\circ},\ 0.1302e^{-i113.2431^\circ},\ 0.1246e^{i87.6039^\circ},\ 0.1580e^{-i105.9806^\circ},\ 0.1596e^{i99.3810^\circ},\ 0.1798e^{i50.0889^\circ},\ 0.2112e^{-i86.4374^\circ},\ 0.1734e^{-i57.6399^\circ},\ 0.2334e^{i18.6744^\circ},\ 0.2565e^{-i69.0705^\circ},\ 0.2670e^{i104.7126^\circ},\ 0.2806e^{-i95.6518^\circ},\ 0.2690e^{i98.0833^\circ},\ 0.2606e^{i100.7557^\circ},\ 0.2670e^{-i103.4804^\circ},\ 0.3584e^{-i99.6048^\circ},\ 0.2031e^{-i96.6635^\circ},\ 0.1630e^{-i61.1698^\circ},\ 0.1716e^{i86.6308^\circ},\ 0.1565) \qquad (12)$$
This shows that we can model the data of [20] by means of a quantum mechanical model in which the values of $\mu(A \text{ or } B)_k$ are determined from the values of $\mu(A)_k$ and $\mu(B)_k$ as a consequence of quantum interference effects. For each $k$, the value of $\phi_k$ in Tab. 1 gives the quantum interference phase of exemplar $k$.
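To make the whole construction concrete, the following Python sketch (our illustration, not the authors' code) implements the pipeline described above: the $|\lambda_k|$ of (6), the greedy sign choice, $c_m$ via (7), the phases via (4) with signs matching those of the $\lambda_k$, and the assembly of $|A\rangle$ and $|B\rangle$ via (8) and (9). It assumes NumPy, strictly positive and normalized probabilities, and $c_m \le 1$, as holds for the data of Tab. 1.

```python
import numpy as np

def build_model(mu_A, mu_B, mu_AorB):
    """Reconstruct |A> and |B> in C^25 from the 24 probability triples."""
    # Assumes all probabilities strictly positive and each set summing to 1.
    dev = mu_AorB - (mu_A + mu_B) / 2.0                          # interference part of Eq. (3)
    lam_abs = np.sqrt(np.maximum(mu_A * mu_B - dev ** 2, 0.0))   # |lambda_k|, Eq. (6)

    order = np.argsort(-lam_abs)                                 # indices by decreasing |lambda_k|
    m = order[0]
    lam = np.zeros_like(lam_abs)
    lam[m] = lam_abs[m]                                          # plus sign for the biggest term
    running = lam[m]
    for k in order[1:]:                                          # greedy sign choice from the text
        sign = -1.0 if running - lam_abs[k] >= 0.0 else 1.0
        lam[k] = sign * lam_abs[k]
        running += lam[k]

    c = np.ones_like(lam_abs)                                    # c_k = 1 for k != m
    rest = lam.sum() - lam[m]
    c[m] = np.sqrt(rest ** 2 + dev[m] ** 2) / np.sqrt(mu_A[m] * mu_B[m])   # Eq. (7)

    cos_phi = np.clip(dev / (c * np.sqrt(mu_A * mu_B)), -1.0, 1.0)         # Eq. (4)
    beta = np.sign(lam) * np.arccos(cos_phi)                     # sign of phi_k = sign of lambda_k

    A = np.append(np.sqrt(mu_A), 0.0)                                       # Eq. (8)
    B = np.append(c * np.exp(1j * beta) * np.sqrt(mu_B),
                  np.sqrt(max(mu_B[m] * (1.0 - c[m] ** 2), 0.0)))           # Eq. (9)
    return A, B

# Sanity checks one would expect to hold approximately for such data:
#   abs(np.vdot(A, B)) ~ 0, and Eq. (3) reproduces mu_AorB component-wise.
```

Running this on the 24 triples of Tab. 1 should reproduce, up to rounding, the moduli and phases of (12).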
Graphics of the interference patterns
In [27] we worked out a way to 'chart' the quantum interference patterns of two concepts when combined into conjunction or disjunction. Since it helps our further analysis in the present article, we put forward this 'chart' for the case of the concepts Fruits and Vegetables and their disjunction 'Fruits or Vegetables'. More specifically, we represent the concepts Fruits, Vegetables and 'Fruits or Vegetables' by complex valued wave functions of two real variables, $\psi_A(x, y)$, $\psi_B(x, y)$ and $\psi_{A \text{ or } B}(x, y)$. We choose $\psi_A(x, y)$ and $\psi_B(x, y)$ such that the real part of both wave functions is a Gaussian in two dimensions, which is always possible since we have to fit in only 24 values, namely the values of $\psi_A$ and $\psi_B$ for each of the exemplars of Tab. 1. The squares of these Gaussians are graphically represented in Figs. 1 and 2, and the different exemplars of Tab. 1 are located in spots such that the Gaussian distributions $|\psi_A(x, y)|^2$ and $|\psi_B(x, y)|^2$ properly model the probabilities $\mu(A)_k$ and $\mu(B)_k$ in Tab. 1 for each one of the exemplars. For example, for Fruits, represented in Fig. 1, Apple is located at the center of the Gaussian, since Apple was most frequently chosen by the test subjects when asked Question A. Elderberry was the second most frequently chosen, and hence closest to the top of the Gaussian in Fig. 1. Then come Raisin, Tomato and Pumpkin, and so on, with Garlic and Lentils as the least chosen 'good examples' of Fruits. For Vegetables, represented in Fig. 2, Broccoli is located at the center of the Gaussian, since Broccoli was the exemplar most frequently chosen by the test subjects when asked Question B. Green Pepper was the second most frequently chosen, and hence closest to the top of the Gaussian in Fig. 2. Then come Yam, Lentils and Pumpkin, and so on, with Coconut and Acorn as the least chosen 'good examples' of Vegetables. Metaphorically, we could regard the graphical representations of Figs. 1 and 2 as the projections of two light sources, each shining through one of two holes in a plate and spreading out their light intensity following a Gaussian distribution when projected on a screen behind the holes. The center of the first hole, corresponding to the Fruits light source, is located where the exemplar Apple is, at point (0, 0), indicated by 8 in both figures. The center of the second hole, corresponding to the Vegetables light source, is located where the exemplar Broccoli is, at point (10, 4), indicated by 21 in both figures. In Fig. 3 the data for 'Fruits or Vegetables' are graphically represented. This is not 'just' a normalized sum of the two Gaussians of Figs. 1 and 2, since it is the probability distribution corresponding to $\frac{1}{\sqrt{2}}(\psi_A(x, y) + \psi_B(x, y))$, which is the normalized superposition of the wave functions in Figs. 1 and 2. The numbers are placed at the locations of the different exemplars with respect to the probability distribution $\frac{1}{2}|\psi_A(x, y) + \psi_B(x, y)|^2 = \frac{1}{2}(|\psi_A(x, y)|^2 + |\psi_B(x, y)|^2) + |\psi_A(x, y)\psi_B(x, y)|\cos\phi(x, y)$, where $|\psi_A(x, y)\psi_B(x, y)|\cos\phi(x, y)$ is the interference term and $\phi(x, y)$ the quantum phase difference at $(x, y)$. The values of $\phi(x, y)$ are given in Tab. 1 for the locations of the different exemplars. The interference pattern shown in Fig. 3 is very similar to well-known interference patterns of light passing through an elastic material under stress. In our case, it is the interference pattern corresponding to 'Fruits or Vegetables'. Bearing in mind the analogy with the light sources for Figs. 1 and 2, in Fig. 3 we can see the interference pattern produced when both holes are open. Fig. 4 represents a three-dimensional graphic of the interference pattern of Fig. 3, and, for the sake of comparison, in Fig. 5 we have graphically represented the averages of the probabilities of Figs. 1 and 2, i.e. the values that would be measured if there were no interference. For the mathematical details (the exact form of the wave functions and the explicit calculation of the interference pattern) and for other examples of conceptual interference, we refer to [27].
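By way of illustration, the following sketch (ours; all parameters are invented for display purposes rather than fitted to the data of Tab. 1) produces an interference chart of the kind shown in Fig. 3 from two superposed Gaussian wave packets with different phase profiles.

```python
import numpy as np
import matplotlib.pyplot as plt

x, y = np.meshgrid(np.linspace(-10, 20, 400), np.linspace(-10, 15, 400))

def gaussian_packet(cx, cy, sigma, kx, ky):
    envelope = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (4 * sigma ** 2))
    phase = np.exp(1j * (kx * x + ky * y))        # plane-wave phase factor
    psi = envelope * phase
    return psi / np.sqrt((np.abs(psi) ** 2).sum())  # grid-normalized

psi_A = gaussian_packet(0.0, 0.0, 3.0, 0.0, 0.0)   # "Fruits" hole at (0, 0)
psi_B = gaussian_packet(10.0, 4.0, 3.0, 1.2, 0.5)  # "Vegetables" hole at (10, 4)

density = 0.5 * np.abs(psi_A + psi_B) ** 2         # superposed distribution
plt.imshow(density, origin="lower", extent=[-10, 20, -10, 15])
plt.title("Interference pattern of the superposed packets")
plt.show()
```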
Explaining quantum interference
The foregoing section showed how the typicality data of two concepts and their disjunction are quantum mechanically modeled such that the quantum effect of interference accounts for the measured values. We also showed that it is possible to metaphorically picture the situation such that each of the concepts is represented by light passing through a hole, and the disjunction of both concepts corresponds to the situation of the light passing through both holes (see Fig. 6). This is indeed the setting in which interference is best known: the traditional double-slit situation in optics and quantum physics. If we apply this to our specific example by analogy, we can imagine the cognitive experiment where a subject chooses the most appropriate answer for one of the concepts, e.g. Fruits, as follows: 'the photon passes with the Fruits hole open and hits a screen behind the hole in the region where the choice of the person is located'. We can do the same for the cognitive experiment where the subject chooses the most appropriate answer for the concept Vegetables. This time the photon passes with the Vegetables hole open and hits the screen in the region where the choice of the person is located. The third situation, corresponding to the choice of the most appropriate answer for the disjunction concept 'Fruits or Vegetables', consists in the photon passing with both the Fruits hole and the Vegetables hole open and hitting the screen where the choice of the person is located. This third situation is the situation of interference, viz. the interference between Fruits and Vegetables. These three situations are clearly illustrated in Figs. 1, 2 and 3.
In [10,28,29] we analyzed the origin of the interference effects that are produced when concepts are combined, and we provided an explanation that we investigated further in [31].
Let us now take a closer look at the experimental data and how they are produced by interference. The exemplars for which the interference is a weakening effect, i.e. where $\mu(A \text{ or } B) < \frac{1}{2}(\mu(A) + \mu(B))$, equivalently $90^\circ \le \phi$ or $\phi \le -90^\circ$, are the following: Elderberry, Mustard, Lentils, Pumpkin, Tomato, Broccoli, Wheat, Yam, Rice, Raisin, Green Pepper, Peanut, Acorn and Olive. The exemplars for which interference is a strengthening effect, i.e. where $\frac{1}{2}(\mu(A) + \mu(B)) < \mu(A \text{ or } B)$, equivalently $-90^\circ < \phi < 90^\circ$, are the following: Mushroom, Root Ginger, Garlic, Coconut, Parsley, Almond, Chili Pepper, Black Pepper, and Apple. Let us consider the two extreme cases, viz. Elderberry, for which interference is the most weakening ($\phi = -113.2431^\circ$), and Mushroom, for which it is the most strengthening ($\phi = 18.6744^\circ$). For Elderberry, we have $\mu(A) = 0.1138$ and $\mu(B) = 0.0170$, which means that test subjects have classified Elderberry very strongly as Fruits (Apple is the most strongly classified Fruits, but Elderberry is next and close to it), and quite weakly as Vegetables. For Mushroom, we have $\mu(A) = 0.0140$ and $\mu(B) = 0.0545$, which means that test subjects have weakly classified Mushroom as Fruits and moderately as Vegetables. Let us suppose that $\frac{1}{2}(\mu(A) + \mu(B))$ were the value estimated by test subjects for 'Fruits or Vegetables'. In that case, the estimates for Fruits and Vegetables apart would be carried over in a determined way to the estimate for 'Fruits or Vegetables', just by applying this formula. This is indeed what would be the case if the decision process taking place in the human mind worked as if a classical particle passing through the Fruits hole or through the Vegetables hole hit the mind and left a spot at the location of one of the exemplars. More concretely, suppose that we ask subjects first to choose which of the questions they want to answer, Question A or Question B, and then, after they have made their choice, we ask them to answer this chosen question. This new experiment, which we could also indicate as Question A or Question B, would have $\frac{1}{2}(\mu(A) + \mu(B))$ as outcomes for the weights with respect to the different exemplars. In such a situation, it is indeed the mind of each of the subjects that chooses randomly between the Fruits hole and the Vegetables hole, subsequently following the chosen hole. There is no influence of one hole on the other, so that no interference is possible. However, in reality the situation is more complicated. When a test subject makes an estimate with respect to 'Fruits or Vegetables', a new concept emerges, namely the concept 'Fruits or Vegetables'. For example, in answering the question whether the exemplar Mushroom is a good example of 'Fruits or Vegetables', the subject will consider two aspects or contributions. The first is related to the estimation of whether Mushroom is a good example of Fruits and to the estimation of whether Mushroom is a good example of Vegetables, i.e. to estimates of each of the concepts separately. It is covered by the formula $\frac{1}{2}(\mu(A) + \mu(B))$. The second contribution concerns the test subject's estimate of whether or not Mushroom belongs to the category of exemplars that cannot readily be classified as Fruits or Vegetables. This is the class characterized by the newly emerged concept 'Fruits or Vegetables'. And as we know, Mushroom is a typical case of an exemplar that is not easy to classify as either Fruits or Vegetables.
That is why Mushroom, although only slightly covered by the formula $\frac{1}{2}(\mu(A) + \mu(B))$, has an overall high score as 'Fruits or Vegetables'. The effect of interference allows adding to $\frac{1}{2}(\mu(A) + \mu(B))$ the extra value resulting from the fact that Mushroom scores well as an exemplar that is not readily classified as Fruits or Vegetables. This explains why Mushroom receives a strengthening interference effect, which adds to the probability of it being chosen as a good example of 'Fruits or Vegetables'. Elderberry shows the contrary. The formula $\frac{1}{2}(\mu(A) + \mu(B))$ produces a score that is too high compared to the experimentally tested value of the probability of its being chosen as a good example of 'Fruits or Vegetables'. The interference effect corrects this, subtracting a value from $\frac{1}{2}(\mu(A) + \mu(B))$. This corresponds to the test subjects considering Elderberry 'not at all' to belong to a category of exemplars hard to classify as Fruits or Vegetables, but rather the contrary. As a consequence, with respect to the newly emerged concept 'Fruits or Vegetables', the exemplar Elderberry scores very low, and hence $\frac{1}{2}(\mu(A) + \mu(B))$ needs to be corrected by subtracting the second contribution, the quantum interference term. A similar explanation of the interference of Fruits and Vegetables can be put forward for all the other exemplars. The following is a general formulation of this: 'For two concepts A and B, with probabilities $\mu(A)$ and $\mu(B)$ for an exemplar to be chosen as a good example of A and of B, respectively, the interference effect allows taking into account the specific probability contribution for this exemplar to be chosen as a good example of the newly emerged concept "A or B", adding to or subtracting from the value $\frac{1}{2}(\mu(A) + \mu(B))$, which is the average of $\mu(A)$ and $\mu(B)$.'
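A quick numeric check (ours) of this explanation evaluates (3) with $c_k = 1$, which holds here since neither Elderberry nor Mushroom carries the special index $m$:

```python
import numpy as np

def mu_disjunction(mu_A, mu_B, phi_deg, c=1.0):
    # Eq. (3): average of the two probabilities plus the interference term.
    return 0.5 * (mu_A + mu_B) + c * np.sqrt(mu_A * mu_B) * np.cos(np.radians(phi_deg))

print(mu_disjunction(0.0140, 0.0545, 18.6744))    # Mushroom: ~0.0604, above the average 0.0343
print(mu_disjunction(0.1138, 0.0170, -113.2431))  # Elderberry: ~0.0480, below the average 0.0654
```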
To conclude, we observe that 'Fruits or Vegetables' is not the only case where quantum interference explains deviations from classically expected behavior. Various examples have been found, for disjunctions as well as for conjunctions of concepts [10].
A two-layered structure in human thought
The detection of quantum structures in cognition has led us to put forward the hypothesis that two specifically structured and superposed layers can be identified in human thought as a process [10, 31], as follows.
(i) A classical logical layer. The thought process in this layer is given form by an underlying classical logical conceptual process. The manifest process itself may be, and generally will be, indeterministic, but the indeterminism is due to a lack of knowledge about the underlying deterministic classical process. For this reason the process within the classical logical layer can be modeled by using a classical Kolmogorovian probability description.
(ii) A quantum conceptual layer. The thought process in this layer is given form under the influence of the totality of the surrounding conceptual landscape, where the different concepts figure as individual entities, also when they are combinations of other concepts, at variance with the classical logical layer where combinations of concepts figure as classical combinations of entities and not as individual entities. In this sense one can speak of a conceptual emergence taking place in this quantum conceptual layer, certainly so for combinations of concepts. Quantum conceptual thought has been identified in different domains of knowledge and science related to different, often paradoxically conceived, problems in these domains. The sorts of measurable quantities able to experimentally identify quantum conceptual thought have been different in these different domains, depending on which aspect of the conceptual landscape was most obvious or most important for the identification of the deviation from classically expected values of these quantities. For example, in a domain of cognitive science where representations of concepts are studied, and hence where concepts and combinations of concepts, and relations of items, exemplars, instances or features with concepts are considered, measurable quantities such as 'typicality', 'membership', 'similarity' and 'applicability' have been studied and used to experimentally put into evidence the deviation from what classically would be expected for the values of these quantities. In decision theory, measurable quantities such as 'representativeness', 'qualitative likelihood', 'similarity' and 'resemblance' have played this role. The quantum conceptual thought process is indeterministic in essence, i.e. there is not necessarily an underlying deterministic process independent of the context. Hence, if analyzed more deeply with the aim of finding more deterministic sub-processes, unavoidably effects of context will come into play. Since all concepts of the interconnected web that forms the landscape of concepts, and combinations of them, contribute as individual entities to the influences reigning in this landscape, and more so since this happens dynamically in an environment where they are all, structurally speaking, quantum entangled, the nature of quantum conceptual thought contains aspects that we strongly identify as holistic and synthetic. However, the quantum conceptual thought process is not unorganized or irrational. Quantum conceptual thought is as firmly structured as classical logical thought, though in a different way. We believe that the reason why science has hardly uncovered the structure of quantum conceptual thought is that it has been believed to be intuitive, associative, irrational, etc., meaning 'rather unstructured'. As a consequence of its basic features, an idealized version of this quantum conceptual thought process can be modeled as a quantum mechanical process.
The assumed existence of a quantum conceptual layer in the mind fits in with some impressive achievements that have recently been obtained in neuroscience [30], as we will see in the next section.
6 Quantum cognition and the structure of the brain

A traditional view of the relation between brain and mind is based on the neuroscience paradigm [32], according to which the architecture of the brain is determined by connections between neurons, their inhibitory/excitatory character, and the strength of their connections. Following this view, roughly speaking, the brain can be seen as a parallel distributed computer containing many billions of neurons, that is, elementary processors interconnected into a complex neural network. In this architecture, the mind and the brain constitute one single unit, which is characterized by a complementary dualism. The mind is in this approach understood as a program carried out by the brain, the program being specified by the neural network architecture. Distributed representations of cognitive structures are studied in such an approach (see, e.g., holographic reduced representations [33]-[36]).
Although the holographic approach is inspired by waves and interference, it is not able to model the complex type of interference that quantum entities undergo. By considering the values of the interference angles of the pattern we obtain (see equation (12)), it can be seen that the modeling for the concept 'Fruits or Vegetables' is intrinsically quantum mechanical, i.e. it cannot be reduced to interference of classical waves. This means that, although along the same lines as the holographic memory view [33], our approach can introduce a way to consider and study the brain as a quantum mechanical interference-producing entity. Concretely, we produce a projection of a multi-dimensional complex Hilbert space (25-dimensional for the 'Fruits or Vegetables' case) into three-dimensional real space, which is the environment where the bio-mass of the brain is located.
In this respect it is worth mentioning a recent finding [30], where relationships of adjacency and crossing between cerebral fiber pathways in primates and humans were analyzed by using diffusion magnetic resonance imaging. The cerebral fiber pathways were found to form a rectilinear three-dimensional grid continuous with the three principal axes of development. Cortico-cortical pathways formed parallel sheets of interwoven paths in the longitudinal and medio-lateral axes, in which major pathways were local condensations. Cross-species homology was strong and showed emergence of complex gyral connectivity by continuous elaboration of this grid structure. This architecture naturally supports functional spatiotemporal coherence, developmental path-finding, and incremental rewiring with correlated adaptation of structure and function in cerebral plasticity and evolution [30]. The three-dimensional layered structure sketched above calls into question the 'neural network' modeling of the brain, together with some aspects of the neuroscience paradigm and of the brain/mind relation. Such a highly mathematically structured grid form would be much closer to what one expects as an ideal medium for interference than is the case for the structure of a traditional network.
At first sight it might seem that the layered structures that have been detected [30] are too simple to give rise to complex cognition, even if interference is allowed to play a prominent role, but that is misleading. Indeed, one should not look upon the brain as 'a container of complex cognition', but rather as 'the canvas for the potentiality of emergence of such complex cognition'. That makes all the difference. Indeed, we know how the rather simple mathematical structures of superposition in a linear vector space and of the tensor product of linear vector spaces give rise to both emergence and entanglement in quantum mechanics. There too, this mathematical structure plays the role of a canvas, where the emergent and entangled states can find a seat to be realized. This is exactly what the role of the recently detected grid could be: due to its rather simple mathematical structure, at least compared to the structure of a network, it could make available in a mathematically systematic way the canvas where emergent states of new concepts can find their seat. This is then a mechanism fundamentally different from what one expects in networks, where 'new connections are only made when they are needed'. Structures that have generative power can shape 'empty space' for potentiality and for the 'creation of the new', hence emergence can take place in a much more powerful way. Of course, there will be a bias coming from the generating structures, which is a drawback compared to the network way. This bias could be exactly an explanation for the functioning of the human brain leading to automated aspects of conceptual reasoning such as the 'disjunction and conjunction effects'. The above analysis is highly relevant for representations of genuine cognitive models in technology, for example as attempted in artificial intelligence and robotics [37]-[39].
Table 1: Interference data for the concepts A = Fruits and B = Vegetables. The probability of a person choosing one of the exemplars as an example of Fruits (respectively, of Vegetables) is given by $\mu(A)$ (respectively, $\mu(B)$) for each of the exemplars. The probability of a person choosing one of the exemplars as an example of 'Fruits or Vegetables' is $\mu(A \text{ or } B)$ for each of the exemplars. The classical probability would be given by $\frac{\mu(A)+\mu(B)}{2}$, and $\phi_k$ is the quantum phase angle provoking the quantum interference effect.

Figure 1: The probabilities $\mu(A)_k$ of a person choosing the exemplar $k$ as a 'good example' of Fruits are fitted into a two-dimensional quantum wave function $\psi_A(x, y)$. The numbers are placed at the locations of the different exemplars with respect to the Gaussian probability distribution $|\psi_A(x, y)|^2$. This can be seen as a light source shining through a hole centered on the origin, illuminating the regions where the different exemplars are located. The brightness of the light source in a specific region corresponds to the probability that this exemplar will be chosen as a 'good example' of Fruits.

Figure 2: The probabilities $\mu(B)_k$ of a person choosing the exemplar $k$ as an example of Vegetables are fitted into a two-dimensional quantum wave function $\psi_B(x, y)$. The numbers are placed at the locations of the different exemplars with respect to the probability distribution $|\psi_B(x, y)|^2$. As in Fig. 1, it can be seen as a light source shining through a hole centered on point 21, where Broccoli is located. The brightness of the light source in a specific region corresponds to the probability that this exemplar will be chosen as a 'good example' of Vegetables.

Figure 3: The probabilities $\mu(A \text{ or } B)_k$ of a person choosing the exemplar $k$ as an example of 'Fruits or Vegetables' are fitted into the two-dimensional quantum wave function $\frac{1}{\sqrt{2}}(\psi_A(x, y) + \psi_B(x, y))$, which is the normalized superposition of the wave functions in Figs. 1 and 2. The numbers are placed at the locations of the different exemplars with respect to the probability distribution $\frac{1}{2}|\psi_A(x, y) + \psi_B(x, y)|^2 = \frac{1}{2}(|\psi_A(x, y)|^2 + |\psi_B(x, y)|^2) + |\psi_A(x, y)\psi_B(x, y)|\cos\phi(x, y)$, where $\phi(x, y)$ is the quantum phase difference at $(x, y)$. The values of $\phi(x, y)$ are given in Tab. 1 for the locations of the different exemplars. The interference pattern is clearly visible.

Figure 4: A three-dimensional representation of the interference landscape of the concept 'Fruits or Vegetables' as shown in Fig. 3. Exemplars are represented by little green balls, and the numbers refer to the numbering of the exemplars in Tab. 1 and in Figs. 1, 2 and 3.

Figure 5: Probabilities $\frac{1}{2}(\mu(A)_k + \mu(B)_k)$, which are the probability averages for Fruits and Vegetables shown in Figs. 1 and 2. This would be the resulting pattern in case $\phi(x, y) = 90^\circ$ for all exemplars. It is called the classical pattern for the situation, since it is the pattern that, without interference, results from a situation where classical particles are sent through two slits. These classical values for all exemplars are given in Tab. 1.

Figure 6: A typical interference pattern of a quantum two-slit situation with slits A and B. The 'A open, B closed' curve represents the probability of detection of the quantum entity in case only slit A is open; the 'B open, A closed' curve reflects the situation where only slit B is open; and the 'A and B open, classical' curve is the average of both. The 'A and B open, quantum' curve represents the probability of detection of the quantum entity if both slits are open.
References

[1] D. Aerts and S. Aerts, "Applications of quantum statistics in psychological studies of decision processes," Found. Sci., vol. 1, pp. 85-97, 1995.
[2] D. Widdows, "Orthogonal negation in vector spaces for modelling word-meanings and document retrieval," in Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, 2003, pp. 136-143.
[3] K. van Rijsbergen, The Geometry of Information Retrieval, Cambridge, UK: Cambridge University Press, 2004.
[4] D. Aerts and M. Czachor, "Quantum aspects of semantic analysis and symbolic artificial intelligence," J. Phys. A-Math. Gen., vol. 37, pp. L123-L132, 2004.
[5] D. Aerts and L. Gabora, "A theory of concepts and their combinations I & II," Kybernetes, vol. 34, pp. 167-191 & 192-221, 2005.
[6] P. D. Bruza and R. J. Cole, "Quantum logic of semantic space: An explanatory investigation of context effects in practical reasoning," in We Will Show Them: Essays in Honour of Dov Gabbay, S. Artemov et al., Eds., College Publications, 2005.
[7] D. Widdows, Geometry and Meaning, CSLI Publications, IL: University of Chicago Press, 2006.
[8] J. R. Busemeyer, Z. Wang, and J. T. Townsend, "Quantum dynamics of human decision-making," J. Math. Psych., vol. 50, pp. 220-241, 2006.
[9] P. D. Bruza, K. Kitto, D. McEvoy, and C. McEvoy, "Entangling words and meaning," in Proceedings of the Second Quantum Interaction Symposium, Oxford, UK: Oxford University Press, 2008, pp. 118-124.
[10] D. Aerts, "Quantum structure in cognition," J. Math. Psych., vol. 53, pp. 314-348, 2009.
[11] D. Aerts, M. Czachor, and B. De Moor, "Geometric analogue of holographic reduced representation," J. Math. Psych., vol. 53, pp. 389-398, 2009.
[12] P. D. Bruza, K. Kitto, D. Nelson, and C. McEvoy, "Extracting spooky-activation-at-a-distance from considerations of entanglement," in Proceedings of QI 2009-Third International Symposium on Quantum Interaction, P. D. Bruza, D. Sofge, W. Lawless, C. J. van Rijsbergen, and M. Klusch, Eds., LNCS vol. 5494, Berlin, Heidelberg: Springer, 2009, pp. 71-83.
[13] E. M. Pothos and J. R. Busemeyer, "A quantum probability explanation for violations of 'rational' decision theory," Proc. Roy. Soc. B, vol. 276, pp. 2171-2178, 2009.
[14] A. Y. Khrennikov and E. Haven, "Quantum mechanics and violations of the Sure-Thing Principle: The use of probability interference and other concepts," J. Math. Psych., vol. 53, pp. 378-388, 2009.
[15] D. Aerts, B. D'Hooghe, and E. Haven, "Quantum experimental data in psychology and economics," Int. J. Theor. Phys., vol. 49, pp. 2971-2990, 2010.
[16] D. Aerts and S. Sozzo, "Quantum structure in cognition: Why and how concepts are entangled," in Proceedings of QI 2011-Fourth International Symposium on Quantum Interaction, D. Song, M. Melucci, and I. Frommholz, Eds., LNCS vol. 7052, Berlin, Heidelberg: Springer, 2011, pp. 116-127.
[17] D. Aerts, M. Czachor, and S. Sozzo, "Quantum interaction approach in cognition, artificial intelligence and robotics," in Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies (ICQNM 2011), V. Privman and V. Ovchinnikov, Eds., IARIA, 2011, pp. 35-40.
[18] D. Aerts, L. Gabora, S. Sozzo, and T. Veloz, "Quantum interaction approach in cognition, artificial intelligence and robotics," in Proceedings of the Fifth International Conference on Quantum, Nano and Micro Technologies (ICQNM 2011), V. Privman and V. Ovchinnikov, Eds., IARIA, 2011, pp. 57-62.
[19] D. Aerts and S. Sozzo, "A general modeling scheme for contextual emergent entangled interfering entities," submitted to the Proceedings of QI 2012-Fifth International Symposium on Quantum Interaction, 2012.
[20] J. A. Hampton, "Disjunction of natural concepts," Memory & Cognition, vol. 16, pp. 579-591, 1988.
[21] T. Young, "On the theory of light and colours," Phil. Trans. Roy. Soc., vol. 92, pp. 12-48, 1802. Reprinted in part in: H. Crew, Ed., The Wave Theory of Light, New York, 1990.
[22] L. de Broglie, "Ondes et quanta," Comptes Rendus, vol. 177, pp. 507-510, 1923.
[23] E. Schrödinger, "Quantisierung als Eigenwertproblem (Erste Mitteilung)," Ann. Phys., vol. 79, pp. 361-376, 1926.
[24] R. P. Feynman, The Feynman Lectures on Physics, New York: Addison-Wesley, 1965.
[25] C. Jönsson, "Electron diffraction at multiple slits," Am. J. Phys., vol. 4, pp. 4-11, 1974.
[26] M. Arndt, O. Nairz, J. Vos-Andreae, C. Keller, G. van der Zouw, and A. Zeilinger, "Wave-particle duality of C60 molecules," Nature, vol. 401, pp. 680-682, 1999.
[27] D. Aerts, "Quantum particles as conceptual entities. A possible explanatory framework for quantum theory," Found. Sci., vol. 14, pp. 361-411, 2009.
[28] D. Aerts, "Quantum interference and superposition in cognition: Development of a theory for the disjunction of concepts," in Worldviews, Science and Us: Bridging Knowledge and Its Implications for Our Perspectives of the World, D. Aerts, B. D'Hooghe, and N. Note, Eds., Singapore: World Scientific, 2011, pp. 169-211.
[29] D. Aerts, "General quantum modeling of combining concepts: A quantum field model in Fock space," archive reference and link: http://uk.arxiv.org/abs/0705.1740, 2007.
[30] V. J. Weeden, D. L. Rosene, R. Wang, G. Dai, F. Mortazavi, P. Hagmann, J. H. Kaas, and W. I. Tseng, "The geometric structure of the brain fiber pathways," Science, vol. 335, pp. 1628-1634, 2012.
[31] D. Aerts and B. D'Hooghe, "Classical logical versus quantum conceptual thought: Examples in economy, decision theory and concept theory," Lecture Notes in Artificial Intelligence, vol. 5494, pp. 128-142, 2009.
[32] J. L. M. McClelland, D. E. Rumelhart, and the PDP Research Group, Eds., Parallel Distributed Processing: Explorations in the Microstructure of Cognition, vols. 1 and 2, Cambridge, MA: The MIT Press, 1986.
[33] D. Gabor, "Holographic model for temporal recall," Nature, vol. 217, pp. 1288-1289, 1968.
[34] K. H. Pribram, Languages of the Brain: Experimental Paradoxes and Principles in Neuropsychology, New York, NY: Prentice Hall, 1971.
[35] P. Kanerva, "Large patterns make great symbols: An example of learning from example," Hybrid Neural Systems, pp. 194-203, 1998.
[36] T. Plate, Holographic Reduced Representation: Distributed Representation for Cognitive Structures, Stanford, CA: CSLI Publications, 2003.
[37] R. Penrose, The Emperor's New Mind, Oxford, UK: Oxford University Press, 1990.
[38] P. Benioff, "Quantum robots and environments," Phys. Rev. A, vol. 58, no. 2, pp. 893-904, 1998.
[39] D. Dong, C. Chen, C. Zhang, and Z. Chen, "Quantum robots: Structure, algorithms and applications," Robotica, vol. 24, pp. 513-521, 2006.
| [] |
[
"Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing",
"Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing"
] | [
"Sanchit Sinha \nDepartment of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA\n",
"Hanjie Chen \nDepartment of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA\n",
"Arshdeep Sekhon \nDepartment of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA\n",
"Yangfeng Ji yangfeng@virginia.edu \nDepartment of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA\n",
"Yanjun Qi \nDepartment of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA\n"
] | [
"Department of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA",
"Department of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA",
"Department of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA",
"Department of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA",
"Department of Computer Science\nUniversity of Virginia Charlottesville\nVAUSA"
] | [
"Online"
] | Interpretability methods like INTEGRATED GRADIENT and LIME are popular choices for explaining natural language model predictions with relative word importance scores. These interpretations need to be robust for trustworthy NLP applications in high-stake areas like medicine or finance. Our paper demonstrates how interpretations can be manipulated by making simple word perturbations on an input text. Via a small portion of word-level swaps, these adversarial perturbations aim to make the resulting text semantically and spatially similar to its seed input (therefore sharing similar interpretations). Simultaneously, the generated examples achieve the same prediction label as the seed yet are given a substantially different explanation by the interpretation methods. Our experiments generate fragile interpretations to attack two SOTA interpretation methods, across three popular Transformer models and on three different NLP datasets. We observe that the rank order correlation drops by over 20% when less than 10% of words are perturbed on average. Further, rank-order correlation keeps decreasing as more words get perturbed. Furthermore, we demonstrate that candidates generated from our method have good quality metrics. Our code is available at: github.com/QData/TextAttack-Fragile-Interpretations. | 10.18653/v1/2021.blackboxnlp-1.33 | [
"https://www.aclanthology.org/2021.blackboxnlp-1.33.pdf"
] | 236,976,089 | 2108.04990 | fa6dea0bcd97211f65feb1e04851e6c71d764724 |
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
November 11, 2021
Sanchit Sinha
Department of Computer Science
University of Virginia
Charlottesville, VA, USA
Hanjie Chen
Department of Computer Science
University of Virginia
Charlottesville, VA, USA
Arshdeep Sekhon
Department of Computer Science
University of Virginia
Charlottesville, VA, USA
Yangfeng Ji yangfeng@virginia.edu
Department of Computer Science
University of Virginia
Charlottesville, VA, USA
Yanjun Qi
Department of Computer Science
University of Virginia
Charlottesville, VA, USA
Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing
Online
Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, November 11, 2021
Interpretability methods like INTEGRATED GRADIENT and LIME are popular choices for explaining natural language model predictions with relative word importance scores. These interpretations need to be robust for trustworthy NLP applications in high-stake areas like medicine or finance. Our paper demonstrates how interpretations can be manipulated by making simple word perturbations on an input text. Via a small portion of word-level swaps, these adversarial perturbations aim to make the resulting text semantically and spatially similar to its seed input (therefore sharing similar interpretations). Simultaneously, the generated examples achieve the same prediction label as the seed yet are given a substantially different explanation by the interpretation methods. Our experiments generate fragile interpretations to attack two SOTA interpretation methods, across three popular Transformer models and on three different NLP datasets. We observe that the rank order correlation drops by over 20% when less than 10% of words are perturbed on average. Further, rank-order correlation keeps decreasing as more words get perturbed. Furthermore, we demonstrate that candidates generated from our method have good quality metrics. Our code is available at: github.com/QData/TextAttack-Fragile-Interpretations.
Introduction
Recently, the use of natural language processing (NLP) has gained popularity in many security-relevant tasks like fake news identification (Zhou et al., 2019), authorship identification (Okuno et al., 2014), toxic content detection (Jigsaw, 2017), and text-based automated privacy policy understanding (Harkous et al., 2018). Since interpretations of NLP predictions have become necessary building blocks of the SOTA deep NLP workflow, explanations have the potential to mislead human users into trusting a problematic interpretation. However, there has been little analysis of the reliability and robustness of these explanation techniques, especially in high-stake settings, making their utility for critical applications unclear.
Research has shown that it is possible to disrupt and even manipulate interpretations in deep neural networks (Ghorbani et al., 2019; Dombrowski et al., 2019). The core idea in this literature centers around "fragile interpretations". (Ghorbani et al., 2019) defined an interpretation as fragile if, for a given input, it is possible to generate a perturbed input that achieves the same prediction label as the seed, yet is given a substantially different interpretation. Fragility limits how much we can trust and learn from specific interpretations. An adversary exploiting "fragile interpretations" could manipulate the input to draw attention away from relevant words or onto desired features. Such input manipulation might be especially hard to detect because the actual labels have not changed.
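For concreteness, a minimal sketch of this fragility criterion could look as follows; `model` and `explain` are assumed interfaces rather than a specific library's API, and the correlation threshold is an arbitrary illustration.

```python
from scipy.stats import spearmanr

def is_fragile(model, explain, x, x_adv, corr_threshold=0.5):
    # Fragility per (Ghorbani et al., 2019): unchanged label, diverging explanation.
    same_label = model.predict(x) == model.predict(x_adv)
    rho, _ = spearmanr(explain(model, x), explain(model, x_adv))
    return same_label and rho < corr_threshold
```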
The literature includes two relevant groups: (1) works that conduct model manipulations (Slack et al., 2019) (details in Sec. 2), and (2) works that manipulate input samples (Ghorbani et al., 2019). There has been little attention paid to studying fragile interpretations via input manipulation in deep NLP.
In this paper, we propose a simple algorithm, "ExplainFooler", that can make small adversarial perturbations on text inputs and demonstrate the fragility of interpretations. We focus on optimizing two objective metrics, "L2 Norm" and a proposed "Delta LOM", searching for small word-swap-based input manipulations that produce misleading interpretations, and using semantic-oriented constraints to constrain the manipulations. Figure 3 provides one example perturbation process. In summary, this paper provides the following contributions:
• Our input perturbation optimizes to increase an objective metric ("L2 Norm" or "Delta LOM") that measures the difference between the original and generated interpretations. The LOM score captures the approximate center "position" of an interpretation and summarizes it to a scalar.
• We propose an effective algorithm, "ExplainFooler", to optimize the objective metric via an iterative procedure. Our algorithm generates a series of increasingly perturbed text inputs such that their explanations are significantly different from the original while preserving predictions.
• Empirically, we show that it is possible to find perturbed text examples to fool interpretations by INTEGRATED GRADIENT and LIME, even on NLP models that are relatively more robust. The approximate process and results of word perturbation using our approach are detailed in Figure 1.
Related Work
Interpretation Methods: Several interpretation methods have been proposed (Shrikumar et al., 2017; Li et al., 2015; Bach et al., 2015) to calculate feature importance scores. Two well-known methods in this area are Integrated Gradients (IG) (Sundararajan et al., 2017) and Local Interpretable Model-agnostic Explanations (LIME) (Ribeiro et al., 2016b). IG computes the scores by summing up the gradients along a path from the baseline to the input in a fixed number of steps, subsequently multiplied by the input itself. IG overcomes the saturation problem discussed in (Shrikumar et al., 2017; Sundararajan et al., 2017). On the other hand, LIME is a completely black-box approach which explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction, trained on perturbations generated around the input.
Fragile Interpretations: More recently, several works have discussed the robustness of these interpretations. Studies have demonstrated that the generated interpretations are not robust and can be easily manipulated due to the high dimensionality of networks (Ghorbani et al., 2019; Dombrowski et al., 2019; Slack et al., 2019). Multiple other works have tried to fix the problem by making interpretations robust (Lakkaraju et al., 2020; Rieger and Hansen, 2020). (Wang et al., 2020) demonstrated that it is possible to introduce a new model over the original and alter gradients, to fool gradient-based interpretation methods. Similarly, (Slack et al., 2019) showed that black-box interpretation methods can also be fooled by adding an adversarial classifier component. More recently, (Zafar et al., 2021) demonstrated empirically that interpretability methods produce varying results on the same models when differently initialized.
Adversarial Examples that Fool NLP Predictions: Adversarial examples are inputs to a predictive machine learning model that are maliciously designed to fool the model's predictions (Goodfellow et al., 2014). Multiple recent works have focused on applying the concept of adversarial examples to language inputs, including (1) attacks by character substitution (Ebrahimi et al., 2017; Gao et al., 2018; Li et al., 2018); (2) attacks by paraphrase (Ribeiro et al., 2018; Iyyer et al., 2018); (3) attacks by synonym substitution (Alzantot et al., 2018; Kuleshov et al., 2018; Papernot et al., 2016); (4) attacks by word insertion or removal (Liang et al., 2017; Samanta and Mehta, 2017); (5) attacks by limiting L_p distance in a latent embedding space (Zhao et al., 2017). Our proposed algorithm is closely connected to the TextFooler algorithm (Jin et al., 2020), which searches for input perturbations to achieve misclassification. Differently, we optimize the "L2 Norm" and "Location of Mass (LOM)" objectives directly on the input space for fragile explanations.
Proposed Method
In this section, we present our algorithm to generate perturbed sentences that demonstrate fragile interpretations. First, we propose the "Location of Mass (LOM)" and L2 Norm metrics, followed by a discussion of the search strategy used to optimize these objective metrics. Subsequently, we discuss the interpretation method choices and end with the final candidate selection procedure and pseudocode for our algorithm (Algorithm 1). We denote a text input as x and its word importance score vector (from a specific interpretation strategy on a particular NLP model) using the notation I.
Difference Metrics on Interpretation
To quantify the difference between two interpretations, we propose two objective metrics: "Delta LOM" and "L2 Norm". Both are divergence-like: the higher the metric, the more different the interpretations.
"Location of Mass (LOM)" Score
First, we propose a metric inspired by (Ghorbani et al., 2019), which provides a quantifiable "position" for the interpretations of a sentence. We define the "Location of Mass (LOM)" score as:

$$\mathrm{LOM}(I) = \frac{\sum_{t=0}^{n-1} i_t \cdot t}{\sum_{t=0}^{n-1} i_t} \quad (1)$$

Here n is the length of the sentence (including the special start/end tokens), and i_t is the interpretability score assigned to the token at index t.
We then calculate the "Delta LOM" metric as the absolute difference between the LOM scores of two interpretations I_1 and I_2:

$$\Delta\mathrm{LOM}(I_1, I_2) = |\mathrm{LOM}(I_1) - \mathrm{LOM}(I_2)| \quad (2)$$
The intuition behind this metric comes from the fact that changing the approximate position of the "center" of interpretations changes the relative position and magnitudes of interpretations. This observation is demonstrated in Figure 3.
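To make the two formulas concrete, here is a minimal sketch (not taken from our released code) that computes LOM and Delta LOM from plain lists of per-token importance scores; it assumes the scores are non-negative so their sum is non-zero:

```python
import numpy as np

def lom(scores):
    # Location of Mass (Eq. 1): importance-weighted average token position.
    # Assumes non-negative scores so the denominator does not vanish.
    scores = np.asarray(scores, dtype=float)
    positions = np.arange(len(scores))
    return float((scores * positions).sum() / scores.sum())

def delta_lom(scores_1, scores_2):
    # Delta LOM (Eq. 2): absolute difference of the two LOM scores.
    return abs(lom(scores_1) - lom(scores_2))

# Shifting importance mass from the 2nd token to the 4th moves the "center".
print(delta_lom([0.1, 0.7, 0.1, 0.1], [0.1, 0.1, 0.1, 0.7]))  # 1.2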
L2 Norm Metric
We also propose to use a standard L2 Norm to measure the difference between two interpretations. Mathematically, it is computed as:

$$\mathrm{L2Norm}(I_1, I_2) = \|I_1 - I_2\|_2 \quad (3)$$
The L2 Norm quantifies the extent of the difference: the higher the L2 Norm, the greater the difference between the patterns of the two interpretations.
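A corresponding sketch for this metric, assuming the two score vectors are aligned token-for-token:

```python
import numpy as np

def l2_norm(scores_1, scores_2):
    # Eq. 3: Euclidean distance between two word-importance vectors.
    return float(np.linalg.norm(np.asarray(scores_1) - np.asarray(scores_2)))
```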
Searching for Word-level Perturbations
Our objective is to perturb a seed input x into a slightly modified text x_adv, so that ∆LOM or L2Norm is maximized under a set of constraints.
First, we rank each word of an input sentence in order of its importance to the model's predictions. This is done with the leave-one-out approach (Li et al., 2016), which removes each word from the sentence one at a time and measures the change in prediction values, ranking the words that produce the greatest change as most important. Subsequently, we search in decreasing order of word importance, substituting each word with its k closest nearest neighbors according to counter-fitted synonym embeddings (Mrksic et al., 2016). For every subsequent word replacement, the interpretation is recalculated using the victim interpretation method we try to attack.
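The leave-one-out ranking can be sketched as follows; `predict_proba` is a hypothetical wrapper that maps a list of words to the victim model's class probabilities, and is not part of any specific library:

```python
import numpy as np

def rank_by_importance(words, label, predict_proba):
    # Leave-one-out (Li et al., 2016): remove each word in turn and rank
    # words by how much the probability of the predicted label drops.
    base = predict_proba(words)[label]
    drops = [base - predict_proba(words[:i] + words[i + 1:])[label]
             for i in range(len(words))]
    return list(np.argsort(drops)[::-1])  # most important indices first
```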
Ensuring Constraints
We enforce the following five constraints on each perturbed candidate to ensure candidates do not lose the linguistic structure and approximate semantic meaning of the seed input (a sketch of these checks follows the list).
• Repeat Modification: Stops the same word from being perturbed more than once.
• Stop Word Modification: Excludes pre-defined stop words from being perturbed.
• Word Embedding Distance: Swaps the original word only with words within a particular embedding distance using counter-fitted embeddings.
• Part of Speech: Replaces the original word only with words from the same part of speech.
• Sentence Embedding: Ensures the difference in the Universal Sentence Embedding is less than a pre-defined threshold (Cer et al., 2018).
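A minimal sketch of these checks is given below. The helper functions (`word_sim` for counter-fitted embedding similarity, `sent_sim` for Universal Sentence Encoder similarity, `pos_of` for POS tagging) and the two thresholds are illustrative placeholders, not the exact components or values of our implementation:

```python
def passes_constraints(orig_words, cand_words, swap_idx, perturbed_so_far,
                       stop_words, word_sim, sent_sim, pos_of,
                       min_word_sim=0.5, min_sent_sim=0.84):
    # word_sim, sent_sim and pos_of are hypothetical helpers; the
    # thresholds are illustrative rather than the paper's exact values.
    if swap_idx in perturbed_so_far:                    # repeat modification
        return False
    if orig_words[swap_idx].lower() in stop_words:      # stop-word modification
        return False
    o, c = orig_words[swap_idx], cand_words[swap_idx]
    if word_sim(o, c) < min_word_sim:                   # word embedding distance
        return False
    if pos_of(o) != pos_of(c):                          # part of speech
        return False
    return sent_sim(orig_words, cand_words) >= min_sent_sim  # sentence embedding
```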
Victim Interpretation Choices
Integrated Gradient: We calculate INTEGRATED GRADIENT (Sundararajan et al., 2017) interpretations of NLP models using the open-source package Captum (Kokhlikyan et al., 2020), which provides accurate implementations of various interpretation methods. We use the popular INTEGRATED GRADIENT algorithm to calculate importance scores on the embedding space of the models. Once the interpretations are calculated, they are summed along the hidden dimension to derive word importance scores. Subsequently, the ∆LOM and L2 Norm scores of each candidate perturbation are calculated against the original input's interpretation.
LIME: The LIME interpretations are calculated using the official LIME code provided by (Ribeiro et al., 2016a). We normalize the LIME scores by dividing the vector by its L2-norm. Subsequently, the ∆LOM and L2 Norm scores of each candidate perturbation are calculated against the original input's interpretation.
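A minimal sketch of the Integrated Gradients computation, assuming a HuggingFace sequence-classification model whose forward pass returns an object with a `.logits` field:

```python
import torch
from captum.attr import LayerIntegratedGradients

def ig_word_scores(model, input_ids, pad_token_id, target_label, n_steps=50):
    # Attribute on the embedding layer, with an all-<PAD> reference baseline,
    # then sum over the hidden dimension to get one score per token.
    forward = lambda ids: model(ids).logits
    lig = LayerIntegratedGradients(forward, model.get_input_embeddings())
    baseline = torch.full_like(input_ids, pad_token_id)
    attrs = lig.attribute(input_ids, baselines=baseline,
                          target=target_label, n_steps=n_steps)
    return attrs.sum(dim=-1).squeeze(0)  # shape: (sequence_length,)
```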
Finding the ideal candidate
Once we obtain all the candidates and their metric scores, with every candidate achieving the same prediction label as the original, we store the best candidates for each number m of words perturbed. This gives us a list of candidates for each level of word perturbation and the associated change in objective metric scores. Next, for each level, the candidate with the highest metric score against the original is chosen. Finally, we convert the number of perturbed words into a ratio with respect to the input's length, to account for varying sentence lengths and obtain a normalized measure. The ratio is limited to 50% because once more than half the words are perturbed, the sentence starts losing its semantic meaning. The complete selection process is schematically detailed in Figure 2.
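The selection step can be sketched as follows, assuming `candidates` is a list of `(num_words_perturbed, metric_score, sentence)` triples that have already passed the constraint and same-label checks:

```python
def select_best_candidates(candidates, sentence_length, max_ratio=0.5):
    # For each perturbation level m, keep the candidate with the highest
    # objective metric; then key the result by the perturbation ratio.
    best = {}
    for m, score, sent in candidates:
        if m / sentence_length > max_ratio:  # beyond 50%, semantics degrade
            continue
        if m not in best or score > best[m][0]:
            best[m] = (score, sent)
    return {m / sentence_length: sent for m, (score, sent) in best.items()}
```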
Algorithm
Algorithm 1 "ExplainFooler" provides pseudocode to compute and select a list of candidates that can induce fragile explanations. Our implementation adapts and builds on top of the open-source package TextAttack (Morris et al., 2020).
Experiments
Data Summary
The experiments are conducted on three different datasets for the text classification task. Experiments are conducted on the validation set for SST-2 (Socher et al., 2013), the test set for AG News (Zhang et al., 2015) and the test set for the IMDB dataset (Maas et al., 2011). We select the first 500 sentences from the SST-2 and AG News datasets and 100 sentences from the test set of the IMDB dataset to run our experiments. We discard sentences with just 2 words or less.

Algorithm 1: The "ExplainFooler" algorithm
  Result: A - list of candidate sentences ordered by number of words perturbed from the original
  for each sentence in dataset:
      A ← empty; S ← original sentence
      I_0 ← InterpretMethod(S)
      P ← ordered list of important words (LOO)
      while <= 50% of words perturbed from P do:
          w ← P[0]; C ← empty
          while possible perturbations exist do:
              c ← perturb S at w and get candidate
              if constraints pass and prediction label is same as S then:
                  I ← InterpretMethod(c)
                  ∆diff ← diff(I_0, I); C ← C ∪ {(∆diff, c)}
              else: continue
          A ← A ∪ {c with max ∆diff in C}
          P ← remove P[0]
• SST-2: The Stanford Sentiment Treebank-2 dataset for movie review classification. It has two classes: positive and negative. Experiments are conducted on the first 500 sentences of the validation set.
• AG News: A collection of raw news articles belonging to 4 different classes: World, Sports, Science/Technology and Business. Experiments are conducted on the first 500 sentences of the test set.
• IMDB: The IMDB dataset for binary sentiment classification, containing a set of highly polar movie reviews. Experiments are conducted on the first 500 sentences of the test set, except for LIME, where only 100 sentences are used due to the very high computation time caused by the long average sentence length.
Interpretability Parameters
IG: As Integrated Gradients is a gradient-based approach and requires a reference baseline, we compute the attributions on the embedding space and set the reference baseline to the special token <PAD>, which is reserved in transformers as a special character. The step size for Integrated Gradients was chosen as 50, i.e., the gradients were summed over 50 steps from the reference baseline to the input.
LIME: The number of perturbations for LIME was chosen as 500, and the maximum number of top-k words was chosen as 512, the truncation limit for all the models.
Perturbation Parameters
We choose the number of nearest neighbours as 50 for swapping words, to limit the number of candidates. The maximum embedding cosine similarity between sentences was set to 0.5 to ensure sentences do not lose their semantic meaning.
Under the Hood
Pre-processing: All sentences with fewer than 2 words are removed from all datasets, because word perturbations do not exist in some such cases. In other cases, very short sentences can show a very large difference in rank correlation, which can spuriously decrease the evaluation metrics. Each sentence in all datasets is also converted to lower case.
Fixing Tokenizations: As pre-trained tokenizers for transformer models use a longest-match subword lookup vocabulary, many words in candidate sentences are tokenized in an unexpected manner. This changes the length of the token list, which in turn changes the length of the interpretations. To alleviate this problem, we test two distinct approaches (listed below) to combine the unnaturally tokenized sub-words into their original form.
• Average: The first approach combines all the tokens prefixed by a set character (## in the case of DistilBERT) into one single word and assigns the average value of the tokens to the combined word.
• Max: The second approach combines all the tokens prefixed by a set character (## in the case of DistilBERT) into one single word and assigns the absolute-maximum value, with its sign, to the combined word.
Upon careful review, we utilize the second approach for our experiments. This is because, in uncommon cases where some sub-tokens hold the opposite polarity to the rest of the word, averaging results in a 'diluted' value for the original word. An example of the effectiveness of the 'Max' approach is given in Figure 4.
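A sketch of the 'Max' merging rule for WordPiece-style tokenizers (the `##` continuation prefix is DistilBERT's; other tokenizers use different markers):

```python
def merge_subword_scores(tokens, scores, prefix="##"):
    # Combine sub-tokens into words, keeping the score with the largest
    # magnitude (sign preserved), i.e., the 'Max' approach from Sec. 4.4.
    words, merged = [], []
    for tok, s in zip(tokens, scores):
        if tok.startswith(prefix) and words:
            words[-1] += tok[len(prefix):]
            if abs(s) > abs(merged[-1]):
                merged[-1] = s
        else:
            words.append(tok)
            merged.append(s)
    return words, merged
```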
Evaluation Metrics
Rank Correlation
To compare the correlation between the interpretations of two sentences, we use the Spearman rank correlation metric. The more the ranks of the interpretations agree with each other, the higher the rank correlation. Importantly, we clip negative values of the metric to 0. This is done because a negative correlation does not make sense when only comparing the difference in ranks, and it can spuriously bring down the average scores.
$$\mathrm{R\text{-}Correlation} = \max(0, \mathrm{Spearman}(I_1, I_2)) \quad (4)$$

We report results in Tables 1, 3, 8 and 10 and the corresponding violin graphs (Figures 7, 10) of average Spearman rank-order correlations and standard deviations versus the ratio of words perturbed, for 3 datasets (SST-2, AG News and IMDB), across both models (DistilBERT-uncased and RoBERTa-base), using 2 interpretability methods: INTEGRATED GRADIENT and LIME.
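This clipped correlation is straightforward to compute with SciPy:

```python
from scipy.stats import spearmanr

def rank_correlation(scores_1, scores_2):
    # Eq. 4: Spearman rank correlation, with negative values clipped to 0.
    rho, _ = spearmanr(scores_1, scores_2)
    return max(0.0, rho)
```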
Top-50% Intersection
To compare the extent to which the words with highest attributions are correctly predicted by both the interpretation methods, we use the Top-k% intersection metric. To compute the intersection, we first find the words with the maximum absolute value of attributions (most important for prediction). We calculate the intersection of the top 50% highest attribution words.
$$\mathrm{Intersection} = \frac{|\mathrm{argsort}(I_1) \cap \mathrm{argsort}(I_2)|}{0.5 \cdot \mathrm{length}(I_1)} \quad (5)$$

where argsort returns the indices of the top-50% of the words in a sentence with the highest attributions.
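A sketch of this metric:

```python
import numpy as np

def top_half_intersection(scores_1, scores_2):
    # Eq. 5: overlap between the top-50% tokens (by absolute attribution).
    k = max(1, len(scores_1) // 2)
    top_1 = set(np.argsort(-np.abs(scores_1))[:k])
    top_2 = set(np.argsort(-np.abs(scores_2))[:k])
    return len(top_1 & top_2) / k
```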
Candidate Quality
To judge the quality of the candidates generated using "ExplainFooler", we calculate two quality metrics commonly used in the adversarial attack literature, perplexity and the absolute number of grammar errors, similar to (Li et al., 2020).
Perplexity: We first use perplexity to estimate the fluency of candidates generated using "ExplainFooler". The lower the value, the more fluent the candidates. Perplexity is measured using a small GPT-2 model (50k vocabulary) (Radford et al., 2019).
Grammatical Errors: Estimates the average absolute difference in the number of grammatical errors between the original and candidate sentences. We use the Language Tool (Naber et al., 2003) to compute the errors.
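Perplexity under a small GPT-2 can be sketched with the HuggingFace transformers library:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    # exp of the mean token-level negative log-likelihood under GPT-2.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return float(torch.exp(loss))
```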
Model Choices
The robustness concerns around interpretation strategies challenge their use in critical applications, raising issues such as lack of trust. However, it is unclear what causes the "fragile explanations": the model or the interpretation? We therefore select three different transformer models, namely DistilBERT-uncased (Sanh et al., 2019), RoBERTa-base (Liu et al., 2019) and BERT-base (Devlin et al., 2018), to conduct our experiments. More importantly, we retrain BERT-base to obtain the BERT-base-adv model, an adversarially trained version of the BERT-base model. The rationale behind these choices is to investigate the impact of a model's robustness on the robustness of its interpretations.
(1) First, a generic transformer model like DistilBERT is relatively smaller and faster but less robust than the other two. (2) Next, RoBERTa is extensively better pre-trained and has far more robust performance. (3) Lastly, the BERT-base-adv model is obtained from adversarial training. We use the popular TextFooler algorithm to generate adversarial examples via the open-source package TextAttack. The DistilBERT and RoBERTa models are pre-trained models, fine-tuned on the respective datasets, taken from Huggingface's transformer model hub (Wolf et al., 2020) without change. Differently, the BERT-base-adv model is adversarially trained by attacking 10000 training examples for the IMDB and AG News datasets and attacking all training samples for the SST-2 dataset.
Empirical Results
Rank Order and Top-50% Intersection
The results are reported in a tabular manner across 3 datasets (SST-2, AG News and IMDB), 3 models (DistilBERT, RoBERTa and BERT-adv; Section A, Appendix) and 2 interpretability methods, covering both metrics (L2 Norm and "Delta LOM") and compared against random candidate selection independent of both metrics. The first set of tables (Tables 1 and 3) reports the average rank-order correlation between interpretations from the perturbed and original inputs, across different perturbation ratios in buckets of 10%. The second set of tables (Tables 2 and 4) reports the average top-50% intersection. The rank correlation results for the IMDB dataset are reported only on IG due to excessive computational constraints. Due to space constraints, the results for the AG News and IMDB datasets are reported in the Appendix (Tables 8-11 and Tables 12-13 respectively, Section A.2), along with a more detailed representation of the intra-bucket distribution in the form of violin graphs (Section A.3).

Table 3: Change in average rank-order correlation using metrics L2 Norm, LOM and random selection, computed using the interpretability method LIME, for dataset SST-2 over 3 models: DistilBERT, RoBERTa and BERT-adv.

Table 4: Change in average Top-50% intersection using metrics L2 Norm, LOM and random selection, computed using the interpretability method LIME, for dataset SST-2 over 3 models: DistilBERT, RoBERTa and BERT-adv.

Table 5: Average values of perplexity calculated using a small GPT-2 model over all candidates generated by "ExplainFooler" (C-avg). The values in columns LOM and L2 denote the perplexity values calculated on the sentences selected using the proposed metrics. The average perplexity of the original sentences in each dataset is given in parentheses. Selection using the metrics gives more fluent sentences.

A bucket represents all instances of perturbed candidates whose perturbation ratio falls between its lower and upper bounds. For example, the bucket "0.1-0.2" contains all rank-order correlations from sentences with between 10% and 20% of words perturbed. We also provide violin plots in the appendix showcasing the intra-bucket distribution for the SST-2 dataset (Figures 7-10). We observe that both the average rank-order correlation and top-50% intersection scores decrease as the ratio of words being perturbed increases. These observations imply that interpretations of sentences become increasingly dissimilar to the original sentence as more words are perturbed, even though the prediction robustness of the models remains high (see Table 6, Figure 12). Similar trends are observed across all models and datasets, covering both victim interpretability methods. These empirical observations demonstrate that interpretations generated by INTEGRATED GRADIENT and LIME are fragile for all models, even models that are adversarially more robust (BERT-adv). To further demonstrate the effectiveness of the proposed metrics, we plot violin plots on the SST-2 dataset for average rank correlation under metric-based versus random selection (Figure 5, Appendix).
Quality of candidates
Perplexity: The average perplexity values over all models and datasets are reported in Table 5. For each dataset-model pair, the values corresponding to the proposed metrics and to random selection are reported. It can be observed that candidates selected using the proposed metrics have a lower perplexity score (implying better fluency) than the average over all candidates generated by "ExplainFooler".
Grammatical Errors: Estimates the average absolute difference in the number of grammatical errors between the original and candidate sentences. We use the Language Tool (Naber et al., 2003) to compute the errors. The results for the SST-2 dataset are reported in Table 7.

Table 7: Average number of grammatical errors on candidates generated using "ExplainFooler" on the SST-2 dataset (C-avg). The accompanying values in columns ∆LOM and L2 denote the grammar errors calculated on the sentences selected using the proposed metrics.
Conclusions
The literature shows a growing emphasis on interpretation techniques for explaining NLP model predictions. Our work demonstrates a novel algorithm that generates perturbed inputs providing evidence of fragile interpretations. We demonstrate the effectiveness of our approach across three different models, one of them adversarially trained.
Our results show that it is possible to attack interpretations using simple input-level word swaps under certain constraints. We also demonstrate that both black-box and white-box interpretability approaches (LIME and INTEGRATED GRADIENT) show fragility in their derived interpretations. We hope our findings can pave the way for future studies on defending against the problem of fragile interpretations in NLP.
A Appendix
A.1 Compare with Baseline
Figures 5 and 6 show the decrease in average rank correlation when considering random candidates as opposed to selection using the LOM metric.
A.2 Additional Results
In this section we report the average rank-order correlation and the average top-50% intersection scores for the AG News and IMDB datasets. Tables 8 and 9 correspond to AG News' rank correlation and top-50% scores using INTEGRATED GRADIENT, whereas Tables 10 and 11 show the same values using LIME. Tables 12 and 13 show similar values for the IMDB dataset.
A.3 Violin Plots for intra-bucket distribution analysis
The Violin plots convey more information about the relative distribution of average rank correlations and Top-50% values for various bucket ratios. The following figures are only reported on the SST-2 dataset for each combination of evaluation metric and interpretability methods.
A.4 Visual Results
A few visual results demonstrating the gradual change in interpretations of candidate adversaries are shown in Figure 12. It can be observed that the ∆LOM score gradually increases with word perturbations. The examples show the same 3 sentences from the dataset perturbed under DistilBERT and RoBERTa respectively.

Figure 5: The violin graphs demonstrate the effectiveness of candidate selection based on the proposed metrics LOM and L2 Norm over random selection for the SST-2 dataset. Selection based on the proposed metrics disrupts rank correlation more than randomly selecting candidates.

Figure 6: The violin graphs demonstrate the effectiveness of candidate selection based on the proposed metrics LOM and L2 Norm over random selection for the AG News dataset. Selection based on the proposed metrics disrupts rank correlation more than randomly selecting candidates.

Table 10: Change in average rank-order correlation using metrics L2 Norm, LOM and random selection, computed using the interpretability method LIME, for dataset AG News over 3 models: DistilBERT, RoBERTa and BERT-adv.

Table 11: Change in average Top-50% intersection using metrics L2 Norm, LOM and random selection, computed using the interpretability method LIME, for dataset AG News over 3 models: DistilBERT, RoBERTa and BERT-adv.

Table 13: Change in average rank-order correlation using metrics L2 Norm, LOM and random selection, computed using the interpretability method INTEGRATED GRADIENT, for dataset IMDB over 3 models: DistilBERT, RoBERTa and BERT-adv.
Figure 1: The figure demonstrates the input perturbation process for an increasing number (levels) of word perturbations. The red color depicts negative attribution, and the green shows positive attribution. The saturation of the colors signifies the magnitude of the attributions. Note: the interpretations gradually become more and more different from the original, although the semantic meaning of the sentence does not change drastically.
Figure 2: A schematic diagram of the proposed "ExplainFooler" algorithm. In the figure, the "Perturb" step generates a list of all possible perturbations according to the constraints discussed in Sec. 3.3. The interpretations are generated as discussed in Sec. 3.4. The selection process uses the objective metrics explained in Sec. 3.1.
Figure 3: The figure demonstrates the ∆LOM score for an increasing number of word perturbations. The interpretations gradually become more and more different from the original although the semantic meaning of the sentence does not change drastically. We can see that the model still predicts the original output, but the interpretations become senseless as the ∆LOM score increases. [Best viewed in color]
Figure 4: The figure demonstrates the combining of tokens of a sentence tokenized using DistilBERT's pre-trained tokenizer. The top group of sentences demonstrates the averaging approach and the bottom group of sentences is combined using the Abs-Max approach detailed in Sec. 4.4. [Best viewed in color]
Figure 7: Average rank-correlation for the dataset SST-2, using metric LOM, on models DistilBERT, RoBERTa and BERT-adv, using interpretability method INTEGRATED GRADIENT.
Figure 8: Average rank-correlation for the dataset SST-2, using metric LOM, on models DistilBERT, RoBERTa and BERT-adv, using interpretability method LIME.
Figure 9: Average rank-correlation for the dataset SST-2, using metric L2 Norm, on models DistilBERT, RoBERTa and BERT-adv, using interpretability method INTEGRATED GRADIENT.
Figure 10: Average rank-correlation for the dataset SST-2, using metric L2 Norm, on models DistilBERT, RoBERTa and BERT-adv, using interpretability method LIME.
Figure 11: A few random sentence explanations from the SST-2 dataset calculated on DistilBERT-uncased using INTEGRATED GRADIENT. [Best viewed in color]
Figure 12: The same sentence visualizations calculated on RoBERTa-base. It is clear that RoBERTa is much more robust in making predictions, but both DistilBERT and RoBERTa are susceptible to such attacks on their interpretations. [Best viewed in color]
Table 2: Change in average Top-50% intersection using metrics L2 Norm, LOM and random selection, computed using the interpretability method INTEGRATED GRADIENT, for dataset SST-2 over 3 models: DistilBERT, RoBERTa and BERT-adv.

SST-2      DistilBERT           RoBERTa              BERT-adv
Ratio      L2   ∆LOM  Random    L2   ∆LOM  Random    L2   ∆LOM  Random
0-0.1      0.64 0.7   0.79      0.59 0.66  0.76      0.57 0.68  0.72
0.1-0.2    0.52 0.58  0.65      0.58 0.63  0.7       0.37 0.52  0.59
0.2-0.3    0.46 0.51  0.56      0.52 0.58  0.62      0.34 0.47  0.54
0.3-0.4    0.39 0.43  0.46      0.48 0.54  0.58      0.31 0.36  0.36
0.4-0.5    0.23 0.29  0.46      0.55 0.55  0.54      0.28 0.2   0.24
Table 6: Average model confidence for correct prediction values for an increasing number of words perturbed, over models DistilBERT, RoBERTa and BERT-adv, on datasets SST-2, AG News and IMDB.
Grammatical Errors (lower is better)
Model       C-avg  L2    ∆LOM
DistilBERT  0.59   0.59  0.58
RoBERTa     0.79   0.76  0.75
BERT-adv    0.60   0.51  0.52
Table 8: Change in average rank-order correlation using metrics L2 Norm, LOM and random selection, computed using the interpretability method INTEGRATED GRADIENT, for dataset AG News over 3 models: DistilBERT, RoBERTa and BERT-adv.

Table 9: Change in average Top-50% intersection using metrics L2 Norm, LOM and random selection, computed using the interpretability method INTEGRATED GRADIENT, for dataset AG News over 3 models: DistilBERT, RoBERTa and BERT-adv.

AGNews     DistilBERT           RoBERTa              BERT-adv
Ratio      L2   ∆LOM  Random    L2   ∆LOM  Random    L2   ∆LOM  Random
0-0.1      0.81 0.84  0.86      0.73 0.68  0.82      0.38 0.56  0.63
0.1-0.2    0.72 0.75  0.78      0.65 0.57  0.72      0.32 0.42  0.46
0.2-0.3    0.64 0.66  0.69      0.62 0.52  0.66      0.28 0.32  0.29
0.3-0.4    0.55 0.58  0.58      0.58 0.48  0.62      0.25 0.25  0.26
0.4-0.5    0.49 0.52  0.56      0.52 0.42  0.56      0.18 0.23  0.24
Table 8
AGNews     DistilBERT           RoBERTa              BERT-adv
Ratio      L2   ∆LOM  Random    L2   ∆LOM  Random    L2   ∆LOM  Random
0-0.1      0.64 0.65  0.71      0.7  0.74  0.85      0.48 0.51  0.79
0.1-0.2    0.57 0.58  0.69      0.61 0.64  0.8       0.37 0.4   0.69
0.2-0.3    0.57 0.58  0.62      0.55 0.59  0.77      0.24 0.27  0.64
0.3-0.4    0.53 0.53  0.58      0.52 0.55  0.74      0.22 0.24  0.6
0.4-0.5    0.51 0.52  0.56      0.45 0.5   0.71      0.19 0.24  0.58
Table 9
AGNews     DistilBERT           RoBERTa              BERT-adv
Ratio      L2   ∆LOM  Random    L2   ∆LOM  Random    L2   ∆LOM  Random
0-0.1      0.65 0.69  0.71      0.58 0.57  0.61      0.7  0.61  0.72
0.1-0.2    0.59 0.6   0.62      0.55 0.54  0.56      0.69 0.45  0.7
0.2-0.3    0.53 0.53  0.58      0.54 0.53  0.48      0.65 0.35  0.66
0.3-0.4    0.48 0.52  0.55      0.51 0.51  0.36      0.65 0.28  0.65
0.4-0.5    0.44 0.38  0.46      0.43 0.42  0.43      0.59 0.26  0.61
Table 12: Change in average rank-order correlation using metrics L2 Norm, LOM and random selection, computed using the interpretability method INTEGRATED GRADIENT, for dataset IMDB over 3 models: DistilBERT, RoBERTa and BERT-adv.

IMDB       DistilBERT           RoBERTa              BERT-adv
Ratio      L2   ∆LOM  Random    L2   ∆LOM  Random    L2   ∆LOM  Random
0-0.1      0.7  0.71  0.74      0.73 0.75  0.76      0.61 0.63  0.66
0.1-0.2    0.6  0.63  0.66      0.64 0.66  0.69      0.58 0.61  0.63
0.2-0.3    0.59 0.6   0.63      0.57 0.63  0.65      0.57 0.6   0.61
0.3-0.4    0.56 0.57  0.57      0.57 0.58  0.6       0.55 0.57  0.58
0.4-0.5    0.52 0.52  0.52      0.52 0.55  0.57      0.54 0.54  0.54
Moustafa Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani Srivastava, and Kai-Wei Chang. 2018. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998.
Sebastian Bach, Alexander Binder, Grégoire Montavon, Frederick Klauschen, Klaus-Robert Müller, and Wojciech Samek. 2015. On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation. PloS one, 10(7):e0130140.
Daniel Cer, Yinfei Yang, Sheng-yi Kong, Nan Hua, Nicole Limtiaco, Rhomni St. John, Noah Constant, Mario Guajardo-Cespedes, Steve Yuan, Chris Tar, Yun-Hsuan Sung, Brian Strope, and Ray Kurzweil. 2018. Universal sentence encoder. CoRR, abs/1803.11175.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Ann-Kathrin Dombrowski, Maximillian Alber, Christopher Anders, Marcel Ackermann, Klaus-Robert Müller, and Pan Kessel. 2019. Explanations can be manipulated and geometry is to blame. In Advances in Neural Information Processing Systems, volume 32, pages 13589-13600. Curran Associates, Inc.
Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2017. HotFlip: White-box adversarial examples for text classification. arXiv preprint arXiv:1712.06751.
Ji Gao, Jack Lanchantin, Mary Lou Soffa, and Yanjun Qi. 2018. Black-box generation of adversarial text sequences to evade deep learning classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), pages 50-56. IEEE.
Amirata Ghorbani, Abubakar Abid, and James Zou. 2019. Interpretation of neural networks is fragile. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01):3681-3688.
Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Hamza Harkous, Kassem Fawaz, Rémi Lebret, Florian Schaub, Kang G. Shin, and Karl Aberer. 2018. Polisis: Automated analysis and presentation of privacy policies using deep learning. In Proceedings of the 27th USENIX Conference on Security Symposium, SEC'18, pages 531-548, USA. USENIX Association.
Mohit Iyyer, John Wieting, Kevin Gimpel, and Luke Zettlemoyer. 2018. Adversarial example generation with syntactically controlled paraphrase networks. CoRR, abs/1804.06059.
Google Jigsaw. 2017. Perspective API. https://www.perspectiveapi.com/.
Di Jin, Zhijing Jin, Joey Tianyi Zhou, and Peter Szolovits. 2020. Is BERT really robust? A strong baseline for natural language attack on text classification and entailment. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05):8018-8025.
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, and Orion Reblitz-Richardson. 2020. Captum: A unified and generic model interpretability library for PyTorch.
Volodymyr Kuleshov, Shantanu Thakoor, Tingfung Lau, and Stefano Ermon. 2018. Adversarial examples for natural language classification problems.
Himabindu Lakkaraju, Nino Arsov, and Osbert Bastani. 2020. Robust and stable black box explanations. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 5628-5638. PMLR.
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, and Bill Dolan. 2020. Contextualized perturbation for textual adversarial attack. arXiv preprint arXiv:2009.07502.
Jinfeng Li, Shouling Ji, Tianyu Du, Bo Li, and Ting Wang. 2018. TextBugger: Generating adversarial text against real-world applications. arXiv preprint arXiv:1812.05271.
Jiwei Li, Xinlei Chen, Eduard Hovy, and Dan Jurafsky. 2015. Visualizing and understanding neural models in NLP. arXiv preprint arXiv:1506.01066.
Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220.
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, and Wenchang Shi. 2017. Deep text classification can be fooled. arXiv preprint arXiv:1704.08006.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar S. Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke S. Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. ArXiv, abs/1907.11692.
Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Volume 1, pages 142-150. Association for Computational Linguistics.
John X. Morris, Eli Lifland, Jin Yong Yoo, Jake Grigsby, Di Jin, and Yanjun Qi. 2020. TextAttack: A framework for adversarial attacks, data augmentation, and adversarial training in NLP.
Nikola Mrksic, Diarmuid Ó Séaghdha, Blaise Thomson, Milica Gasic, Lina Maria Rojas-Barahona, Pei-hao Su, David Vandyke, Tsung-Hsien Wen, and Steve J. Young. 2016. Counter-fitting word vectors to linguistic constraints. In HLT-NAACL.
Daniel Naber et al. 2003. A rule-based style and grammar checker.
Syunya Okuno, Hiroki Asai, and Hayato Yamana. 2014. A challenge of authorship identification for ten-thousand-scale microblog users. In 2014 IEEE International Conference on Big Data (Big Data), pages 52-54. IEEE.
Nicolas Papernot, Patrick McDaniel, Ananthram Swami, and Richard Harang. 2016. Crafting adversarial input sequences for recurrent neural networks. In Military Communications Conference, MILCOM 2016, pages 49-54. IEEE.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016a. Model-agnostic interpretability of machine learning. arXiv preprint arXiv:1606.05386.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016b. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144. ACM.
Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2018. Semantically equivalent adversarial rules for debugging NLP models. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 856-865.
Laura Rieger and Lars Kai Hansen. 2020. A simple defense against adversarial attacks on heatmap explanations. arXiv preprint arXiv:2007.06381.
Suranjana Samanta and Sameep Mehta. 2017. Towards crafting text adversarial samples. arXiv preprint arXiv:1707.02812.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.
Avanti Shrikumar, Peyton Greenside, and Anshul Kundaje. 2017. Learning important features through propagating activation differences. In International Conference on Machine Learning, pages 3145-3153. PMLR.
Dylan Slack, Sophie Hilgard, Emily Jia, Sameer Singh, and Himabindu Lakkaraju. 2019. How can we fool LIME and SHAP? Adversarial attacks on post hoc explanation methods.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.
Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 3319-3328. JMLR.org.
Junlin Wang, Jens Tuyls, Eric Wallace, and Sameer Singh. 2020. Gradient-based analysis of NLP models is manipulable. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 247-258, Online. Association for Computational Linguistics.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Muhammad Bilal Zafar, Michele Donini, Dylan Slack, Cédric Archambeau, Sanjiv Das, and Krishnaram Kenthapadi. 2021. On the lack of robust interpretability of neural text classifiers. arXiv preprint arXiv:2106.04631.
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649-657.
Zhengli Zhao, Dheeru Dua, and Sameer Singh. 2017. Generating natural adversarial examples. arXiv preprint arXiv:1710.11342.
Xinyi Zhou, Reza Zafarani, Kai Shu, and Huan Liu. 2019. Fake news: Fundamental theories, detection strategies and challenges. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 836-837.
| [] |
[
"Approximating How Single Head Attention Learns",
"Approximating How Single Head Attention Learns"
] | [
"Charlie Snell csnell22@berkeley.edu \nComputer Science Division\nUniversity of California\nBerkeley\n",
"Ruiqi Zhong ruiqi-zhong@berkeley.edu \nComputer Science Division\nUniversity of California\nBerkeley\n",
"Dan Klein klein@berkeley.edu \nComputer Science Division\nUniversity of California\nBerkeley\n",
"Jacob Steinhardt jsteinhardt@berkeley.edu \nComputer Science Division\nUniversity of California\nBerkeley\n"
] | [
"Computer Science Division\nUniversity of California\nBerkeley",
"Computer Science Division\nUniversity of California\nBerkeley",
"Computer Science Division\nUniversity of California\nBerkeley",
"Computer Science Division\nUniversity of California\nBerkeley"
] | [] | Why do models often attend to salient words, and how does this evolve throughout training? We approximate model training as a two stage process: early on in training when the attention weights are uniform, the model learns to translate individual input word i to o if they cooccur frequently. Later, the model learns to attend to i while the correct output is o because it knows i translates to o. To formalize, we define a model property, Knowledge to Translate Individual Words (KTIW) (e.g. knowing that i translates to o), and claim that it drives the learning of the attention. This claim is supported by the fact that before the attention mechanism is learned, KTIW can be learned from word co-occurrence statistics, but not the other way around. Particularly, we can construct a training distribution that makes KTIW hard to learn, the learning of the attention fails, and the model cannot even learn the simple task of copying the input words to the output. Our approximation explains why models sometimes attend to salient words, and inspires a toy example where a multi-head attention model can overcome the above hard training distribution by improving learning dynamics rather than expressiveness. We end by discussing the limitation of our approximation framework and suggest future directions. | null | [
"https://arxiv.org/pdf/2103.07601v3.pdf"
] | 232,232,786 | 2103.07601 | 4a5c7c2afeeadbc4f7c0a9101fe6ed5d8f624506 |
Approximating How Single Head Attention Learns
Charlie Snell csnell22@berkeley.edu
Computer Science Division
University of California
Berkeley
Ruiqi Zhong ruiqi-zhong@berkeley.edu
Computer Science Division
University of California
Berkeley
Dan Klein klein@berkeley.edu
Computer Science Division
University of California
Berkeley
Jacob Steinhardt jsteinhardt@berkeley.edu
Computer Science Division
University of California
Berkeley
Approximating How Single Head Attention Learns
Why do models often attend to salient words, and how does this evolve throughout training? We approximate model training as a two stage process: early on in training when the attention weights are uniform, the model learns to translate individual input word i to o if they cooccur frequently. Later, the model learns to attend to i while the correct output is o because it knows i translates to o. To formalize, we define a model property, Knowledge to Translate Individual Words (KTIW) (e.g. knowing that i translates to o), and claim that it drives the learning of the attention. This claim is supported by the fact that before the attention mechanism is learned, KTIW can be learned from word co-occurrence statistics, but not the other way around. Particularly, we can construct a training distribution that makes KTIW hard to learn, the learning of the attention fails, and the model cannot even learn the simple task of copying the input words to the output. Our approximation explains why models sometimes attend to salient words, and inspires a toy example where a multi-head attention model can overcome the above hard training distribution by improving learning dynamics rather than expressiveness. We end by discussing the limitation of our approximation framework and suggest future directions.
Introduction
The attention mechanism underlies many recent advances in natural language processing, such as machine translation (Bahdanau et al., 2015) and pretraining (Devlin et al., 2019). While many works focus on analyzing attention in already-trained models (Jain and Wallace, 2019;Vashishth et al., 2019;Brunner et al., 2019), little is understood about how the attention mechanism is learned via gradient descent at training time.
These learning dynamics are important, as standard, gradient-trained models can have very unique inductive biases, distinguishing them from more esoteric but equally accurate models. For example, in text classification, while standard models typically attend to salient (high gradient influence) words (Serrano and Smith, 2019a), recent work constructs accurate models that attend to irrelevant words instead (Wiegreffe and Pinter, 2019a;Pruthi et al., 2020). In machine translation, while the standard gradient descent cannot train a high-accuracy transformer with relatively few attention heads, we can construct one by first training with more heads and then pruning the redundant heads (Voita et al., 2019;Michel et al., 2019). To explain these differences, we need to understand how attention is learned at training time.
Our work opens the black box of attention training, focusing on attention in LSTM Seq2Seq models (Luong et al., 2015) (Section 2.1). Intuitively, if the model knows that the input individual word i translates to the correct output word o, it should attend to i to minimize the loss. This motivates us to investigate the model's knowledge to translate individual words (abbreviated as KTIW), and we define a lexical probe β to measure this property.
We claim that KTIW drives the attention mechanism to be learned. This is supported by the fact that KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3). Specifically, even when the attention weights are frozen to be uniform, probe β still strongly agrees with the attention weights of a standardly trained model. On the other hand, when KTIW cannot be learned, the attention mechanism cannot be learned. Particularly, we can construct a distribution where KTIW is hard to learn; as a result, the model fails to learn a simple task of copying the input to the output. Now the problem of understanding how attention mechanism is learned reduces to understanding how KTIW is learned. Section 2.3 builds a simpler proxy model that approximates how KTIW is learned, and Section 3.2 verifies empirically that the approximation is reasonable. This proxy model is simple enough to analyze and we interpret its training dynamics with the classical IBM Translation Model 1 (Section 4.2), which translates individual word i to o if they co-occur more frequently.
To collapse this chain of reasoning, we approximate model training in two stages. Early on in training when the attention mechanism has not been learned, the model learns KTIW through word cooccurrence statistics; KTIW later drives the learning of the attention.
Using these insights, we explain why attention weights sometimes correlate with word saliency in binary text classification (Section 5.1): the model first learns to "translate" salient words into labels, and then attend to them. We also present a toy experiment (Section 5.2) where multi-head attention improves learning dynamics by combining differently initialized attention heads, even though a single head model can express the target function.
Nevertheless, "all models are wrong". Even though our framework successfully explains and predicts the above empirical phenomena, it cannot fully explain the behavior of attention-based models, since approximations are after all less accurate. Section 6 identifies and discusses two key assumptions: (1) information of a word tends to stay in the local hidden state (Section 6.1) and (2) attention weights are free variables (Section 6.2). We discuss future directions in Section 7.
Model
Section 2.1 defines the LSTM with attention Seq2Seq architecture. Section 2.2 defines the lexical probe β, which measures the model's knowledge to translate individual words (KTIW). Section 2.3 approximates how KTIW is learned early on in training by building a "bag of words" proxy model. Section 2.4 shows that our framework generalizes to binary classification.
Machine Translation Model
We use the dot-attention variant from Luong et al. (2015). The model maps from an input sequence {x_l} with length L to an output sequence {y_t} with length T. We first use LSTM encoders to embed {x_l} ⊂ I and {y_t} ⊂ O respectively, where I and O are the input and output vocab spaces, and obtain encoder and decoder hidden states {h_l} and {s_t}. Then we calculate the attention logits a_{t,l} by applying a learnable mapping to h_l and s_t, and use softmax to obtain the attention weights α_{t,l}:

$$a_{t,l} = s_t^{\top} W h_l; \qquad \alpha_{t,l} = \frac{e^{a_{t,l}}}{\sum_{l'=1}^{L} e^{a_{t,l'}}} \quad (1)$$
Next we sum the encoder hidden states {h_l} weighted by the attention to obtain the "context vector" c_t, concatenate it with the decoder state s_t, and obtain the output vocabulary probabilities p_t by applying a learnable neural network N with one hidden layer and softmax activation at the output:

$$c_t = \sum_{l=1}^{L} \alpha_{t,l} h_l; \qquad p_t = N([c_t, s_t]). \tag{2}$$
We train the model by minimizing the sum of the negative log likelihoods of all the output words y_t:

$$\mathcal{L} = -\sum_{t=1}^{T} \log p_{t, y_t}. \tag{3}$$
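For concreteness, the following is a minimal PyTorch sketch of one decoder step of Equations 1-3. The tensor shapes, variable names, and the module N are our own illustrative assumptions, not the authors' released code; N here stands for the one-hidden-layer output network described above.

```python
# A minimal PyTorch sketch of Equations 1-3 for one decoder step.
# Shapes and names are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def attention_step(h, s_t, W, N, y_t):
    """h: (L, d) encoder states; s_t: (d,) decoder state;
    W: (d, d) bilinear map; N: output network; y_t: gold word id."""
    a_t = h @ (W.t() @ s_t)               # logits a_{t,l} = s_t^T W h_l (Eq. 1)
    alpha_t = F.softmax(a_t, dim=0)       # attention weights alpha_{t,l}
    c_t = alpha_t @ h                     # context vector c_t (Eq. 2)
    p_t = F.softmax(N(torch.cat([c_t, s_t])), dim=0)
    return alpha_t, -torch.log(p_t[y_t])  # per-step NLL summand of Eq. 3
```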
Lexical Probe β
We define the lexical probe β_{t,l} as:

$$\beta_{t,l} := N([h_l, s_t])_{y_t}, \tag{4}$$
which means "the probability assigned to the correct word y_t, if the network attends only to the input encoder state h_l". If we assume that h_l only contains information about x_l, β closely reflects KTIW, since β can be interpreted as "the probability that x_l is translated to the output y_t". Heuristically, to minimize the loss, the attention weights α should be attracted to positions with larger β_{t,l}.¹ Hence, we expect the learning of the attention to be driven by KTIW (Figure 1, left). We then discuss how KTIW is learned.
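A small sketch of the probe in Equation 4: feed a single encoder state h_l in place of the attention-averaged context and read off the probability of the gold word. It reuses the names from the sketch above; everything else is an assumption.

```python
# A sketch of the lexical probe in Eq. 4, reusing h, s_t, N, y_t from the
# attention sketch above. Names are illustrative assumptions.
import torch
import torch.nn.functional as F

def lexical_probe(h, s_t, N, y_t):
    probs = [F.softmax(N(torch.cat([h[l], s_t])), dim=0)[y_t]
             for l in range(h.shape[0])]
    return torch.stack(probs)  # beta_{t, :}, shape (L,)
```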
Early Dynamics of Lexical Knowledge
Figure 1: Attention mechanism in recurrent models (left, Section 2.1) and word alignments in the classical model (right, Section 4.2) are learned similarly. Both first learn how to translate individual words (KTIW) under uniform attention weights/alignment at the start of training (upper, blue background), which then drives the attention mechanism/alignment to be learned (lower, red background). For example, on the right: (1) we count all co-occurrences of the input and output words; (2) Trans(lation)("movie" | input) is estimated from each row of the count table, e.g., Trans(movie | Film) = 20/(4 + 2 + 15 + 20 + 21) = .32, versus Trans(movie | Dieser) = .04, Trans(movie | ist) = .04, Trans(movie | grobartig) = .04, and Trans(movie | schlecht) = .03, so "Film" is more likely to translate to "movie" (β_{2,1} = .04, β_{2,2} = .32, β_{2,3} = .04, β_{2,4} = .03); and (3) the alignment α — how much each input word contributes towards the 2nd output word "movie" — is attracted to "Film".

To approximate how KTIW is learned early on in training, we build a proxy model by making a few simplifying assumptions. First, since attention weights are uniform early on in training, we replace the attention distribution with a uniform one. Second, since we are defining individual word translation, we assume that information about each word is localized to its corresponding hidden state. Therefore, similar to Sun and Lu (2020), we replace h_l with an input word embedding e_{x_l} ∈ ℝ^d, where e represents the word embedding matrix and d is the embedding dimension. Third, to simplify analysis, we assume N only contains one linear layer W ∈ ℝ^{|O|×d} before the softmax activation, and we ignore the decoder state s_t. Putting these assumptions together, we now define a new proxy model that produces the output vocabulary probability p_t:
$$\forall t, \quad p_t = \sigma\left(\frac{1}{L}\sum_{l=1}^{L} W e_{x_l}\right). \tag{5}$$
On a high level, this proxy averages the embeddings of the input "bag of words", and produces a distribution over output vocabs to predict the output "bag of words". This implies that the sets of input and output words for each sentence pair are sufficient statistics for this proxy.
The probe β^px can be similarly defined as:

$$\beta^{px}_{t,l} = \sigma(W e_{x_l})_{y_t}. \tag{6}$$
We provide more intuitions on how this proxy learns in Section 4.
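The following is a self-contained sketch of the proxy model in Equations 5-6. The notation W and e mirrors the paper's symbols; the vocabulary sizes and random initialization scale are our own illustrative assumptions.

```python
# A self-contained sketch of the proxy model (Eqs. 5-6). W and e mirror
# the paper's notation; the sizes are illustrative assumptions.
import torch
import torch.nn.functional as F

d, in_vocab, out_vocab = 256, 8000, 8000
e = torch.randn(in_vocab, d) / d ** 0.5    # input word embeddings
W = torch.randn(out_vocab, d) / d ** 0.5   # output projection

def proxy_forward(x):
    """x: (L,) LongTensor of input word ids. Returns p_t of Eq. 5,
    which is the same for every output step t."""
    return F.softmax(W @ e[x].mean(dim=0), dim=0)

def proxy_probe(x, y_t):
    """beta^px_{t,l} of Eq. 6 for every input position l."""
    return F.softmax(e[x] @ W.T, dim=1)[:, y_t]
```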
Binary Classification Model
Binary classification can be reduced to "machine translation" where T = 1 and |O| = 2. We drop the subscript t = 1 when discussing classification. We use the standard architecture from Wiegreffe and Pinter (2019a). After obtaining the encoder hidden states {h_l}, we calculate the attention logits a_l by applying a feed-forward neural network with one hidden layer, and take the softmax of a to obtain the attention weights α:
$$a_l = v^T \mathrm{ReLU}(Q h_l); \qquad \alpha_l = \frac{e^{a_l}}{\sum_{l'=1}^{L} e^{a_{l'}}}, \tag{7}$$
where Q and v are learnable.
We sum the hidden states {h_l} weighted by the attention, feed the result to a final linear layer, and apply the sigmoid activation function (σ) to obtain the probability of the positive class:

$$p_{pos} = \sigma\left(W^T \sum_{l=1}^{L} \alpha_l h_l\right) = \sigma\left(\sum_{l=1}^{L} \alpha_l W^T h_l\right). \tag{8}$$
Similar to the machine translation model (Section 2.1), we define the lexical probe:

$$\beta_l := \sigma((2y - 1) W^T h_l), \tag{9}$$
where y ∈ {0, 1} is the label and 2y − 1 ∈ {−1, 1} controls the sign. On a high level, Sun and Lu (2020) focus on binary classification and provide almost exactly the same arguments as ours. Specifically, their polarity score s_l equals β_l / (1 − β_l) in our context, and they provide a more subtle analysis of how the attention mechanism is learned in binary classification.
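A minimal sketch of the classification probe in Equation 9; here h is the (L, d) matrix of encoder states, w the final layer's weight vector, and y ∈ {0, 1} the label. All names are our assumptions.

```python
# A sketch of the classification probe (Eq. 9). Names are assumptions.
import torch

def classification_probe(h, w, y):
    """beta_l = sigmoid((2y - 1) * w^T h_l) for every position l."""
    return torch.sigmoid((2 * y - 1) * (h @ w))  # shape (L,)
```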
Empirical Evidence
We provide evidence that KTIW drives the learning of the attention early on in training: KTIW can be learned when the attention mechanism has not been learned (Section 3.2), but not the other way around (Section 3.3).
Measuring Agreement
We start by describing how to evaluate the agreement between quantities of interest, such as α and β. For any input-output sentence pair (x^m, y^m) and each output index t, the quantities α^m_t, β^m_t, β^{px,m}_t ∈ ℝ^{L^m} all associate each input position l with a real number. Since attention weights and word alignments tend to be sparse, we focus on the agreement of the highest-valued position. Suppose u, v ∈ ℝ^L; we formally define the agreement of v with u as:
$$A(u, v) := \mathbb{1}\left[\left|\{j \mid v_j > v_{\arg\max_i u_i}\}\right| < 5\% \cdot L\right], \tag{10}$$
which means "whether the highest-valued position (dimension) in u is in the top 5% highest-valued positions in v". We average the A values across all output words on the validation set to measure the agreement between two model properties. We also report Kendall's τ rank correlation coefficient in Appendix A.4 for completeness.
We denote its random baseline as Â. Â is close to, but not exactly, 5% because of integer rounding.
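The agreement metric of Equation 10 is a one-liner; the following NumPy sketch makes the threshold explicit (array layouts are our assumption).

```python
# A numpy sketch of the agreement metric in Eq. 10: agreement holds if
# fewer than 5% of positions in v score above v at u's argmax.
import numpy as np

def agreement(u, v):
    above = np.sum(v > v[np.argmax(u)])
    return float(above < 0.05 * len(u))
```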
Contextualized Agreement Metric. However, since different datasets have different sentence length distributions and different variance of attention weights across random seeds, this agreement metric can be hard to interpret directly. Therefore, we contextualize it with model performance. We use the standard method to train a model to convergence in T steps and denote its attention weights as α; next we train the same model from scratch again using another random seed. We denote its attention weights at training step τ as α̂(τ) and its performance as p̂(τ). Roughly speaking, when τ < T, both A(α, α̂(τ)) and p̂(τ) increase as τ increases. We define the contextualized agreement ξ as:
$$\xi(u, v) := \hat{p}\left(\inf\{\tau \mid A(\alpha, \hat{\alpha}(\tau)) > A(u, v)\}\right). \tag{11}$$

In other words, we find the training step τ₀ at which the attention weights α̂(τ₀) agree with the standard attention weights α more than u and v agree, and report the performance at this iteration. See Figure 2. We refer to the model performance when training finishes (τ = T) as ξ*. Table 1 lists the rough intuition for each abstract symbol.
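A small sketch of how ξ might be computed from saved checkpoints. The data layout (`checkpoints` as a step-ordered list of (attention, performance) pairs from the rerun) and the final-checkpoint fallback are our assumptions, not the authors' implementation.

```python
# A sketch of Eq. 11. alpha_std is the converged attention of a standard
# run; checkpoints is a step-ordered list of (alpha_hat, performance)
# pairs from a rerun with another seed. Layout and fallback are assumed.
def contextualized_agreement(a_uv, alpha_std, checkpoints, agree):
    for alpha_hat, perf in checkpoints:
        if agree(alpha_std, alpha_hat) > a_uv:
            return perf  # performance at the first such step tau_0
    return checkpoints[-1][1]
```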
Table 2: The tasks above the horizontal line are classification and below are translation. The (contextualized) agreement metric A (ξ) is described in Section 3.1. Across all tasks, A(α, β), A(α, β^uf), and A(β^uf, β^px) significantly outperform the random baseline Â, and the corresponding contextualized interpretations ξ are also non-trivial. This implies that 1) the proxy model from Section 2.3 approximates well how KTIW is learned, 2) the attention weights α and the probe β of KTIW strongly agree, and 3) KTIW can still be learned when the attention weights are uniform.
Datasets. We evaluate the agreement metrics A and ξ on multiple machine translation and text classification datasets. For machine translation, we use Multi-30k (En-De), IWSLT'14 (De-En), and News Commentary v14 (En-Nl, En-Pt, and It-Pt). For text classification, we use IMDB Sentiment Analysis, AG News Corpus, 20 Newsgroups (20 NG), Stanford Sentiment Treebank, Amazon review, and Yelp Open Data Set. All of them are in English. The details and citations of these datasets can be found in Appendix A.5. We use token accuracy² to evaluate the performance of translation models and accuracy to evaluate the classification models. Due to space limits we round to integers and include a subset of datasets in Table 2 in the main paper. Appendix Table 5 includes the full results.
KTIW Learns under Uniform Attention
Even when the attention mechanism has not been learned, KTIW can still be learned. We train the same model architecture with the attention weights frozen to be uniform, and denote its lexical probe as β^uf. Across all tasks, A(α, β^uf) and A(β^uf, β^px)³ significantly outperform the random baseline Â, and the contextualized agreement ξ(α, β^uf) is also non-trivial. This indicates that 1) the proxy we built in Section 2.3 approximates KTIW and 2) even when the attention weights are uniform, KTIW is still learned.
Attention Fails When KTIW Fails
We consider a simple task of copying the input to the output, where each input is a permutation of the same set of 40 vocab types. Under this training distribution, the proxy model provably cannot learn: every input-output pair contains the exact same set of input-output words.⁴ As a result, our framework predicts that KTIW is unlikely to be learned, and hence the learning of attention is likely to fail.
The training curves for learning to copy the permutations are shown in Figure 3 (left), colored in red: the model sometimes fails to learn. For the control experiment, if we randomly sample and permute 40 vocabs from 60 vocab types as training samples, the model successfully learns (blue curve) from this distribution every time. Therefore, even if the model is able to express this task, it might fail to learn it when KTIW is not learned. The same qualitative conclusion holds for a training distribution that mixes permutations of two disjoint sets of words (Figure 3, right), and Appendix A.3 illustrates the intuition.

³ Empirically, β^px converges to the unigram weights of a bag-of-words logistic regression model, and hence β^px does capture an interpretable notion of "keywords" (Appendix A.10).
⁴ We provide more intuitions on this in Section 4.
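To make the two copy-task training distributions concrete, here is a small data-generation sketch; integer word ids and function names are our assumptions.

```python
# A sketch of the two copy-task training distributions in this section,
# using integer word ids; details are our assumptions.
import random

def permutation_example(n=40):
    """Hard case: every input is a permutation of the same n types, so
    co-occurrence statistics are identical across examples."""
    x = random.sample(range(n), n)
    return x, list(x)  # copy task: output equals input

def control_example(seq_len=40, vocab=60):
    """Control: sample and permute seq_len of vocab types, so
    co-occurrence statistics differ across examples."""
    x = random.sample(range(vocab), seq_len)
    return x, list(x)
```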
For binary classification, it follows from the model definition that the attention mechanism cannot be learned if KTIW cannot be learned, since
$$p_{correct} = \sigma\left(\sum_{l=1}^{L} \alpha_l\, \sigma^{-1}(\beta_l)\right); \qquad \sigma(x) = \frac{1}{1 + e^{-x}}, \tag{12}$$
and the model needs to attend to positions with higher β in order to predict correctly and minimize the loss. For completeness, we include results in Appendix A.6 where we freeze β and find that the learning of the attention fails.

Section 2.3 built a simple proxy model to approximate how KTIW is learned when the attention weights are uniform early on in training, and Section 3.2 verified that such an approximation is empirically sound. However, it is still hard to intuitively reason about how this proxy model learns. This section provides more intuition by connecting its initial gradient (Section 4.1) to the classical IBM Model 1 alignment algorithm (Brown et al., 1993) (Section 4.2).
Derivative at Initialization
We continue from the end of Section 2.3. For each input word i and output word o, we are interested in understanding the probability that i assigns to o, defined as:
$$\theta^{px}_{i,o} := \sigma(W e_i)_o. \tag{13}$$
This quantity is directly tied to β^px, since β^{px}_{t,l} = θ^{px}_{x_l, y_t}. Using the superscript m to index sentence pairs in the dataset, the total loss L is:
$$\mathcal{L} = -\sum_m \sum_{t=1}^{T^m} \log\left(\sigma\left(\frac{1}{L^m}\sum_{l=1}^{L^m} W e_{x^m_l}\right)_{y^m_t}\right). \tag{14}$$
Suppose each e_i and W_o is independently initialized from a normal distribution N(0, I_d/d) and we minimize L over W and e using gradient flow; then the values of e and W are uniquely defined at each continuous time step τ. By some straightforward but tedious calculations (details in Appendix A.2), the derivative of θ^{px}_{i,o} when training starts is:
$$\lim_{d\to\infty} \frac{\partial \theta^{px}_{i,o}}{\partial \tau}(\tau = 0) \xrightarrow{p} 2\left(C^{px}_{i,o} - \frac{1}{|O|}\sum_{o' \in O} C^{px}_{i,o'}\right), \tag{15}$$
where →ᵖ means convergence in probability and C^{px}_{i,o} is defined as

$$C^{px}_{i,o} := \sum_m \sum_{l=1}^{L^m} \sum_{t=1}^{T^m} \frac{1}{L^m}\, \mathbb{1}[x^m_l = i]\, \mathbb{1}[y^m_t = o]. \tag{16}$$
Equation 15 tells us that β^{px}_{t,l} = θ^{px}_{x_l, y_t} is likely to be larger if C^{px}_{x_l, y_t} is large. The definition of C seems hard to interpret from Equation 16, but in the next subsection we will find that this quantity naturally corresponds to the "count table" used in the classical IBM Model 1 alignment learning algorithm.
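A short sketch of how the count table of Equation 16 could be accumulated over a corpus of (input, output) word-id lists; the corpus layout is our assumption.

```python
# A sketch of the count table in Eq. 16 over a corpus of (input, output)
# pairs of word-id lists; Counter keeps the table sparse.
from collections import Counter

def count_table(corpus):
    C = Counter()
    for x, y in corpus:
        w = 1.0 / len(x)        # the 1 / L^m weight
        for i in x:
            for o in y:
                C[(i, o)] += w  # add weight for every co-occurrence
    return C
```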
IBM Model 1 Alignment Learning
The classical alignment algorithm aims to learn which input word is responsible for each output word (e.g., knowing that y₂ "movie" aligns to x₂ "Film" in Figure 1, upper left) from a set of input-output sentence pairs. IBM Model 1 (Brown et al., 1993) starts with a 2-dimensional count table C^IBM indexed by i ∈ I and o ∈ O, denoting input and output vocabs. Whenever vocabs i and o co-occur in an input-output pair, we add 1/L to the C^{IBM}_{i,o} entry (steps 1 and 2 in Figure 1, right). After updating C^IBM over the entire dataset, C^IBM is exactly the same as C^px defined in Equation 16. We drop the superscript of C to keep the notation uncluttered.
Given C, the classical model estimates a probability distribution of "what output word o does the input word i translate to" (Figure 1, right, step 3) as

$$\mathrm{Trans}(o \mid i) = \frac{C_{i,o}}{\sum_{o'} C_{i,o'}}. \tag{17}$$
In a pair of sequences ({x_l}, {y_t}), the probability β^IBM that x_l is translated to the output y_t is:

$$\beta^{IBM}_{t,l} := \mathrm{Trans}(y_t \mid x_l), \tag{18}$$
and the alignment probability α^IBM that "x_l rather than some other x_{l'} is responsible for outputting y_t" is

$$\alpha^{IBM}(t, l) = \frac{\beta^{IBM}_{t,l}}{\sum_{l'=1}^{L} \beta^{IBM}_{t,l'}}, \tag{19}$$
which monotonically increases with respect to β^{IBM}_{t,l}. See Figure 1, right, step 5.
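A small sketch of Equations 17-19 on top of the count table above; helper names are our assumptions.

```python
# A sketch of Eqs. 17-19 on top of the count table above.
from collections import Counter

def translation_table(C):
    """Trans(o|i) = C[i,o] / sum_o' C[i,o'] (Eq. 17)."""
    row = Counter()
    for (i, _), c in C.items():
        row[i] += c
    return {(i, o): c / row[i] for (i, o), c in C.items()}

def beta_alpha(trans, x, y):
    """beta (Eq. 18) and alpha (Eq. 19) for one sentence pair (x, y)."""
    beta, alpha = [], []
    for o in y:
        b = [trans.get((i, o), 0.0) for i in x]
        s = sum(b)
        beta.append(b)
        alpha.append([v / s if s > 0 else 1.0 / len(x) for v in b])
    return beta, alpha
```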
Visualizing Aforementioned Tasks
Figure 1 (right) visualizes the count table C for the machine translation task, and illustrates how KTIW is learned and drives the learning of attention. We provide similar visualizations for why KTIW is hard to learn under a distribution of vocab permutations (Section 3.3) in Figure 4, and for how word polarity is learned in binary classification (Section 2.4) in Figure 5.
Application
Interpretability in Classification
We use a gradient-based method (Ebrahimi et al., 2018) to approximate the influence ∆_l of each input word x_l. The column A(∆, β^uf) reports the agreement between ∆ and β^uf, and it significantly outperforms the random baseline. Since KTIW initially drives the attention mechanism to be learned, this explains why attention weights are correlated with word saliency on many classification tasks, even though the training objective does not explicitly reward this.
Multi-head Improves Training Dynamics
We saw in Section 3.3 that learning to copy sequences under a distribution of permutations is hard and the model can fail to learn; however, sometimes it is still able to learn. Can we improve learning and overcome this hard distribution by ensembling several attention parameters together?
We introduce a multi-head attention architecture that sums the context vectors obtained by each head. Suppose there are K heads, each indexed by k; then, similar to Section 2.1:
$$a^{(k)}_{t,l} = s_t^T W^{(k)} h_l; \qquad \alpha^{(k)}_{t,l} = \frac{e^{a^{(k)}_{t,l}}}{\sum_{l'=1}^{L} e^{a^{(k)}_{t,l'}}}, \tag{20}$$
and the context vectors and the final probability p_t are defined as:

$$c^{(k)}_t = \sum_{l=1}^{L} \alpha^{(k)}_{t,l} h_l; \qquad p_t = N\left(\left[\sum_{k=1}^{K} c^{(k)}_t,\ s_t\right]\right), \tag{21}$$
where the W^{(k)} are different learnable parameters.
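A minimal sketch of the multi-head context computation in Equations 20-21; `Ws` as a list of K weight matrices, and the shapes of h and s_t, are our assumptions carried over from the earlier attention sketch.

```python
# A sketch of Eqs. 20-21: K independently initialized heads whose
# context vectors are summed before the output network.
import torch.nn.functional as F

def multi_head_context(h, s_t, Ws):
    c = 0
    for W_k in Ws:
        alpha_k = F.softmax(h @ (W_k.t() @ s_t), dim=0)  # Eq. 20
        c = c + alpha_k @ h                               # Eq. 21 summand
    return c  # feed [c, s_t] into N to obtain p_t
```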
We call W_init a good initialization if training with this single head converges, and bad otherwise. We use rejection sampling to find good/bad head initializations and combine them to form 8-head (K = 8) attention models. We experiment with 3 scenarios: (1) all head initializations are bad, (2) only one initialization is good, and (3) initializations are sampled independently at random. Figure 6 presents the training curves. If all head initializations are bad, the model fails to converge (red). However, as long as one of the eight initializations is good, the model can converge (blue). As the number of heads increases, the probability that all initializations are bad becomes exponentially small when initializations are sampled independently; hence the model converges with very high probability (green). In this experiment, multi-head attention improves not by increasing expressiveness, since one head is sufficient to accomplish the task, but by improving the learning dynamics.

Figure 6: If all head initializations (head-inits) are bad (red), the model is likely to fail; if one of the head-inits is good (blue), it is likely to learn; with high chance, at least one out of eight random head-inits is good (green). We used 20 random seeds for each setting.
Assumptions
We revisit the approximation assumptions used in our framework. Section 6.1 discusses whether the lexical probe β_{t,l} necessarily reflects local information about the input word x_l, and Section 6.2 discusses whether attention weights can be freely optimized to attend to large β. These assumptions are accurate enough to predict the phenomena in Sections 3 and 5, but they are not always true and hence warrant further research. We provide simple examples where these assumptions might fail.
β Remains Local
We use a toy classification task to show that early on in training, as expected, β^uf is larger near positions that contain the keyword. However, counterintuitively, β^uf_L (β at the last position in the sequence) becomes the largest if we train the model for too long under uniform attention weights.
In this toy task, each input is a length-40 sequence of words sampled from {1, ..., 40} uniformly at random; a sequence is positive if and only if the keyword "1" appears in the sequence. We restrict "1" to appear only once in each positive sequence, and use rejection sampling to balance positive and negative examples. Let l* be the position where x_{l*} = 1.
For the positive sequences, we examine the log-odd ratio γ_l before the sigmoid activation in Equation 8, since the β values will all be close to 1 and comparing γ is more informative: γ_l := log(β_l / (1 − β_l)). We measure four quantities: 1) γ_{l*}, the log-odd ratio if the model only attends to the keyword position; 2) γ_{l*+1}, one position after the keyword; 3) γ̄, the average log-odd ratio if the attention weights are uniform; and 4) γ_L, the log-odd ratio if the model attends to the last hidden state. If γ_l only contains information about word x_l, we should expect:

$$\text{Hypothesis 1}: \quad \gamma_{l^*} \gg \bar{\gamma} \gg \gamma_L \approx \gamma_{l^*+1}. \tag{22}$$
However, if we accept the conventional wisdom that hidden states contain information about nearby words (Khandelwal et al., 2018), we should expect:

$$\text{Hypothesis 2}: \quad \gamma_{l^*} \gg \gamma_{l^*+1} \gg \bar{\gamma} \approx \gamma_L. \tag{23}$$
To verify these hypotheses, we plot how γ_{l*}, γ_{l*+1}, γ̄, and γ_L evolve as training proceeds in Figure 7. Hypothesis 2 is indeed true when training starts; however, we find the following to be true asymptotically:

$$\text{Observation 3}: \quad \gamma_L \gg \gamma_{l^*+1} \gg \bar{\gamma} \approx \gamma_{l^*}, \tag{24}$$
which is wildly different from Hypothesis 2. If we train under uniform attention weights for too long, the information about keywords can flow freely to other, non-local hidden states.
Attention Weights are Free Variables
In Section 2.1 we assumed that attention weights α behave like free variables that can assign arbitrarily high probabilities to positions with larger β. However, α is produced by a model, and sometimes learning the correct α can be challenging. Let π be a random permutation of the integers from 1 to 40, and suppose we want to learn the function f that permutes the input with π:

$$f([x_1, x_2, \ldots, x_{40}]) := [x_{\pi(1)}, x_{\pi(2)}, \ldots, x_{\pi(40)}]. \tag{25}$$

Inputs x are randomly sampled from a vocab of size 60 as in Section 3.3. Even though β^uf behaves exactly the same for these two tasks, sequence copying is much easier to learn than the permutation function: while the model always reaches perfect accuracy in the former setting within 300 iterations, it always fails in the latter. The LSTM has a built-in inductive bias towards learning monotonic attention.
Conclusions and Future Directions
Our work tries to open the black box of attention training. Early on in training, LSTM attention models first learn how to translate individual words from bag-of-words co-occurrence statistics, which then drives the learning of the attention. Our framework explains why attention weights obtained by standard training often correlate with saliency, and how multi-head attention can increase performance by improving the training dynamics rather than expressiveness. These phenomena cannot be explained if we treated training as a black box. Increasingly many theoretical deep learning papers study the optimization trajectory, since many important properties of neural networks are determined by what happens at training time (Jacot et al., 2018; Du et al., 2018; Şimşekli et al., 2019). However, it is hard to extract useful intuitions for practitioners from these results in abstract high-dimensional parameter space. In contrast, the NLP community takes another path and mostly interprets models using intuitive concepts (Andreas and Klein, 2017; Strobelt et al., 2018; Hewitt and Liang, 2019), while relatively few works look at the training dynamics. We look forward to more future work that can qualitatively predict training dynamics using intuitive concepts by formally reasoning about the optimization trajectory.
We present a new framework for understanding and predicting the behaviors of an existing technology: the attention mechanism in recurrent neural networks. We do not propose any new technologies or any new datasets that could directly raise ethical questions. However, it is useful to keep in mind that our framework is far from solving the question of neural network interpretability, and should not be interpreted as ground truth in high-stakes domains like medicine or recidivism. We are aware of and very explicit about the limitations of our framework, which we made clear in Section 6.
Reasons to Reject
Almost all reviewers are worried that the assumption "lexical information stays local" is "wrong". For example, Jain and Wallace (2019), Serrano and Smith (2019b), Wiegreffe and Pinter (2019b), and Pruthi et al. (2020) all show that information flows across hidden states. I completely agree with the fact that information diffuses across positions, and in fact I reported the same fact in one of my own prior works (Zhong et al., 2019). Nevertheless, an assumption being wrong does not mean that we should not apply it. For example, according to modern physics, it can be argued that friction does not "exist", since it is only an aggregation of electromagnetic forces at a microscopic level. However, the concept of "friction" is still extremely useful in engineering domains and has successfully contributed to important technological progress. From an instrumentalist view, a scientific theory should ultimately be benchmarked by its ability to explain and predict (unknown) empirical phenomena, instead of whether it is literally true or not.
We demonstrated the predictive power of our theoretical framework in Sections 3.2, 3.3, and 5, and it is up to the readers to decide whether our approximation is accurate enough. Anecdotally, before running the experiments I expected the model to always be able to learn permutation copying in Section 3.3, since the task looks extremely simple. However, our theory predicts otherwise, and the empirical result indeed agrees with the theoretical prediction. The result indeed surprised me at the time of experimentation (in March 2020).
A Appendices
A.1 Heuristic that α Attends to Larger β

It is a heuristic rather than a rigorous theorem that attention α is attracted to larger β. There are two reasons. First, there is a non-linear layer after averaging the hidden states, which can interact in an arbitrarily complex way and break this heuristic. Second, even if there are no non-linear operations after hidden state aggregation, the optimal attention that minimizes the loss does not necessarily assign any probability to the position with the largest β value when there are more than two output vocabs. Specifically, we consider the following model:
$$p_t = \sigma\left(W_c \sum_{l=1}^{L} \alpha_{t,l} h_l + W_s s_t\right) = \sigma\left(\sum_{l=1}^{L} \alpha_{t,l} \gamma_l + \gamma_s\right), \tag{26}$$
where W_c and W_s are learnable weights, and γ is defined as:

$$\gamma_l := W_c h_l; \qquad \gamma_s := W_s s_t \;\Rightarrow\; \beta_{t,l} = \sigma(\gamma_l + \gamma_s)_{y_t}. \tag{27}$$
Consider the following scenario, in which the model outputs a probability distribution p over 3 output vocabs and γ_s is set to 0:
$$p = \sigma(\alpha_1 \gamma_1 + \alpha_2 \gamma_2 + \alpha_3 \gamma_3), \tag{28}$$
where γ_{l=1,2,3} ∈ ℝ^{|O|=3} are the logits, α is a valid attention probability distribution, σ is the softmax, and p is the probability distribution produced by this model. Suppose

$$\gamma_1 = [0, 0, 0], \quad \gamma_2 = [0, -10, 5], \quad \gamma_3 = [0, 5, -10], \tag{29}$$

and the correct output is the first output vocab (i.e., the first dimension). We take the softmax of each γ_l and consider the first dimension:
$$\beta_{l=1} = \frac{1}{3} > \beta_{l=2} = \beta_{l=3} \approx e^{-5}. \tag{30}$$
We calculate the "optimal α", α^opt: the optimal attention weights that maximize the correct output word probability p₀ and minimize the loss. We find that α^opt_2 = α^opt_3 = 0.5, while α^opt_1 = 0. In this example, the optimal attention assigns 0 weight to the position l with the highest β_l.
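The claim can be checked numerically. The following sketch sweeps α over a grid on the probability simplex for the logits in Equation 29 and confirms that the loss-minimizing attention puts zero weight on position 1, even though β₁ is the largest; the grid resolution is our assumption.

```python
# A numeric check of the counterexample above: sweep alpha over the
# probability simplex and find the attention maximizing the probability
# of the correct (first) output vocab.
import numpy as np

g = np.array([[0., 0., 0.], [0., -10., 5.], [0., 5., -10.]])  # Eq. 29

best_p0, best_alpha = -1.0, None
for a1 in np.linspace(0, 1, 101):
    for a2 in np.linspace(0, 1 - a1, 101):
        alpha = np.array([a1, a2, 1 - a1 - a2])
        z = alpha @ g
        p0 = np.exp(z[0]) / np.exp(z).sum()  # prob of the correct vocab
        if p0 > best_p0:
            best_p0, best_alpha = p0, alpha
print(best_alpha)  # approximately [0.0, 0.5, 0.5]
```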
Fortunately, such pathological examples rarely occur in real datasets, and the optimal α is usually attracted to positions with higher β. We empirically verify this for the following variant of the machine translation model on Multi30K.
As before, we obtain the context vector c_t. Instead of concatenating c_t and s_t and passing the result into a non-linear neural network N, we add them and apply a linear layer followed by a softmax to obtain the output word probability distribution:
$$p_t = \sigma(W (c_t + s_t)). \tag{31}$$
This model is desirable because we can provably find the optimal α using gradient descent (we defer the proof to the end of this subsection). Additionally, this model has performance comparable to the variant from our main paper (Section 2.1), achieving a 38.2 BLEU score vs. 37.9 for the model in our main paper. We use α^opt to denote the attention that minimizes the loss, and we find that A(α^opt, β) = 0.53: β does strongly agree with α^opt. Now we are left to show that we can use gradient descent to find the optimal attention weights that minimize the loss. We can rewrite p_t as
$$p_t = \sigma\left(\sum_{l=1}^{L} \alpha_l W h_l + W s_t\right). \tag{32}$$
We define

$$\gamma_l := W h_l; \qquad \gamma_s := W s_t. \tag{33}$$
Without loss of generality, suppose the first dimensions of γ_{1...L} and γ_s are all 0, and the correct token whose probability we want to maximize is the first dimension; then the loss for the output word is
$$\mathcal{L} = \log(1 + g(\alpha)), \tag{34}$$
where

$$g(\alpha) := \sum_{o \in O,\, o \neq 0} e^{\alpha^T \gamma_o + \gamma_{s,o}}, \tag{35}$$

with

$$\gamma_o = [\gamma_{1,o}, \ldots, \gamma_{l,o}, \ldots, \gamma_{L,o}] \in \mathbb{R}^L. \tag{36}$$
Since α is defined within the convex probability simplex and g(α) is convex with respect to α, the global optimum α^opt can be found by gradient descent.
A.2 Calculating ∂θ_{i,o}/∂τ
We drop the px superscript of θ to keep the notation uncluttered. We copy the loss function here to remind the reader:
$$\mathcal{L} = -\sum_m \sum_{t=1}^{T^m} \log\left(\sigma\left(\frac{1}{L^m}\sum_{l=1}^{L^m} W e_{x^m_l}\right)_{y^m_t}\right), \tag{37}$$
and since we optimize W and e with gradient flow,

$$\frac{\partial W}{\partial \tau} := -\frac{\partial \mathcal{L}}{\partial W}; \qquad \frac{\partial e}{\partial \tau} := -\frac{\partial \mathcal{L}}{\partial e}. \tag{38}$$
We first define the un-normalized logits γ̃ and then take the softmax:

$$\tilde{\gamma}_{i,o} = W_o^T e_i, \tag{39}$$

so that

$$\frac{\partial \tilde{\gamma}_{i,o}}{\partial \tau} = W_o^T \frac{\partial e_i}{\partial \tau} + \left(\frac{\partial W_o}{\partial \tau}\right)^T e_i = -W_o^T \frac{\partial \mathcal{L}}{\partial e_i} - \left(\frac{\partial \mathcal{L}}{\partial W_o}\right)^T e_i. \tag{40}$$
We first analyze the first term, collecting its entries into ε ∈ ℝ^{|I|×|O|} with ε_{i,o} := −W_o^T (∂L/∂e_i). Since differentiation and left multiplication by W are linear, we analyze each individual loss term in Equation 37 and then sum them up. We define
$$p^m := \sigma\left(\frac{1}{L^m}\sum_{l=1}^{L^m} W e_{x^m_l}\right) \tag{41}$$
and

$$\mathcal{L}^m_t := -\log\left(p^m_{y^m_t}\right); \qquad \varepsilon^m_{t,i,o} := -W_o^T \frac{\partial \mathcal{L}^m_t}{\partial e_i}. \tag{42}$$
Hence,

$$\mathcal{L} = \sum_m \sum_{t=1}^{T^m} \mathcal{L}^m_t; \qquad \varepsilon_{i,o} = \sum_m \sum_{t=1}^{T^m} \varepsilon^m_{t,i,o}. \tag{43}$$

Furthermore,

$$-\frac{\partial \mathcal{L}^m_t}{\partial e_i} = \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x^m_l = i]\left(W_{y^m_t} - \sum_{o=1}^{|O|} p^m_o W_o\right). \tag{44}$$

Hence,

$$\varepsilon^m_{t,i,y^m_t} = -W_{y^m_t}^T \frac{\partial \mathcal{L}^m_t}{\partial e_i} = \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x^m_l = i]\left(\|W_{y^m_t}\|_2^2 - \sum_{o'=1}^{|O|} p^m_{o'}\, W_{y^m_t}^T W_{o'}\right), \tag{45}$$

while for o ≠ y^m_t,

$$\varepsilon^m_{t,i,o} = -W_o^T \frac{\partial \mathcal{L}^m_t}{\partial e_i} = \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x^m_l = i]\left(W_o^T W_{y^m_t} - \sum_{o'=1}^{|O|} p^m_{o'}\, W_o^T W_{o'}\right). \tag{46}$$
If the W_o and e_i are each sampled i.i.d. from N(0, I_d/d), then by the central limit theorem:

$$\forall o \neq o', \quad \sqrt{d}\, W_o^T W_{o'} \xrightarrow{p} N(0, 1), \tag{47}$$

$$\forall o, i, \quad \sqrt{d}\, W_o^T e_i \xrightarrow{p} N(0, 1), \tag{48}$$

and, for all o,

$$\sqrt{d}\left(\|W_o\|_2^2 - 1\right) \xrightarrow{p} N(0, 2). \tag{49}$$
Therefore, when τ = 0,

$$\lim_{d\to\infty} \varepsilon^m_{t,i,o} \xrightarrow{p} \frac{1}{L^m}\sum_{l=1}^{L^m} \mathbb{1}[x^m_l = i]\left(\mathbb{1}[y^m_t = o] - \frac{1}{|O|}\right). \tag{50}$$
Summing over all the ε^m_{t,i,o} terms, we have

$$\varepsilon_{i,o} = C_{i,o} - \frac{1}{|O|}\sum_{o'} C_{i,o'}, \tag{51}$$
where C is defined as

$$C_{i,o} := \sum_m \sum_{l=1}^{L^m} \sum_{t=1}^{T^m} \frac{1}{L^m}\, \mathbb{1}[x^m_l = i]\, \mathbb{1}[y^m_t = o]. \tag{52}$$
We find that the second term, −(∂L/∂W_o)^T e_i, converges to exactly the same value. Hence

$$\frac{\partial \tilde{\gamma}_{i,o}}{\partial \tau} = 2\left(C_{i,o} - \frac{1}{|O|}\sum_{o'} C_{i,o'}\right). \tag{53}$$

Since lim_{d→∞} θ(τ = 0) →ᵖ (1/|O|) · 1_{|I|×|O|}, by the chain rule,

$$\lim_{d\to\infty} \frac{\partial \theta_{i,o}}{\partial \tau}(\tau = 0) \xrightarrow{p} 2\left(C_{i,o} - \frac{1}{|O|}\sum_{o' \in O} C_{i,o'}\right). \tag{54}$$
A.3 Mixture of Permutations
For this experiment, each input is either a random permutation of the set {1 ... 40} or a random permutation of the set {41 ... 80}. The proxy model can easily learn whether the input words are all less than 40 and decide whether the output words should all be less than 40. However, β^px is still the same for every position; as a result, the attention, and hence the model, fails to learn. The count table C can be seen in Figure 8. Alignment α has no preference over any of these words, since the probabilities are uniform over the input words "A", "B", "C", "D".
A.4 Additional Tables for Completeness
We report several variants of Table 2. We chose to use token accuracy to contextualize the agreement metric in the main paper, because errors would accumulate much more if we used a not-fully-trained model to auto-regressively generate output words.
• Table 3 contains the same results as Table 2, except that its agreement score A(u, v) is now the Kendall's τ rank correlation coefficient, which is a more popular metric.

• Table 5 contains the same results as Table 2, except that results are now reported to two decimal places.

• Table 7 contains the same results as Table 2, except that the statistics are calculated over the training set rather than the validation set.

• Table 4, Table 6, and Table 8 contain the translation results from the above 3 tables respectively, except that p̂ is defined as the BLEU score rather than token accuracy, and hence the contextualized metric interpretation ξ changes correspondingly.
A.5 Dataset Description
We summarize the datasets that we use for classification and machine translation. See Table 9 for details on train/test splits and median sequence lengths for each dataset.

IMDB Sentiment Analysis (Maas et al., 2011): A sentiment analysis dataset with 50,000 (25,000 train and 25,000 test) IMDB movie reviews and their corresponding positive or negative sentiment.

AG News Corpus (Zhang et al., 2015): 120,000 news articles and their corresponding topic (world, sports, business, or science/tech). We classify between the world and business articles.
20 Newsgroups⁵: A news dataset containing around 18,000 newsgroups articles split between 20 different labeled categories. We classify between baseball and hockey articles.
Stanford Sentiment Treebank (Socher et al., 2013): A dataset for classifying the sentiment of movie reviews, labeled on a scale from 1 (negative) to 5 (positive). We remove all movies labeled as 3, and classify between {4, 5} and {1, 2}.
Multi Domain Sentiment Dataset⁶: Approximately 40,000 Amazon reviews from various product categories labeled with a corresponding positive or negative label. Since some of the sequences are particularly long, we only use sequences of length less than 400 words.

Yelp Open Data Set⁷: 20,000 Yelp reviews and their corresponding star ratings from 1 to 5. We classify between reviews with rating ≤ 2 and ≥ 4.
Multi-30k (Elliott et al., 2016): English to German translation. The data is from translated image captions.
IWSLT '14 (Cettolo et al., 2015): German to English translation. The data is from translated TED talk transcriptions.
News Commentary v14 (Cettolo et al., 2015): A collection of news commentary translation datasets in different languages from WMT19⁸. We use the following translation splits: English-Dutch (En-Nl), English-Portuguese (En-Pt), and Italian-Portuguese (It-Pt). In pre-processing this dataset, we removed all purely numerical examples.
A.6 α Fails When β is Frozen
For each classification task, we initialize a random model and freeze all parameters except for the attention layer (the frozen-β model). We then compute the correlation between this trained attention (defined as α^fr) and the normal attention α. Table 10 reports this correlation at the iteration where α^fr is most correlated with α on the validation set. As shown in Table 10, the left column is consistently lower than the right column. This indicates that the model can learn output relevance without attention, but not vice versa.
A.7 Training β^uf
We find that A(α, β^uf(τ)) first increases and then decreases as training proceeds (i.e., as τ increases), so we report the maximum agreement over the course of training in Table 2. Since this trend is consistent across all datasets, our choice minimally inflates the agreement measure, and is comparable to the practice of reporting dev set results. As discussed in Section 6.1, training under uniform attention for too long might bring unintuitive results.

Table 10: We report the correlation between α^fr and α on classification datasets, and compare it against A(α, β^uf), the same column defined in Table 2 (see Section A.6).

Translation: We use a bidirectional two-layer LSTM of dimension 256 to encode the source and use the last hidden state h_L as the first hidden state of the decoder. The attention and outputs are then calculated as described in Section 2. The learnable neural network before the outputs mentioned in Section 2 is a one-hidden-layer model with a ReLU non-linearity; the hidden layer has dimension 256. Our model contains 6,132,544 parameters excluding embeddings and 8,180,544 including embeddings on all datasets.
Permutation Copying
We use a unidirectional single-layer LSTM with hidden dimension 256 for both the encoder and the decoder.
Classification Procedure: For all classification datasets we used a batch size of 32. We trained for 4000 iterations on each dataset. For each dataset we trained on the pre-defined training set if the dataset had one. Additionally, if a dataset had a predefined test set, we randomly sampled at most 4000 examples from this test set for validation. Specific dataset split sizes are given in Table 9.
Classification Evaluation We evaluated each model at steps 0, 10, 50, 100, 150, 200, 250, and then every 250 iterations after that.
Classification Tokenization We tokenized the data at the word level. We mapped all words occurring less than 3 times in the training set to <unk>. For 20 Newsgroups and AG News we mapped all non-single digit integer "words" to <unk>. For 20 Newsgroups we also split words with the "_" character.
Classification Training: We trained all classification models on a single GPU. Some datasets took slightly longer to train than others (largely depending on average sequence length), but each training run took at most 45 minutes.
Translation Hyperparameters: For translation, all hidden states in the model have dimension 256. We use the sequence-to-sequence architecture described above. The LSTMs use dropout 0.5.
Translation Procedure For all translation tasks we used batch size 16 when training. For IWSLT'14 and Multi-30k we used the provided dataset splits. For the News Commentary v14 datasets we did a 90-10 split of the data for training and validation respectively.
Translation Evaluation We evaluated each model at steps 0, 50, 100, 500, 1000, 1500, and then every 2000 iterations after that.
Translation Training
We trained all translation models on a single GPU. IWSLT'14 and the News Commentary datasets took approximately 5-6 hours to train, and Multi-30k took closer to 1 hour to train.
Translation Tokenization: We tokenized both translation datasets using the SentencePiece tokenizer trained on the corresponding train set with a vocab size of 8,000. We used a single tokenization for source and target tokens, and accordingly also used the same matrix of embeddings for target and source sequences.
A.9 A Note On SMS Dataset
In addition to the classification datasets reported in the tables, we also ran experiments on the SMS Spam Collection V.1 dataset⁹. The attention learned from this dataset had very high variance, so two different random seeds would consistently produce attentions that did not correlate much. The dataset itself was also a bit of an outlier: it had shorter sequence lengths than any of the other datasets (median sequence length 13 on the train and validation sets), it had the smallest training set of all our datasets (3,500 examples), and it had by far the smallest vocab (4,691 unique tokens). We decided not to include this dataset in the main paper due to these unusual results and leave further exploration to future work.
A.10 Logistic Regression Proxy Model
Our proxy model can be shown to be equivalent to a bag-of-words logistic regression model in the classification case. Specifically, we define a bag-of-words logistic regression model to be:
$$\forall t, \quad p_t = \sigma(\beta^{log} x), \tag{55}$$
where x ∈ ℝ^{|I|}, β^{log} ∈ ℝ^{|O|×|I|}, and σ is the softmax function. The entries of x are the number of times each word occurs in the input sequence, normalized by the sequence length, and β^{log} is learned. This is equivalent to:
$$\forall t, \quad p_t = \sigma\left(\frac{1}{L}\sum_{l=1}^{L} \beta^{log}_{x_l}\right). \tag{56}$$
Here β^{log}_i indicates the i-th column of β^{log}; these are the entries of β^{log} corresponding to predictions for the i-th word in the vocab. Now it is easy to arrive at the equivalence between logistic regression and our proxy model. If we restrict the rank of β^{log} to at most min(d, |O|, |I|) by factoring it as β^{log} = W E, where W ∈ ℝ^{|O|×d} and E ∈ ℝ^{d×|I|}, then the logistic regression looks like:
$$\forall t, \quad p_t = \sigma\left(\frac{1}{L}\sum_{l=1}^{L} W E_{x_l}\right), \tag{57}$$
which is equivalent to our proxy model:

$$\forall t, \quad p_t = \sigma\left(\frac{1}{L}\sum_{l=1}^{L} W e_{x_l}\right).$$
Since d = 256 for the proxy model, which is larger than |O| = 2 in the classification case, the proxy model is not rank-limited and is hence fully equivalent to the logistic regression model. Therefore β^px can be interpreted as "keywords" in the same way that the logistic regression weights can.
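The rank-factoring argument can be checked numerically. The following sketch confirms that a bag-of-words logistic regression with weights W E applied to normalized counts equals the proxy model's averaged-embedding forward pass; the sizes and random weights are our illustrative assumptions.

```python
# A numeric sketch of the rank-factoring argument: logistic regression
# with weights W @ E on normalized counts equals the proxy's averaged-
# embedding forward pass. Sizes are illustrative assumptions.
import torch
import torch.nn.functional as F

d, n_in, n_out = 256, 1000, 2
W, E = torch.randn(n_out, d), torch.randn(d, n_in)

ids = torch.randint(0, n_in, (40,))                       # one input sequence
x = torch.bincount(ids, minlength=n_in).float() / len(ids)

p_logreg = F.softmax(W @ E @ x, dim=0)                    # Eq. 57 form
p_proxy = F.softmax(W @ E[:, ids].mean(dim=1), dim=0)     # proxy form
print(torch.allclose(p_logreg, p_proxy, atol=1e-5))       # True
```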
To empirically verify this equivalence, we trained a logistic regression model with ℓ2 regularization on each of our classification datasets. To pick the optimal regularization level, we swept regularization coefficients across ten orders of magnitude and picked the one with the best validation accuracy. We report results for A(β^uf, β^log) in comparison to A(β^uf, β^px) in Table 11.¹⁰
Note that these numbers are similar but not exactly equivalent. The reason is that the proxy model did not use ℓ2 regularization, while the logistic regression did.
¹⁰ These numbers were obtained from a retrain of all the models in the main table, so, for instance, the LSTM model used to produce β^uf might not be exactly the same as the one used for the results in all the other tables due to random seed differences.

Table 11: We report A(β^uf, β^log) to demonstrate its effective equivalence to A(β^uf, β^px). These values are not exactly the same due to differences in regularization strategies.
Figure 2: We find the smallest training step τ₀ where A(α, α̂(τ₀)) > A(u, v), and define ξ(u, v) := p̂(τ₀).
Figure 3: Each curve represents accuracy on the test distribution vs. number of training steps for different random seeds (20 each). When trained on a distribution of permutations of 40 vocabs (red) (left) or a mixture of permutations (right), the model sometimes fails to learn and converges more slowly.
Figure 4: The co-occurrence table C is non-informative under a distribution of permutations: Trans(C′|A) = Trans(C′|B) = Trans(C′|C) = Trans(C′|D) = .25, so the alignment α has no preference over any of these words, since the probabilities are uniform over the input words "A", "B", "C", "D". Therefore, this distribution is hard for the attention-based model to learn.

Figure 5: Input: "bad movie"; Label: Negative. Trans(positive|great) = 1, Trans(positive|bad) = 0, Trans(positive|movie) = 0.5; the alignment α is attracted to the word "great", since 1 is the largest. The classical model first learns word polarity, which later attracts attention.
Figure 7: When training begins, Hypothesis 2 (Equation 23) is true; however, asymptotically, Observation 3 (Equation 24) is true.
Figure 8: The training distribution mixes random permutations of two disjoint sets of words (left and right, respectively). From the count table, β^px could learn that the set of input words {A, B, C, D} corresponds to the set of output words {A′, B′, C′, D′}, but its β value for each input position is still uniformly 0.25.
Table 1: Intuitions for each abstract symbol (some occur later in the paper). The first group is model activations/gradients, the second metrics, and the third others.
References

Jacob Andreas and Dan Klein. 2017. Analogs of linguistic structure in deep representations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2893-2897, Copenhagen, Denmark. Association for Computational Linguistics.

Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.

Peter F. Brown, Stephen A. Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.

Gino Brunner, Yang Liu, Damian Pascual, Oliver Richter, Massimiliano Ciaramita, and Roger Wattenhofer. 2019. On identifiability in transformers. In International Conference on Learning Representations.

Mauro Cettolo, Jan Niehues, Sebastian Stüker, Luisa Bentivogli, and Marcello Federico. 2015. Report on the 11th IWSLT evaluation campaign, IWSLT 2014.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Simon S. Du, Wei Hu, and Jason D. Lee. 2018. Algorithmic regularization in learning deep homogeneous models: Layers are automatically balanced. In Advances in Neural Information Processing Systems, pages 384-395.

Javid Ebrahimi, Anyi Rao, Daniel Lowd, and Dejing Dou. 2018. HotFlip: White-box adversarial examples for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 31-36, Melbourne, Australia. Association for Computational Linguistics.

Desmond Elliott, Stella Frank, Khalil Sima'an, and Lucia Specia. 2016. Multi30K: Multilingual English-German image descriptions. In Proceedings of the 5th Workshop on Vision and Language, pages 70-74, Berlin, Germany. Association for Computational Linguistics.

John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733-2743.

Arthur Jacot, Franck Gabriel, and Clément Hongler. 2018. Neural tangent kernel: Convergence and generalization in neural networks. In Advances in Neural Information Processing Systems, pages 8571-8580.

Sarthak Jain and Byron C. Wallace. 2019. Attention is not explanation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3543-3556, Minneapolis, Minnesota. Association for Computational Linguistics.

Urvashi Khandelwal, He He, Peng Qi, and Dan Jurafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 284-294, Melbourne, Australia. Association for Computational Linguistics.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142-150, Portland, Oregon, USA. Association for Computational Linguistics.

Paul Michel, Omer Levy, and Graham Neubig. 2019. Are sixteen heads really better than one? In Advances in Neural Information Processing Systems, pages 14014-14024.

Danish Pruthi, Mansi Gupta, Bhuwan Dhingra, Graham Neubig, and Zachary C. Lipton. 2020. Learning to deceive with attention-based explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4782-4793, Online. Association for Computational Linguistics.

Sofia Serrano and Noah A. Smith. 2019a. Is attention interpretable? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 2931-2951, Florence, Italy. Association for Computational Linguistics.

Sofia Serrano and Noah A. Smith. 2019b. Is attention interpretable? arXiv preprint arXiv:1906.03731.

Umut Şimşekli, Levent Sagun, and Mert Gurbuzbalaban. 2019. A tail-index analysis of stochastic gradient noise in deep neural networks. In Proceedings of the 36th International Conference on Machine Learning (ICML 2019).

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.

Hendrik Strobelt, Sebastian Gehrmann, Michael Behrisch, Adam Perer, Hanspeter Pfister, and Alexander M. Rush. 2018. Seq2Seq-Vis: A visual debugging tool for sequence-to-sequence models. IEEE Transactions on Visualization and Computer Graphics, 25(1):353-363.

Xiaobing Sun and Wei Lu. 2020. Understanding attention for text classification. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3418-3428, Online. Association for Computational Linguistics.

Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, and Manaal Faruqui. 2019. Attention interpretability across NLP tasks. arXiv preprint arXiv:1909.11218.

Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.

Sarah Wiegreffe and Yuval Pinter. 2019a. Attention is not not explanation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 11-20, Hong Kong, China. Association for Computational Linguistics.

Sarah Wiegreffe and Yuval Pinter. 2019b. Attention is not not explanation. arXiv preprint arXiv:1908.04626.

Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649-657.

Ruiqi Zhong, Steven Shao, and Kathleen McKeown. 2019. Fine-grained sentiment analysis with faithful attention. arXiv preprint arXiv:1908.06870.
Table 3: Table 2 except with agreement defined by Kendall Tau. Section A.4.

Task | A(α, β^uf) | A(β^uf, β^px) | A(∆, β^uf) | Â
Muti30k | 8.68 | 27.54 | 4.24 | 0.00
IWSLT14 | 8.64 | 22.56 | 2.72 | 0.00
News It-Pt | 4.82 | 17.16 | 1.63 | 0.00
News En-Nl | 4.53 | 20.35 | 2.08 | 0.00
News En-Pt | 4.41 | 18.20 | 2.05 | 0.00

Task | ξ(α, β^uf) | ξ(β^uf, β^px) | ξ(∆, β^uf) | ξ*
Muti30k | 1.99 | 6.91 | 1.99 | 37.89
IWSLT14 | 5.38 | 5.31 | 5.38 | 32.95
News It-Pt | 0.09 | 0.55 | 0.04 | 24.71
News En-Nl | 0.01 | 0.94 | 0.01 | 29.42
News En-Pt | 0.01 | 0.22 | 0.01 | 37.04
Table 4: Translation results from Table 3, except with performance measured by BLEU rather than token accuracy. Section A.4.
Table 5: Table 2 with results given to 2 decimal places. Section A.4.

Task | A(α, β^uf) | A(β^uf, β^px) | A(∆, β^uf) | A(α, β) | Â
Muti30k | 30.77 | 34.43 | 27.24 | 48.70 | 7.19
IWSLT14 | 35.75 | 39.09 | 27.69 | 55.25 | 6.52
News It-Pt | 29.13 | 38.62 | 25.45 | 52.48 | 6.17
News En-Nl | 35.53 | 41.72 | 29.15 | 60.15 | 6.35
News En-Pt | 35.77 | 37.37 | 30.37 | 64.94 | 6.34

Task | ξ(α, β^uf) | ξ(β^uf, β^px) | ξ(∆, β^uf) | ξ(α, β) | ξ*
Muti30k | 11.43 | 11.43 | 11.43 | 16.41 | 37.89
IWSLT14 | 6.71 | 6.71 | 5.31 | 9.89 | 32.95
News It-Pt | 1.29 | 2.16 | 1.29 | 2.16 | 24.71
News En-Nl | 0.94 | 2.39 | 0.94 | 4.12 | 29.42
News En-Pt | 0.74 | 0.74 | 0.74 | 4.28 | 37.04
Table 6: Translation results from Table 5, except with performance measured by BLEU rather than token accuracy. Section A.4.

Task | A(α, β^uf) | A(β^uf, β^px) | A(∆, β^uf) | A(α, β) | Â
IMDB | 51.52 | 80.10 | 42.85 | 64.88 | 5.29
Yelp | 11.15 | 76.12 | 55.50 | 37.63 | 5.85
AG News | 36.97 | 53.95 | 43.11 | 46.89 | 6.17
20 NG | 72.36 | 38.69 | 71.73 | 69.47 | 5.32
SST | 21.82 | 29.35 | 20.48 | 28.50 | 8.48
Amzn | 51.95 | 77.18 | 40.15 | 61.78 | 5.91
Muti30k | 32.89 | 34.67 | 28.36 | 56.39 | 7.21
IWSLT14 | 36.61 | 38.95 | 28.37 | 57.71 | 6.52
News It-Pt | 31.03 | 38.70 | 27.11 | 64.81 | 6.15
News En-Nl | 37.86 | 41.91 | 31.11 | 67.68 | 6.39
News En-Pt | 37.43 | 37.23 | 31.76 | 71.96 | 6.35

Task | ξ(α, β^uf) | ξ(β^uf, β^px) | ξ(∆, β^uf) | ξ(α, β) | ξ*
IMDB | 90.40 | 99.95* | 90.40 | 95.01 | 99.95
Yelp | 75.61 | 96.54 | 96.19 | 94.44 | 98.22
AG News | 93.57 | 98.42* | 94.63 | 95.54 | 98.42
20 NG | 100.00 | 65.40 | 100.00 | 100.0 | 100.00
SST | 97.72 | 100.00* | 84.11 | 100.0* | 100.00
Amzn | 87.96 | 99.58* | 80.98 | 91.09 | 99.58
Muti30k | 43.27 | 43.27 | 43.27 | 51.97 | 80.76
IWSLT14 | 35.94 | 35.94 | 35.94 | 44.18 | 71.18
News It-Pt | 22.69 | 25.96 | 22.69 | 39.98 | 77.10
News En-Nl | 18.85 | 23.56 | 18.85 | 40.09 | 74.49
News En-Pt | 19.33 | 19.33 | 19.33 | 42.41 | 77.97
Table 7: Table 2, except with correlations and performance metrics computed over the training set instead of the validation set. Section A.4.

Task | A(α, β^uf) | A(β^uf, β^px) | A(∆, β^uf) | Â
Muti30k | 32.89 | 34.67 | 28.36 | 7.16
IWSLT14 | 36.61 | 38.95 | 28.37 | 6.54
News It-Pt | 31.03 | 38.70 | 27.11 | 6.17
News En-Nl | 37.86 | 41.91 | 31.11 | 6.38
News En-Pt | 37.43 | 37.23 | 31.76 | 6.37

Task | ξ(α, β^uf) | ξ(β^uf, β^px) | ξ(∆, β^uf) | ξ*
Muti30k | 11.87 | 11.87 | 11.87 | 52.28
IWSLT14 | 6.82 | 6.82 | 6.82 | 36.23
News It-Pt | 1.30 | 2.30 | 1.30 | 42.40
News En-Nl | 1.11 | 2.29 | 1.11 | 39.40
News En-Pt | 0.83 | 0.83 | 0.83 | 46.57
Table 8: Translation results from Table 7, except with performance measured by BLEU rather than token accuracy. Section A.4.
Classification: We use token embeddings where they aligned with our vocabulary. The sequences are encoded with a 1-layer bidirectional LSTM of dimension 256. The rest of the model, including the attention mechanism, is exactly as described in Section 2.4. Our model has 1,274,882 parameters excluding embeddings. Since each classification set has a different vocab size, each model has a slightly different parameter count when considering embeddings: 19,376,282 for IMDB, 10,594,382 for AG News, 5,021,282 for 20 Newsgroups, 4,581,482 for SST, 13,685,282 for Yelp, 12,407,882 for Amazon, and 2,682,182 for SMS.
Task | A(β^uf, β^px) | A(β^uf, β^log)
IMDB | 0.81 | 0.84
Yelp | 0.74 | 0.76
AG News | 0.57 | 0.58
20 NG | 0.40 | 0.45
SST | 0.39 | 0.46
Amzn | 0.53 | 0.60
¹ This statement is heuristic rather than rigorous. See Appendix A.1 for a counterexample.
² Appendix Tables 6, 4, and 8 include results for BLEU.
⁵ http://qwone.com/~jason/20Newsgroups/
⁶ https://www.cs.jhu.edu/~mdredze/datasets/sentiment/
⁷ http://www.dt.fee.unicamp.br/~tiago/smsspamcollection/
Introduction
Grammars and lexicons represent important linguistic resources for many NLP applications, among which one may cite dialog systems, automatic summarization or machine translation. Developing such resources is known to be a complex task that needs useful tools such as parsers and generators (Erbach, 1992).
Furthermore, there is a lack of a common framework allowing for multi-formalism grammar engineering. Thus, many formalisms have been proposed to model natural language, each coming with specific implementations. Having a common framework would facilitate the comparison c 2008.
Licensed under the Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported license (http://creativecommons.org/licenses/by-nc-sa/3.0/). Some rights reserved. between formalisms (e.g., in terms of parsing complexity in practice), and would allow for a better sharing of resources (e.g., having a common lexicon, from which different features would be extracted depending on the target formalism).
In this context, we present a parsing environment relying on a general architecture that can be used for parsing with mildly context-sensitive (MCS) formalisms 1 (Joshi, 1987). Its underlying idea is to use Range Concatenation Grammar (RCG) as a pivot formalism, for RCG has been shown to strictly include MCS languages while being parsable in polynomial time (Boullier, 2000).
Currently, this architecture supports tree-based grammars (Tree-Adjoining Grammars and Multi-Component Tree-Adjoining Grammars with Tree Tuples (Lichte, 2007)). More precisely, tree-based grammars are first converted into equivalent RCGs, which are then used for parsing. The result of RCG parsing is finally interpreted to extract a derivation structure for the input grammar, as well as to perform additional processing (e.g., semantic calculus, extraction of dependency views).
The paper is structured as follows. In section 2, we present the architecture of the TuLiPA parsing environment and show how the use of RCG as a pivot formalism makes it easier to design a modular system that can be extended to support several dimensions (syntax, semantics) and/or formalisms. In section 3, we give some desiderata for grammar engineering and present TuLiPA's current state with respect to these. In section 4, we compare this system with existing approaches for parsing and more generally for grammar engineering. Finally, in section 5, we conclude by presenting future work.
Footnote 1: A formalism is said to be mildly context-sensitive (MCS) iff (i) it generates limited cross-serial dependencies, (ii) it is polynomially parsable, and (iii) the string languages generated by the formalism have the constant growth property (e.g., {a^(2^n) | n ≥ 0} does not have this property). Examples of MCS formalisms include Tree-Adjoining Grammars, Combinatory Categorial Grammars and Linear Indexed Grammars.
Range Concatenation Grammar as a pivot formalism
The main idea underlying TuLiPA is to use RCG as a pivot formalism, for RCG has appealing formal properties (e.g., a generative capacity lying beyond Linear Context-Free Rewriting Systems and a polynomial parsing complexity) and there exist efficient algorithms for RCG parsing (Boullier, 2000) and for grammar transformation into RCG (Boullier, 1998; Boullier, 1999). Parsing with TuLiPA is thus a 3-step process:
1. The input tree-based grammar is converted into an RCG (using the algorithm of Kallmeyer and Parmentier (2008) when dealing with TT-MCTAG).
2. The resulting RCG is used for parsing the input string using an extension of the parsing algorithm of Boullier (2000).
3. The RCG derivation structure is interpreted to extract the derivation and derived trees with respect to the input grammar.
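To make the three steps concrete, the following Python-style sketch shows the overall shape of such a pipeline. The function names are hypothetical placeholders, not TuLiPA's actual (Java) API:

# Schematic sketch of the 3-step TuLiPA pipeline; names are hypothetical
# placeholders, not the system's real (Java) API.

def convert_to_rcg(tree_grammar):
    # Step 1: encode the tree-based grammar (TAG / TT-MCTAG) as an
    # equivalent Range Concatenation Grammar.
    raise NotImplementedError

def rcg_parse(rcg, tokens):
    # Step 2: parse the token sequence with the RCG parser, returning
    # the RCG derivation forest.
    raise NotImplementedError

def interpret(forest, tree_grammar):
    # Step 3: decode the RCG derivations back into derivation and
    # derived trees of the original grammar (plus semantics, etc.).
    raise NotImplementedError

def parse(tree_grammar, sentence):
    rcg = convert_to_rcg(tree_grammar)
    forest = rcg_parse(rcg, sentence.split())
    return interpret(forest, tree_grammar)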
The use of RCG as a pivot formalism, and thus of an RCG parser as a core component of the system, leads to a modular architecture. In turn, this makes TuLiPA more easily extensible, either in terms of functionalities or in terms of formalisms.
Adding functionalities to the parsing environment
As an illustration of TuLiPA's extensibility, one may consider two extensions applied to the system recently.
First, a semantic calculus using the syntax/semantics interface for TAG proposed by Gardent and Kallmeyer (2003) has been added. This interface associates each tree with flat semantic formulas. The arguments of these formulas are unification variables, which are co-indexed with features labelling the nodes of the syntactic tree. During classical TAG derivation, trees are combined, triggering unifications of the feature structures labelling nodes. As a result of these unifications, the arguments of the semantic formulas are unified (see Fig. 1).
Figure 1: Semantic calculus in Feature-Based TAG (for "John loves Mary", the resulting semantics is love(j,m), name(j,john), name(m,mary)).
In our system, the semantic support has been integrated by (i) extending the internal tree objects to include semantic formulas (the RCG-conversion is kept unchanged), and (ii) extending the construction of the derived tree (step 3) so that during the interpretation of the RCG derivation in terms of tree combinations, the semantic formulas are carried and updated with respect to the feature unifications performed.
Secondly, let us consider lexical disambiguation. Because of the high redundancy within lexicalized formalisms such as lexicalized TAG, it is common to consider tree schemata having a frontier node marked for anchoring (i.e., lexicalization). At parsing time, the tree schemata are anchored according to the input string. This anchoring selects a subgrammar supposed to cover the input string. Unfortunately, this subgrammar may contain many trees that either do not lead to a parse or that we know a priori cannot be combined within the same derivation (so we should not predict a derivation from one of these trees to another during parsing). As a result, the parser could perform poorly because of the many derivation paths that have to be explored. Bonfante et al. (2004) proposed to polarize the structures of the grammar and to apply an automaton-based filtering of the compatible structures. The idea is the following. One computes polarities representing the needs/resources brought by a given tree (or tree tuple for TT-MCTAG). A substitution or foot node with category NP reflects a need for an NP (written NP-). In the same way, an NP root node reflects a resource of type NP (written NP+). One then builds an automaton whose edges correspond to trees, and whose states correspond to the polarities brought by trees along the path. The automaton is then traversed to extract all paths leading to a final state with a neutral polarity for each category and +1 for the axiom (see Fig. 2; for "John eats a cake", state 7 is the only valid state and {proper., trans., det., noun.} the only compatible set of trees).
Figure 2: Polarity-based lexical disambiguation.
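The following is a minimal sketch of this filtering idea in Python. It enumerates tree combinations directly instead of building and traversing an automaton, so it illustrates the polarity bookkeeping rather than the efficient implementation; the tree names and polarity dictionaries are invented for the example:

from itertools import product
from collections import Counter

# Minimal sketch of polarity-based filtering. A real implementation
# builds and traverses an automaton over the sentence positions; for
# clarity, this version simply enumerates all tree combinations.
# Each candidate is a (tree_name, polarity) pair, where polarity maps
# a category to +1 per root node (resource) and -1 per substitution
# or foot node (need).

def compatible_selections(candidates_per_word, axiom="S"):
    valid = []
    for combo in product(*candidates_per_word):
        total = Counter()
        for _, polarity in combo:
            total.update(polarity)
        # Keep the combination iff the axiom ends at +1 and every
        # other category is neutral.
        if total[axiom] == 1 and all(
                v == 0 for cat, v in total.items() if cat != axiom):
            valid.append([name for name, _ in combo])
    return valid

# "John eats a cake": only {proper., trans., det., noun.} survives.
words = [
    [("proper.", {"NP": 1})],
    [("intrans.", {"S": 1, "NP": -1}), ("trans.", {"S": 1, "NP": -2})],
    [("det.", {"NP": 1, "N": -1})],
    [("noun.", {"N": 1})],
]
print(compatible_selections(words))  # [['proper.', 'trans.', 'det.', 'noun.']]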
In our context, this polarity filtering has been added before step 1, leaving the core RCG conversion and parsing steps untouched. The idea is to compute the sets of compatible trees (or tree tuples for TT-MCTAG) and to convert these sets separately. Indeed, the RCG has to encode only valid adjunctions/substitutions. Thanks to this automaton-based "clustering" of the compatible trees (or tree tuples), we avoid predicting incompatible derivations. Note that the time saved by using a polarity-based filter is not negligible, especially when parsing long sentences. 2
Adding formalisms to the parsing environment
Of course, the two extensions introduced in the previous section could have been added to other modular architectures as well. The main gain brought by RCG is the possibility of parsing not only tree-based grammars, but also other formalisms, provided they can be encoded into RCG. In our system, only TAG and TT-MCTAG have been considered so far. Nonetheless, Boullier (1998) and Søgaard (2007) have defined transformations into RCG for other mildly context-sensitive formalisms. 3 To sum up, the idea would be to keep the core RCG parser and to extend TuLiPA with a specific conversion module for each targeted formalism. On top of these conversion modules, one should also provide interpretation modules that decode the RCG derivation forest in terms of the input formalism (see Fig. 3). An important point remains to be discussed. It concerns the role of lexicalization with respect to the formalism used. Indeed, the tree-based grammar formalisms currently supported (TAG and TT-MCTAG) both share the same lexicalization process (i.e., tree anchoring). Thus the lexicon format is common to these formalisms. As we will see below, it corresponds to a 2-layer lexicon made of inflected forms and lemmas respectively, the latter selecting specific grammatical structures. When parsing other formalisms, it is still unclear whether one can use the same lexicon format, and if not, what kind of general lexicon management module should be added to the parser (in particular to deal with morphology).
Towards a complete grammar engineering environment
So far, we have seen how to use a generic parsing architecture relying on RCG to parse different formalisms. In this section, we adopt a broader view and enumerate some requirements for a linguistic resource development environment. We also see to what extent these requirements are fulfilled (or partially fulfilled) within the TuLiPA system.
Grammar engineering with TuLiPA
As advocated by Erbach (1992), grammar engineering needs "tools for testing the grammar with respect to consistency, coverage, overgeneration and accuracy". These characteristics may be taken into account by different interacting software. Thus, consistency can be checked by a semi-automatic grammar production device, such as the XMG system of Duchier et al. (2004). Overgeneration is mainly checked by a generator (or by a parser with adequate test suites), and coverage and accuracy by a parser. In our case, the TuLiPA system provides an entry point for using a grammar production system (and a lexicon conversion tool introduced below), while including a parser. Note that TuLiPA does not include any generator; nonetheless, it uses the same lexicon format as the GenI surface realizer for TAG 4 . TuLiPA's input grammar is designed using XMG, which is a metagrammar compiler for tree-based formalisms. In other terms, the linguist defines a factorized description of the grammar (the so-called metagrammar) in the XMG language. Briefly, an XMG metagrammar consists of (i) elementary tree fragments represented as tree description logic formulas, and (ii) conjunctive and disjunctive combinations of these tree fragments to describe actual TAG tree schemata. 5 This metagrammar is then compiled by the XMG system to produce a tree grammar in an XML format. Note that the resulting grammar contains tree schemata (i.e., unlexicalized trees). To lexicalize these, the linguist defines a lexicon mapping words to corresponding sets of trees. Following XTAG (2001), this lexicon is a 2-layer lexicon made of morphological and lemma specifications. The motivation of this 2-layer format is (i) to express linguistic generalizations at the lexicon level, and (ii) to allow the parser to only select a subgrammar according to a given sentence, thus reducing parsing complexity. TuLiPA comes with a lexicon conversion tool (namely lexConverter) that allows one to write a lexicon in a user-friendly text format and to convert it into XML. An example of an entry of such a lexicon is given in Fig. 4.
The morphological specification consists of a word, the corresponding lemma and morphological features. The main pieces of information contained in the lemma specification are the *ENTRY field, which refers to the lemma, the *CAT field referring to the syntactic category of the anchor node, the *SEM field containing some semantic information allowing for semantic instantiation, the *FAM field, which contains the name of the tree family to be anchored, the *FILTERS field, which consists of a feature structure constraining by unification the trees of a given family that can be anchored by the given lemma (used for instance for non-passivable verbs), the *EQUATIONS field allowing for the definition of equations targeting named nodes of the trees, and the *COANCHORS field, which allows for the specification of co-anchors (such as by in the verb to come by). From these XML resources, TuLiPA parses a string, corresponding either to a sentence or a constituent (noun phrase, prepositional phrase, etc.), and computes several output pieces of information, namely (for TAG and TT-MCTAG): derivation/derived trees, semantic representations (computed from underspecified representations using the utool software 6 ), or dependency views of the derivation trees (using the DTool software 7 ).
Footnote 4: http://trac.loria.fr/~geni
Footnote 5: See (Crabbé, 2005) for a presentation on how to use the XMG formalism for describing a core TAG for French.
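To make the lemma fields described above concrete, here is a hypothetical entry rendered as a Python dictionary. This is not the actual lexConverter text syntax, and the values are invented for the German verb vergessen (cf. Fig. 4):

# Hypothetical lemma entry mirroring the lexicon fields described above.
# This is NOT the actual lexConverter text syntax, only an illustration.
vergisst_entry = {
    "morph": {"word": "vergisst", "lemma": "vergessen",
              "features": {"pers": 3, "num": "sg", "tense": "pres"}},
    "lemma": {
        "ENTRY": "vergessen",          # the lemma itself
        "CAT": "v",                    # category of the anchor node
        "SEM": "forget",               # semantic predicate for instantiation
        "FAM": "transitive",           # tree family to anchor
        "FILTERS": {"passive": False}, # unification constraints on the family
        "EQUATIONS": [],               # equations targeting named tree nodes
        "COANCHORS": [],               # co-anchors (e.g. particles)
    },
}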
Grammar debugging
The engineering process introduced in the preceding section belongs to a development cycle, where one first designs a grammar and corresponding lexicons using XMG, then checks these with the parser, fixes them, parses again, and so on.
To facilitate grammar debugging, TuLiPA includes both a verbose and a robust mode allowing respectively to (i) produce a log of the RCGconversion, RCG-parsing and RCG-derivation interpretation, and (ii) display mismatching features leading to incomplete derivations. More precisely, in robust mode, the parser displays derivations step by step, highlighting feature unification failures.
TuLiPA's options can be activated via an intuitive Graphical User Interface (see Fig. 5).
Towards a functional common interface
Unfortunately, as mentioned above, the linguist has to move back-and-forth from the grammar/lexicon descriptions to the parser, i.e., each time the parser reports grammar errors, the linguist fixes these and then recomputes the XML files and then parses again. To avoid this tedious task of resources re-compilation, we started developing an Eclipse 8 plug-in for the TuLiPA system. Thus, the linguist will be able to manage all these resources, and to call the parser, the metagrammar compiler, and the lexConverter from a common interface (see Fig. 6). The motivation for this plug-in comes from the observation that designing electronic grammars is a task comparable to designing source code. A powerful grammar engineering environment should thus come with development facilities such as precise debugging information, syntax highlighting, etc. Using the Eclipse open-source development platform allows for reusing several components inherited from the software development community, such as plug-ins for version control, editors coupled with explorers, etc.
Finally, one point worth considering in the context of grammar development concerns data encoding. To our knowledge, only a few environments provide support for UTF-8 encoding, thus guaranteeing the coverage of a wide set of character sets and languages. In TuLiPA, we added UTF-8 support (in the lexConverter), thus allowing the design of a TAG for Korean (work in progress).
Usability of the TuLiPA system
As mentioned above, the TuLiPA system is made of several interacting components that one currently has to install separately. Nonetheless, much attention has been paid to making this installation process as easy as possible and compatible with all major platforms. 9 XMG and lexConverter can be installed by compiling their sources (using a make command). TuLiPA is developed in Java and released as an executable jar; no compilation is needed for it, and the only requirement is the Gecode/GecodeJ library 10 (available as a binary package for many platforms). Finally, the TuLiPA eclipse plug-in can be installed easily from eclipse itself. All these tools are released under Free software licenses (either GNU GPL or Eclipse Public License).
This environment is being used (i) at the University of Tübingen, in the context of the development of a TT-MCTAG for German describing both syntax and semantics, and (ii) at LORIA Nancy, in the development of an XTAG-based metagrammar for English. The German grammar, called GerTT (for German Tree Tuples), is released under an LGPL license for Linguistic Resources 11 and is presented in Kallmeyer et al. (2008). The test-suite currently used to check the grammar is hand-crafted. A more systematic evaluation of the grammar is in preparation, using the Test Suite for Natural Language Processing (Lehmann et al., 1996).
Comparison with existing approaches
4.1 Engineering environments for tree-based grammar formalisms
To our knowledge, there is currently no available parsing environment for multi-component TAG. Existing grammar engineering environments for TAG include the DyALog system 12 described in Villemonte de la Clergerie (2005). DyALog is a compiler for a logic programming language using tabulation and dynamic programming techniques. This compiler has been used to implement efficient parsing algorithms for several formalisms, including TAG and RCG. Unfortunately, it does not include any built-in GUI and requires a good knowledge of the GNU build tools to compile parsers. This makes it relatively difficult to use. DyALog's main quality lies in its efficiency in terms of parsing time and its capacity to handle very large resources. Unlike TuLiPA, it does not compute semantic representations.
The closest approach to TuLiPA corresponds to the SemTAG system 13 , which extends TAG parsers compiled with DyALog with a semantic calculus module (Gardent and Parmentier, 2007). Unlike TuLiPA, this system only supports TAG, and does not provide any graphical output allowing to easily check the result of parsing.
Note that, for grammar designers mainly interested in TAG, SemTAG and TuLiPA can be seen as complementary tools. Indeed, one may use TuLiPA to develop the grammar and check specific syntactic structures thanks to its intuitive parsing environment. Once the grammar is stable, one may use SemTAG in batch processing to parse corpora and build semantic representations using large grammars. The combination of these two systems is made easier by the fact that both use the same input formats (a metagrammar in the XMG language and a text-based lexicon). This approach is the one being adopted for the development of a French TAG equipped with semantics.
For Interaction Grammar (Perrier, 2000), there exists an engineering environment gathering the XMG metagrammar compiler and an eLEctrOstatic PARser (LEOPAR). 14 This environment is being used to develop an Interaction Grammar for French. TuLiPA's lexical disambiguation module reuses techniques introduced by LEOPAR. Unlike TuLiPA, LEOPAR does not currently support semantic information.
Engineering environments for other grammar formalisms
For other formalisms, there exist state-of-the-art grammar engineering environments that have been used for many years to design large deep grammars for several languages. For Lexical Functional Grammar, one may cite the Xerox Linguistic Environment (XLE). 15 For Head-driven Phrase Structure Grammar, the main available systems are the Linguistic Knowledge Base (LKB) 16 and the TRALE system. 17 For Combinatory Categorial Grammar, one may cite the OpenCCG library 18 and the C&C parser. 19 These environments have been used to develop broad-coverage resources equipped with semantics and include both a generator and a parser. Unlike TuLiPA, they represent advanced projects, that have been used for dialog and machine translation applications. They are mainly tailored for a specific formalism. 20
Future work
In this section, we give some prospective views concerning engineering environments in general, and TuLiPA in particular. We first distinguish between 2 main usages of grammar engineering environments, namely a pedagogical usage and an application-oriented usage, and finally give some comments about multi-formalism.
Pedagogical usage
Developing grammars in a pedagogical context needs facilities allowing for inspection of the structures of the grammar, step-by-step parsing (or generation), along with an intuitive interface. The idea is to abstract away from technical aspects related to implementation (intermediate data structures, optimizations, etc.).
The question of whether to provide graphical or text-based editors can be discussed. As advocated by Baldridge et al. (2007), a low-level text-based specification can offer more flexibility and bring less frustration to the grammar designer, especially when such a specification can be graphically interpreted. This is the approach chosen by XMG, where the grammar is defined via an (advanced or not) editor such as gedit or emacs. Within TuLiPA, we chose to go further by using the Eclipse platform. Currently, it allows for displaying a summary of the content of a metagrammar or lexicon on a side panel, while editing these on a middle panel. These two panels are linked via a jump functionality. The next steps concern (i) plugging in a graphical viewer to display the (meta)grammar structures independently from a given parse, and (ii) extending the eclipse plug-in so that one can easily and consistently modify entries of the metagrammar or lexicon (especially when these are split over several files).
Application-oriented usage
When dealing with applications, one may demand more from the grammar engineering environment, especially in terms of efficiency and robustness (support for larger resources, partial parsing, etc.).
Efficiency requires optimizations in the parsing engine, making it possible to support grammars containing several thousand structures. One interesting question concerns the compilation of a grammar either off-line or on-line. In DyALog's approach, the grammar is compiled off-line into a logical automaton encoding all possible derivations. This off-line compilation can take a few minutes for a TAG with 6000 trees, but the resulting parser can parse sentences within a second.
In TuLiPA's approach, the grammar is compiled into an RCG on-line. While this gives satisfactory results on reduced resources 21, it may lead to trouble when scaling up. This is especially true for TAG (the TT-MCTAG formalism is by definition a factorized formalism compared with TAG). In the future, it would be useful to look for a way to precompile a TAG into an RCG off-line, thus saving the conversion time.
Another important feature of grammar engineering environments is their debugging functionalities. Among these, one may cite unit and integration testing. It would be useful to extend the TuLiPA system to provide a module for generating test-suites for a given grammar. The idea would be to record the coverage and analyses of a grammar at a given time. Once the grammar is further developed, these snapshots would allow for regression testing.
Footnote 21: For a TT-MCTAG counting about 300 sets of trees and a hand-crafted lexicon of about 300 words, a 10-word sentence is parsed (and a semantic representation computed) within seconds.
About multi-formalism
We already mentioned that TuLiPA opens a way towards multi-formalism by relying on an RCG core. It is worth noticing that the XMG system was also designed to be further extensible. Indeed, a metagrammar in XMG corresponds to the combination of elementary structures. One may think of designing a library of such structures; these would depend on the target grammar formalism. The combinations may represent general linguistic concepts and would be shared by different grammar implementations, following ideas presented by Bender et al. (2005).
Conclusion
In this paper, we have presented a multi-formalism parsing architecture using RCG as a pivot formalism to parse mildly context-sensitive formalisms (currently TAG and TT-MCTAG). This system has been designed to facilitate grammar development by providing user-friendly interfaces, along with several functionalities (e.g., dependency extraction, derivation/derived tree display and semantic calculus). It is currently used for developing a core grammar for German.
At the moment, we are working on the extension of this architecture to include a fully functional Eclipse plug-in. Other current tasks concern optimizations to support large scale parsing and the extension of the syntactic and semantic coverage of the German grammar under development.
In the near future, we plan to evaluate the parser and the German grammar under development (parsing time, correctness of syntactic and semantic outputs) with respect to a standard test-suite such as the TSNLP (Lehmann et al., 1996).
Figure 3: Towards a multi-formalism parsing environment.
Figure 4: Morphological and lemma specification of vergisst.
Figure 5: TuLiPA's Graphical User Interface.
Figure 6: TuLiPA's eclipse plug-in.
Footnote 6: See http://www.coli.uni-saarland.de/projects/chorus/utool/, with courtesy of Alexander Koller.
Footnote 7: With courtesy of Marco Kuhlmann.
Footnote 2: An evaluation of the gain brought by this technique when using Interaction Grammar is given by Bonfante et al. (2004).
Footnote 3: These include Multi-Component Tree-Adjoining Grammar, Linear Indexed Grammar, Head Grammar, Coupled Context Free Grammar, Right Linear Unification Grammar and Synchronous Unification Grammar.
Footnote 8: See http://www.eclipse.org
Footnote 9: See http://sourcesup.cru.fr/tulipa.
Footnote 10: See http://www.gecode.org/gecodej.
Footnote 11: See http://infolingu.univ-mlv.fr/DonneesLinguistiques/Lexiques-Grammaires/lgpllr.html
Footnote 12: See http://dyalog.gforge.inria.fr
Footnote 13: See http://trac.loria.fr/~semconst
Footnote 14: See http://www.loria.fr/equipes/calligramme/leopar/
Footnote 15: See http://www2.parc.com/isl/groups/nltt/xle/
Footnote 16: See http://wiki.delph-in.net/moin
Footnote 17: See http://milca.sfs.uni-tuebingen.de/A4/Course/trale/
Footnote 18: See http://openccg.sourceforge.net/
Footnote 19: See http://svn.ask.it.usyd.edu.au/trac/candc/wiki
Footnote 20: Nonetheless, Beavers (2002) encoded a CCG in the LKB's Type Description Language.
Acknowledgments
This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) and the Deutscher Akademischer Austausch Dienst (DAAD, grant A/06/71039). We are grateful to three anonymous reviewers for valuable comments on this work.
Baldridge, Jason, Sudipta Chatterjee, Alexis Palmer, and Ben Wing. 2007. DotCCG and VisCCG: Wiki and programming paradigms for improved grammar engineering with OpenCCG. In King, Tracy Holloway and Emily M. Bender, editors, Proceedings of the GEAF07 workshop, pages 5-25, Stanford, CA. CSLI.
Beavers, John. 2002. Documentation: A CCG Implementation for the LKB. LinGO Working Paper No. 2002-08, CSLI, Stanford University, Stanford, CA.
Bender, Emily, Dan Flickinger, Frederik Fouvry, and Melanie Siegel. 2005. Shared representation in multilingual grammar engineering. Research on Language & Computation, 3(2):131-138.
Bonfante, Guillaume, Bruno Guillaume, and Guy Perrier. 2004. Polarization and abstraction of grammatical formalisms as methods for lexical disambiguation. In Proceedings of the International Conference on Computational Linguistics (CoLing 2004), pages 303-309, Geneva, Switzerland.
Boullier, Pierre. 1998. Proposal for a natural language processing syntactic backbone. Rapport de Recherche 3342, INRIA.
Boullier, Pierre. 1999. On TAG and Multicomponent TAG Parsing. Rapport de Recherche 3668, INRIA.
Boullier, Pierre. 2000. Range concatenation grammars. In Proceedings of the International Workshop on Parsing Technologies (IWPT 2000), pages 53-64, Trento, Italy.
Crabbé, Benoit. 2005. Grammatical development with XMG. In Proceedings of the conference on Logical Aspects of Computational Linguistics 2005 (LACL 05), pages 84-100, Bordeaux, France.
Duchier, Denys, Joseph Le Roux, and Yannick Parmentier. 2004. The Metagrammar Compiler: An NLP Application with a Multi-paradigm Architecture. In Proceedings of the 2nd International Mozart/Oz Conference (MOZ'2004), pages 175-187, Charleroi, Belgium.
Erbach, Gregor. 1992. Tools for grammar engineering. In 3rd Conference on Applied Natural Language Processing, pages 243-244, Trento, Italy.
Gardent, Claire and Laura Kallmeyer. 2003. Semantic Construction in FTAG. In Proceedings of the Conference of the European chapter of the Association for Computational Linguistics (EACL 2003), pages 123-130, Budapest, Hungary.
Gardent, Claire and Yannick Parmentier. 2007. SemTAG: a platform for specifying tree adjoining grammars and performing TAG-based semantic construction. In Proceedings of the International Conference of the Association for Computational Linguistics (ACL 2007), Companion Volume: Proceedings of the Demo and Poster Sessions, pages 13-16, Prague, Czech Republic.
Joshi, Aravind K. 1987. An introduction to Tree Adjoining Grammars. In Manaster-Ramer, A., editor, Mathematics of Language, pages 87-114. John Benjamins, Amsterdam.
Kallmeyer, Laura and Yannick Parmentier. 2008. On the relation between Multicomponent Tree Adjoining Grammars with Tree Tuples (TT-MCTAG) and Range Concatenation Grammars (RCG). In Proceedings of the 2nd International Conference on Language and Automata Theories and Applications (LATA 2008), pages 277-288, Tarragona, Spain.
Kallmeyer, Laura, Timm Lichte, Wolfgang Maier, Yannick Parmentier, and Johannes Dellert. 2008. Developing an MCTAG for German with an RCG-based Parser. In Proceedings of the Language, Resource and Evaluation Conference (LREC 2008), Marrakech, Morocco.
Lehmann, Sabine, Stephan Oepen, Sylvie Regnier-Prost, Klaus Netter, Veronika Lux, Judith Klein, Kirsten Falkedal, Frederik Fouvry, Dominique Estival, Eva Dauphin, Hervé Compagnion, Judith Baur, Lorna Balkan, and Doug Arnold. 1996. TSNLP - Test Suites for Natural Language Processing. In Proceedings of the International Conference on Computational Linguistics (Coling 1996), volume 2, pages 711-716, Copenhagen, Denmark.
Lichte, Timm. 2007. An MCTAG with tuples for coherent constructions in German. In Proceedings of the 12th Conference on Formal Grammar, Dublin, Ireland.
Perrier, Guy. 2000. Interaction grammars. In Proceedings of the International Conference on Computational Linguistics (CoLing 2000), pages 600-606, Saarbruecken, Germany.
Søgaard, Anders. 2007. Complexity, expressivity and logic of linguistic theories. Ph.D. thesis, University of Copenhagen, Copenhagen, Denmark.
Villemonte de la Clergerie, Éric. 2005. DyALog: a tabular logic programming based environment for NLP. In Proceedings of the workshop on Constraint Satisfaction for Language Processing (CSLP 2005), pages 18-33, Barcelona, Spain.
XTAG-Research-Group. 2001. A lexicalized tree adjoining grammar for English. Technical Report IRCS-01-03, IRCS, University of Pennsylvania. Available at http://www.cis.upenn.edu/~xtag/gramrelease.html.
| [] |
[
"Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories",
"Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories"
] | [
"Gizem Sogancioglu g.sogancioglu@uu.nl \nUtrecht University Utrecht\nthe Netherlands\n",
"Fabian Mijsters Amar Van Uden \nUtrecht University Utrecht\nthe Netherlands\n",
"Jelle Peperzak j.peperzak@students.uu.nl \nUtrecht University Utrecht\nthe Netherlands\n"
] | [
"Utrecht University Utrecht\nthe Netherlands",
"Utrecht University Utrecht\nthe Netherlands",
"Utrecht University Utrecht\nthe Netherlands"
] | [] | Clinical word embeddings are extensively used in various Bio-NLP problems as a state-of-the-art feature vector representation. Although they are quite successful at the semantic representation of words, due to the dataset -which potentially carries statistical and societal bias -on which they are trained, they might exhibit gender stereotypes. This study analyses gender bias of clinical embeddings on three medical categories: mental disorders, sexually transmitted diseases, and personality traits. To this extent, we analyze two different pre-trained embeddings namely (contextualized) clinical-BERT and (non-contextualized) BioWordVec. We show that both embeddings are biased towards sensitive gender groups but BioWordVec exhibits a higher bias than clinical-BERT for all three categories. Moreover, our analyses show that clinical embeddings carry a high degree of bias for some medical terms and diseases which is conflicting with medical literature. Having such an ill-founded relationship might cause harm in downstream applications that use clinical embeddings. | 10.48550/arxiv.2208.01341 | [
"https://export.arxiv.org/pdf/2208.01341v2.pdf"
] | 251,402,282 | 2208.01341 | eb3eb497192427e4e00f65b943ca888f2f63eeb0 |
Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories
Gizem Sogancioglu g.sogancioglu@uu.nl
Utrecht University Utrecht
the Netherlands
Fabian Mijsters
Amar Van Uden
Utrecht University Utrecht
the Netherlands
Jelle Peperzak j.peperzak@students.uu.nl
Utrecht University Utrecht
the Netherlands
Gender bias in (non)-contextual clinical word embeddings for stereotypical medical categories
Clinical word embeddings are extensively used in various Bio-NLP problems as a state-of-the-art feature vector representation. Although they are quite successful at the semantic representation of words, due to the dataset -which potentially carries statistical and societal bias -on which they are trained, they might exhibit gender stereotypes. This study analyses gender bias of clinical embeddings on three medical categories: mental disorders, sexually transmitted diseases, and personality traits. To this extent, we analyze two different pre-trained embeddings namely (contextualized) clinical-BERT and (non-contextualized) BioWordVec. We show that both embeddings are biased towards sensitive gender groups but BioWordVec exhibits a higher bias than clinical-BERT for all three categories. Moreover, our analyses show that clinical embeddings carry a high degree of bias for some medical terms and diseases which is conflicting with medical literature. Having such an ill-founded relationship might cause harm in downstream applications that use clinical embeddings.
groups have become more prevalent. An analysis of several American studies regarding biases in healthcare showed that doctors can hold negative stereotypes of racial minorities while being unaware of the fact that they do [13]. Besides racial bias, Arslanian et al. [8] also identified gender bias in healthcare, finding that under-or overrepresentation of certain groups in case studies has led to the reliance of underrepresented groups on the stereotypical symptoms of the overrepresented group. They showed that even though men show slightly different symptoms to women when it comes to heart disease, professionals tend to rely on the stereotypically male symptoms to identify heart disease in both men and women. These biases do not only have implications for the quality of treatment minority groups get prescribed by their doctor, but they also translate to machine learning systems if the training data is pulled from real patient cases. Due to this unavoidable nature of bias in training data, proper debiasing techniques are vital in the modeling of a machine learning system that is equally effective for all groups, rather than favoring the privileged group.
Besides over-and underrepresentation in training data, harmful bias can also be a result of the stereotypical assignment of gender to certain personality traits. Glick and Fiske [17] explain that, even when sexism is seemingly beneficial (such as women being linked to 'attractive'), the effects of stereotypical assignment of properties based on gender ultimately cause more harm than good to women in professional settings. Keeping in mind the already existing underrepresentation of women in case studies in the healthcare domain and the biasing effect that such an underrepresentation has on training data for machine learning systems, analyzing potential biasing effects of stereotypical personality-gender combinations could prove important in further reduction of gender bias in training data.
To create a fair machine learning system, unfavorable biases in the training data must be identified and corrected. Two ways previous research has approached this task are either with the use of non-contextualized word embeddings [11] or with the use of contextualized word embeddings [32], which Zhang et al. used to analyze the healthcare dataset MIMIC-III [20]. However, no comparative analysis has yet been performed to determine differences in effectiveness between these two types of word embedding methods when applied in the healthcare domain. Additionally, no previous research has looked into the potential biasing effects of personality traits in a healthcare dataset. Due to the focus of the current study on the healthcare domain and the similar focus of Zhang et al. [32], the decision was made to use the MIMIC-III dataset for this study as well. As such, our contributions in this study are as follows:
• We show that both BioWordVec and clinical-BERT embeddings carry gender biases for some diseases and medical categories. However, BioWordVec shows a higher gender bias for all three categories: mental disorders, sexually transmitted diseases, and personality traits. • We define the concept of accurate and conflicting biases for medicine and show that while embeddings carry gender biases that are in line with the medical literature ('anxiety' and 'breast cancer' are closer to female), they also have biases that conflict with the literature ('depression' is closer to male although women are more likely to be diagnosed with major depression in the literature). • Both embeddings are trained on the same clinical dataset.
We provide a descriptive analysis of the MIMIC-III Medical Dataset to better understand potential reasons for bias in word embeddings.
The current study identifies mental illness and sexually transmitted diseases as historically biased diagnosis categories in healthcare based on previous studies. It also presents a short overview of gender statistics in medicine, a descriptive analysis of the MIMIC-III Clinical Database, and compares the amount of bias in the identified categories using contextualized and non-contextualized word embeddings trained on the MIMIC-III medical notes.
BACKGROUND
2.1 Gender statistics in medicine
Not all diseases are equally prevalent in both males and females. Potential biases in clinical data sets are thus not necessarily related to statistical bias - biases as a result of measurement or sampling inconsistencies - but could accurately represent occurrence rates across genders. For example, if a certain disease is more common in females, it can be expected that a larger share of the diagnosed patients in the clinical data set will be female, and not as a result of non-representative sampling. However, reliable and accurate statistics on gender differences in disease prevalence are difficult to obtain: access to health information or care is often limited for females due to, amongst others, restrictions on mobility, decision-making power, and lower literacy rates. On the other hand, cultural attitudes of manhood and masculinity can also negatively affect the behavior of males, resulting in more violence, risky behavior, and not seeking health care [24]. Consequently, measured distributions of diseases across gender might reflect societal bias - bias as a result of cultural and societal norms and behaviors - instead of true prevalence rates. A health care category that might be heavily affected by societal and cultural norms is that of mental health. Although mental disorders (e.g. schizophrenia) are reported to be diagnosed equally often among males and females, the number of diagnoses of mental illnesses (e.g. depression) among both sexes varies greatly. The WHO reports that mental illnesses are underdiagnosed by doctors and that, if diagnosed, women are often under- or over-treated [25]. Whereas females are predominantly diagnosed with depression, anxiety, and/or somatic complaints, males are more often diagnosed with substance abuse (e.g. alcohol) or antisocial disorders [16,18,25]. It is thought that these differences in prevalence rates of mental illnesses across gender are due to different coping mechanisms in men and women, resulting in contrasting symptoms and behaviors in both sexes even though underlying causes might be similar [16,18]. These statistics are thought to further emphasize gender stereotypes inhibiting help-seeking behavior [25].
Another health care category of interest is that of sexually transmitted diseases (STD). Researchers in India and Peru found that females are more likely to carry or have carried a sexually transmitted infection even though males tend to have more sex partners on average. Moreover, a higher percentage of females was reported to not show any symptoms of the infection (asymptomatic) [19,26]. Indeed, it is generally agreed that women are more susceptible to STDs, are more affected when infected, and seek help less often due to biological (e.g. higher chance of infection), economic (e.g. financial dependence on male partner), or social (e.g. believed to be in a monogamous relationship) factors [30]. Females do appear to respond better to preventive treatment, such as vaccinations, and existing methods for females for prevention during sex provide more protection than existing methods for males [30].
2.1.1 Accurate and conflicting biases. Accurate biases are word embeddings of diseases that show a higher similarity towards a certain gender and the rate of the disease in that gender is higher compared to other genders based on medical literature findings. For example, the NIH [2] predicts around 280.000 cases of female breast cancer and ACS [4] predicts around 2.000 cases of male breast cancer in the US in 2021. A word embedding for breast cancer should, according to these statistics, be more biased towards females, and thus a word embedding showing this would be considered an accurate bias. A conflicting bias would show a higher similarity for a disease towards a certain gender without medical statistics showing that this disease is more prevalent in the specific gender. Accurate biases aid downstream models in classifying symptoms as a certain disease. Conflicting biases hinder the classification of symptoms of diseases or other similar downstream tasks that use these embeddings. Deciding whether a certain bias is accurate or conflicting can be done based on statistics or by consulting with medical experts.
Based on the reported prevalence above, it can be expected that the MIMIC-III data set will show a distribution skewed towards females regarding mental illnesses and sexually transmitted diseases. Word embeddings related to sexually transmitted diseases that are closer to females can be considered to carry accurate bias: the skewed relation is an accurate representation of the actual prevalence rates. In contrast, word embeddings related to mental illnesses that are closer to females can be thought of as conflicting bias. Even though females are more often diagnosed with mental illnesses, this diagnosis appears to be a result of societal and statistical bias, not that of real prevalence rates. In addition, it is expected that male patients in the data set are more likely to be linked to substance abuse. It is hard to identify word embeddings related to substance abuse that are closer to males as either accurate or conflicting bias. As described earlier, men are more often diagnosed with substance abuse, even though the underlying cause might be depression. Hence, biased word embeddings accurately represent prevalence rates to some extent only.
Word Embeddings
A word embedding is a vector that represents information about a word, such as its semantic and syntactic properties as found in a text or corpus [31]. In a machine learning context, these embeddings have proven useful in the prediction of certain words based on the vector values of the words fed to the machine learning system. The vectors representing the words are created using an embedding method, which defines the complexity of the vectors as well as the strategy used to compare words to each other. Embedding methods can be distinguished between two types of methods: noncontextualized embedding methods and contextualized embedding methods.
2.2.1 Non-contextualized word embeddings. Non-contextualized embeddings, such as those created with the Skip-gram and CBoW models [22], specifically consider the relatedness of two words by the number of times they appear near each other in a text, as well as their proximity when they do. In the GloVe model [27], for example, the chance of two words occurring near each other is determined by the percentage of times that one of those words occurs within a certain proximity of the other word. This probability of two words occurring together is represented by a weight by which the relatedness of those words is defined.
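As a toy illustration of the underlying co-occurrence statistics (not the actual GloVe training procedure), one can count how often two words appear within a fixed window of each other:

from collections import Counter

# Toy co-occurrence counting within a fixed window; GloVe itself fits
# vectors to such statistics rather than using the raw counts directly.
def cooccurrence_counts(tokens, window=2):
    counts = Counter()
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            counts[frozenset((w, tokens[j]))] += 1
    return counts

tokens = "the patient reports anxiety and the patient denies depression".split()
print(cooccurrence_counts(tokens)[frozenset(("the", "patient"))])  # 2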
Contextualized word embeddings.
Contextualized word embeddings aim to use the surrounding words to encode the meaning or purpose of a word in that specific context into a word embedding [29]. The main advantage of using contextualized word embeddings over their non-contextualized counterparts is the ability to distinguish between the semantic meanings of two identical word forms (e.g. 'A dog's bark' vs 'A tree's bark'). The ELMo [28] word embeddings, for example, are created by a bidirectional LSTM; the bidirectionality encodes the context of a word from both the left and the right side. In contrast, BERT [14] embeddings are trained on fill-in-the-blank and next-sentence prediction tasks.
METHOD
As mentioned earlier, our primary focus in this study is analyzing bias for three medical domains: mental illnesses (MD), sexually transmitted diseases (STD), and personality traits (PD). First, a descriptive analysis based on three tables of the MIMIC-III Clinical Database is presented. This analysis provides a preliminary indication of potentially skewed distributions and biases in patient treatment. Second, we analyze the amount of bias using two different embedding approaches: BioWordVec [12] as a non-contextualized method and Clinical-BERT [6] as a contextualized method. Both models are pre-trained on all available clinical notes of the MIMIC-III dataset so that they are comparable with each other.
The bias in the word embeddings of each method is quantified using the Direct Bias metric [11]. Hence, in this section, we first describe the Direct Bias metric, then elaborate on the crafted medical dictionary, and finally explain the computational details of applying the bias metric to each embedding approach.
Direct Bias
Direct Bias, proposed by Bolukbasi et al. [11], is a measure of how close a certain set of words is to the gender vector. It was first proposed for standard non-contextualized word2vec embeddings, but was later also applied to contextualized embeddings such as ELMo [28] for measuring occupational gender bias [10]. We use this metric in our study to measure bias in both clinical embeddings.
$$\mathrm{DirectBias} = \frac{1}{|M|} \sum_{w \in M} \left| \cos(\vec{w}, g) \right| \qquad (1)$$
Direct bias, whose formula is given in Eq. 1, is computed by averaging the cosine similarity scores between the gender vector and the words belonging to the target category. Assume that we have a list of words M (e.g. M = [w1: 'bipolar disorder', w2: 'anxiety', w3: 'eating disorder', ...]) which all belong to the target domain of the mental illness category. The average of the absolute cosine similarity scores between each word (w1, w2, w3) and the gender vector g is taken as the bias score of the target category towards a specific gender. If there is no gender bias, the score should be equal to 0.
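A minimal sketch of Eq. 1 in Python; the embed lookup is assumed to be provided by whichever embedding model is under analysis:

import numpy as np

# Minimal sketch of the Direct Bias metric (Eq. 1). `embed` is any
# word -> vector lookup; it is an assumed interface, not a specific API.

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def direct_bias(terms, gender_vector, embed):
    # Average absolute cosine similarity between category terms and
    # the gender direction; 0 means no measured bias.
    sims = [abs(cosine(embed(t), gender_vector)) for t in terms]
    return sum(sims) / len(sims)

# Hypothetical usage:
# mental_disorders = ["bipolar disorder", "anxiety", "eating disorder"]
# bias = direct_bias(mental_disorders, g, embed)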
Medical Terms List.
We created dictionaries consisting of medical terms crafted from the web, covering three categories: mental disorders [7], sexually transmitted diseases [5], and personality traits [1]. The categories contain 221 mental disorders (e.g. Alzheimer's disease); 639 personality traits, of which 236 are positive (e.g. Accessible), 111 are neutral (e.g. Absentminded), and 292 are negative (e.g. Abrasive); and 15 sexually transmitted diseases, of which 8 are bacterial (e.g. Chlamydia), 1 is fungal (Candidiasis), 6 are viral (e.g. HIV) and 3 are parasitic diseases (e.g. Scabies).
3.1.2 BioWordVec. To compute the gender vector, we used a publicly available gender list 1 . This list contains gender-specific words like male, female, he, and she. The word embeddings of these gender words are fed into a principal component analysis (PCA). The PCA outputs a gender vector, i.e. a direction in the high-dimensional embedding space that represents gender. This vector is in turn used to calculate the direct bias of diseases. The BioWordVec model is a non-contextualized model, which means that it can only compute the word embeddings for single words. To generate word embeddings for diseases that consist of more than one word, for example "bipolar disease", the word embeddings for each part of the term are generated and averaged to obtain a word embedding that encodes the meaning of both words in a single vector. This vector is in turn compared to the aforementioned gender vector to determine the direct bias.
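The two steps described above might be sketched as follows, assuming kv is a loaded BioWordVec model exposing gensim's KeyedVectors-style word-to-vector lookup:

import numpy as np
from sklearn.decomposition import PCA

# Sketch of the gender-direction and multi-word-term steps described
# above, assuming `kv` behaves like gensim KeyedVectors (word -> vector).

def gender_direction(kv, gender_words):
    # First principal component of the gender-word vectors.
    X = np.stack([kv[w] for w in gender_words if w in kv])
    return PCA(n_components=1).fit(X).components_[0]

def term_vector(kv, term):
    # Multi-word terms (e.g. "bipolar disease") are represented by
    # averaging the vectors of their parts.
    parts = [kv[w] for w in term.lower().split() if w in kv]
    return np.mean(parts, axis=0)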
Clinical-BERT.
As a contextualized word embedding model, Clinical-BERT [6] requires context knowledge about a word to determine its vector. For this reason, we needed to construct sentences to obtain vector representations of both gender pairs and medical terms. For gender pairs, we constructed very simple sentences by swapping the gender pairs provided by Bolukbasi et al. [11]. For medical terms, we used the template sentences given in Table 1, which do not contain any gender pronoun but can be used as a generic and simple explanation of the terms. Then, each word, X, in the Medical Terms List was placed in the template sentence of its relevant category, and the corresponding vector of each medical term was computed by Clinical-BERT. After obtaining vector representations of all words, direct bias scores per category were computed as explained in the previous section (3.1.2).

Table 1: Templates used for extracting clinical-BERT embeddings per diagnosis category
Mental disorders | "X is a type of mental health disorder and in the list of ICD-9-CM diagnosis codes"
Sexually transmitted diseases | "X is a type of sexually transmitted disease and in the list of ICD-9-CM diagnosis codes"
Personality traits | "X is a type of personality traits"
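Template-based extraction can be sketched with the HuggingFace transformers library as follows; the checkpoint name and the mean-pooling over the term's subword tokens are illustrative assumptions, not necessarily the exact implementation used here:

import torch
from transformers import AutoModel, AutoTokenizer

# Sketch of template-based term embedding extraction. The checkpoint
# name is an assumption (a publicly available clinical BERT), and mean
# pooling over the term's subword tokens is one reasonable choice.
tok = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")

def term_embedding(term, template):
    sentence = template.replace("X", term)
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    # Mean-pool the hidden states of the subword tokens belonging to the
    # term (approximate matching by token id, fine for simple templates).
    term_ids = tok(term, add_special_tokens=False)["input_ids"]
    positions = [i for i, t in enumerate(enc["input_ids"][0].tolist())
                 if t in term_ids]
    return hidden[positions].mean(dim=0)

# vec = term_embedding("anxiety", "X is a type of mental health disorder")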
RESULTS
In this section, we first show the results from the descriptive analysis of the MIMIC-III data set on which the clinical word embeddings are trained. Later, we present the obtained bias scores for the clinical word embeddings.
Descriptive Analysis
The MIMIC-III Clinical Database describes the diagnosis and treatment of 46.520 patients at the Intensive Care Unit of Beth Israel Deaconess Medical Center between 2001 and 2012 [20]. General descriptive statistics of the patients are shown in Figure 2. The statistics are based on the 'Patients', 'Admissions', and 'ICD9 Diagnoses' tables of the database.
Although gender appears to be evenly distributed among patients, the vast majority of patients fall within the categories 'White' (70%) and 'Christian' (48%). Other ethnic groups, such as 'Black' or 'Asian', are only sparsely represented in the data set (8% and 4% respectively). Similarly, non-Christian religions, for example 'Islam' and 'Jewish', occur marginally in the data set. The number of patients on a 'Private' or 'Government' plan appears to be roughly equally distributed, with a small majority of patients on a government plan. Figure 1 visualizes the distribution of patients across two sensitivity features: gender and race. The majority of ethnic groups appear to score below the group average regarding the percentage of female patients (44%). The high percentage of Black female patients (≈ 55%) is notable compared to that of the other minority groups; Asian and Hispanic/Latino females (both ≈ 40%).
The majority of patients arrived at the Intensive Care Unit for emergency or urgent treatment (70%). The remaining patients received a previously planned treatment (13%) or had a newborn (17%). On average, patients were discharged after 10 days. The shortest stay was around 2 hours whereas the longest stay was around 295 days. Important to note is that 88 patients appear to have been discharged before being admitted.
At discharge from the Intensive Care Unit, admission entries are coded based on the diagnosed disease using the ICD9 disease codes for billing purposes [9]. In total, 651.047 admissions were labeled with the matching ICD9 code (see Figure 2). Most of the admissions concerned circulatory system (e.g. heart) diseases (22%), followed by metabolic diseases and immunity disorders (11%), and respiratory system diseases (7%).
Figure 1: Distribution of diagnoses across sensitivity groups
Figure 2: Distribution of diagnoses across all patients
In addition to the overall distribution, one can also compare the diagnoses registered across patients of various sensitivity groups. Figure 3 shows the diagnoses per gender (male/female) and per ethnicity (White, Black, Hispanic/Latino, Asian, unobtained, other). As expected, none of the male patients received a pregnancy diagnosis. Overall, the distributions of diseases across genders appear rather even: most diseases occur in females at rates around 40% to 50%. In contrast, the distributions of diseases across ethnicity groups are heavily skewed towards the majority group, Caucasian (White). Considering the distribution of patients among ethnic groups shown in Table 2, a skewed distribution of diseases among ethnicity groups is also to be expected. Interestingly, the partition of diagnoses per ethnicity group appears to be relatively constant across diseases: most diseases count around 70% of diagnoses on White patients, between 10-15% on Black patients, and around 5% on Asian and Hispanic/Latino patients.
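For reference, tallies of this kind can be reproduced with a few lines of pandas; the table and column names below follow the public MIMIC-III schema (PATIENTS, ADMISSIONS, DIAGNOSES_ICD), but the three-character bucketing of ICD9 codes is a simplification we assume rather than the paper's exact grouping.

```python
# A sketch of the cross-group diagnosis tallies; grouping granularity assumed.
import pandas as pd

patients = pd.read_csv("PATIENTS.csv")        # SUBJECT_ID, GENDER, ...
admissions = pd.read_csv("ADMISSIONS.csv")    # SUBJECT_ID, HADM_ID, ETHNICITY, ...
diagnoses = pd.read_csv("DIAGNOSES_ICD.csv")  # SUBJECT_ID, HADM_ID, ICD9_CODE

df = (diagnoses
      .merge(patients[["SUBJECT_ID", "GENDER"]], on="SUBJECT_ID")
      .merge(admissions[["SUBJECT_ID", "HADM_ID", "ETHNICITY"]],
             on=["SUBJECT_ID", "HADM_ID"]))

# Crude disease bucket: leading three characters of the ICD9 code.
df["CHAPTER"] = df["ICD9_CODE"].astype(str).str[:3]

# Row-normalized shares, cf. Figure 3 (upper: gender; lower: ethnicity).
per_gender = pd.crosstab(df["CHAPTER"], df["GENDER"], normalize="index")
per_ethnicity = pd.crosstab(df["CHAPTER"], df["ETHNICITY"], normalize="index")
```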
Bias in Embeddings
4.2.1 BioWordVec Results. Figure 4 depicts average direct bias scores over the defined medical categories. The mean scores are 0.08, 0.04, and 0.05 for mental disorders, sexually transmitted diseases, and personality traits, respectively. Across all categories, the non-contextualized word embeddings generated by BioWordVec show a heavier gender bias than the contextualized embeddings generated by Clinical-BERT, and mental disorders are the most heavily gender-biased category. The terms with the maximum value in their category are Dyspareunia (MD), HPV (STD), and Unreligious (PD).

4.2.2 Clinical-BERT Results. Figure 5 depicts average direct bias scores per medical category. The results show that all categories carry some degree of gender bias, with mean scores of 0.03, 0.04, and 0.03 for mental disorders, sexually transmitted diseases, and personality traits, respectively. BioWordVec carries a higher bias than Clinical-BERT along all three dimensions. This outcome is in line with the findings of a previous study [10], which primarily focused on gender-occupation bias in general embeddings: utilizing context helps to represent a word better and consequently decreases the bias score. Moreover, interestingly, while Clinical-BERT has the highest bias for STDs, BioWordVec carries the highest bias for mental disorders. This could be explained as follows. Both embeddings are trained on domain-independent as well as clinical resources. Consider a scenario where the term 'autistic' is used as an insulting term towards specific groups or individuals. Such co-occurrences will be reflected in the word representation; non-contextual embeddings cannot differentiate between the different senses in which the same term is used, while contextualized embeddings represent the word more accurately.

Fill-in-the-blank task. In addition to the direct bias experiments, we also performed a fill-in-the-blank task with the Clinical-BERT embedding. A sentence containing a blank word marked with the [MASK] identifier is given to the model, and the model returns the ten most probable fillers. While a previous study [32] used this task to measure bias magnitude by computing log probabilities of gender words, we used it as a tool to illustrate our findings from the previous section. A list of sentences and their corresponding gender probability scores is provided in Table 3. In line with the similarities we found, clinical notes indicating drug/alcohol addiction are more likely to be attributed to a male patient, while a suicide attempt is more likely to be attributed to a female patient.

Table 3: Examples from the fill-in-the-blank task with Clinical-BERT
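A probe of this kind can be run with the standard fill-mask pipeline, assuming the checkpoint ships masked-language-model weights; the sentence below is illustrative, not one of the paper's exact prompts.

```python
# Hypothetical fill-in-the-blank probe; checkpoint and prompt are assumptions.
from transformers import pipeline

fill = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")

for cand in fill("[MASK] was admitted after a suicide attempt.", top_k=10):
    print(cand["token_str"], round(cand["score"], 4))

# Gendered probability mass can then be compared by summing the scores of
# gendered fillers (e.g. "he"/"she") that appear among the top-k candidates.
```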
4.2.3 Bias and statistics. Bias represented in embeddings might be warranted for some illnesses/traits that are proven to be correlated with gender due to social or genetic factors. However, when we analyzed the bias scores per disease for both embeddings, besides the expected outcomes we also observed high bias scores that were not in line with evidence-based medicine. Table 4 lists some of those diseases. For example, while a woman is more likely to have depression than a man [15], the embeddings show bias towards men. And although there are no marked gender differences in the diagnosis rates of disorders like schizophrenia or bipolar disorder [3], both embeddings are slightly biased towards women.

Table 4: List of medical terms. OCD: obsessive-compulsive disorder, APD: antisocial personality disorder, Bias1: direct bias score of Clinical-BERT, Bias2: direct bias score of BioWordVec. Positive scores mean biased toward females; negative scores mean biased toward males.
DISCUSSION AND FUTURE WORK
In this study, we analyzed the bias in both BioWordVec and Clinical-BERT for three medical categories: mental disorders, sexually transmitted diseases, and personality traits. We show that BioWordVec, with an average direct bias score of 0.06, contains a higher bias than Clinical-BERT (average direct bias score of 0.03) along all three dimensions, and especially, with a large margin, for the mental disorders category. Our descriptive analysis shows that males are diagnosed with mental disorders more often than females. Table 3, however, shows that certain mental disorders, for example depression, are closer to females, while drug-related mental disorders are closer to males. This is most likely because most males are diagnosed with drug-related disorders and most females with depression, bipolar, or anxiety-related disorders. These diagnosis ratios, combined with pre-training, create gender-specific biases even when that gender is not the most frequently diagnosed in the category overall.
Although some degree of bias is expected for some medical terms, such as 'anxiety', which is correlated with gender, we also observed some strong biases that do not exist in, or even contradict, the medical literature. For example, Bipolar Disorder, Schizophrenia, and OCD are biased towards females in both embeddings, whereas the medical literature states that there are no marked gender differences in the diagnosis rates of those diseases. Consequently, these ill-founded relationships might cause undesired outcomes in downstream tasks. In future work, we would like to analyze the effect of such incorrect relationships in clinical embeddings on downstream models. Moreover, while we provide an exhaustive analysis of the demographics of the MIMIC-III dataset in this study, we only analyze gender bias in the embeddings. Bias analysis on other sensitive groups, such as race, and on intersectional groups can be studied in future research.
Another research question of interest for subsequent research could also consider genetic factors. As mentioned earlier in Section 4.1, the embeddings are trained on the MIMIC-III dataset, which was obtained from hospitals in Boston, USA. However, mental disorders may be affected by both genetic and cultural factors, meaning that the relation of a disease to gender might not generalize well to other countries. This might cause a diagnosis model trained on those clinical embeddings to perform less fairly and accurately for some cultures and countries. We leave this analysis as future work.
Ethical Considerations
Even though the results have shown contextualized word embeddings to be more effective at reducing bias than their non-contextualized counterpart, there are certain ethical considerations to be discussed regarding the decision to implement machine learning systems in the healthcare domain. Due to the high-stakes nature of the healthcare domain, it could be argued that, even though overall bias is significantly reduced with contextual embeddings, the presence of illnesses with high bias scores alone is reason enough not to involve machine learning systems in the diagnosing process. Overrepresentation of one diagnosis leads to the underrepresentation of another, and this effect could be amplified through a feedback loop. Additionally, using a diagnosing system exclusively for illnesses with low bias scores is unfeasible, because the illness is not yet identified during the diagnosing process. Another ethical concern involves the identification of accurate versus conflicting biases. Even though experts and statistics are most likely to give an accurate reflection of which category an identified bias belongs to, research has found gender bias in educational resources related to healthcare [8] and has found that experts unintentionally hold negative stereotypes [13]. These biases translate to real-life cases, which means that training data and statistics based on real-life cases inherently hold bias. The potential influence of bias within experts and statistics should be kept in mind when determining to what extent certain biases are accurate and others are conflicting.
Figure 3: Diagnoses per gender (upper) and per ethnicity group (lower)
Figure 4: Direct bias per category for BioWordVec. p_traits: personality traits, trans.: transmitted.
Figure 5: Direct bias per category for Clinical-BERT. p_traits: personality traits, trans.: transmitted.
Table 2: Demographics of patients registered in the MIMIC-III data set
https://github.com/tolga-b/debiaswe/blob/master/data/definitional_pairs.json
ACKNOWLEDGEMENTS

We would like to thank Heysem Kaya, Dong Nguyen and Yupei Du for their useful feedback which helped us to improve the quality of our paper.
[1] [n.d.]. 638 Primary Personality Traits.
[2] [n.d.]. Cancer Stat Facts: Female Breast Cancer. https://seer.cancer.gov/statfacts/html/breast.html
[3] [n.d.]. Gender and women's health. World Health Organization. Retrieved 2007-05-13.
[4] [n.d.]. Key Statistics for Breast Cancer in Men. https://www.cancer.org/cancer/breast-cancer-in-men/about/key-statistics.html
[5] [n.d.]. Sexually Transmitted Diseases (STDs). https://www.cdc.gov/std/default.htm
[6] Emily Alsentzer, John R Murphy, Willie Boag, Wei-Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. arXiv preprint arXiv:1904.03323.
[7] American Psychiatric Association. 2013. Diagnostic and statistical manual of mental disorders: DSM-5.
[8] Cynthia Arslanian-Engoren, Amisha Patel, Jianming Fang, David Armstrong, Eva Kline-Rogers, Claire S Duvernoy, and Kim A Eagle. 2006. Symptoms of men and women presenting with acute coronary syndromes. The American Journal of Cardiology 98, 9 (2006), 1177-1181.
[9] American Medical Association. 2021. ICD-9 Codes Lookup. https://www.aapc.com/codes/icd9-codes-range/
[10] Christine Basta, Marta R Costa-Jussà, and Noe Casas. 2019. Evaluating the underlying gender bias in contextualized word embeddings. arXiv preprint arXiv:1904.08783.
[11] Tolga Bolukbasi, Kai-Wei Chang, James Y Zou, Venkatesh Saligrama, and Adam T Kalai. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems 29 (2016), 4349-4357.
[12] Qingyu Chen, Yifan Peng, and Zhiyong Lu. 2019. BioSentVec: creating sentence embeddings for biomedical texts. In 2019 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 1-5.
[13] T. DeAngelis. 2019. How does implicit bias by physicians affect patients' healthcare. Monitor on Psychology 50, 3 (2019), 22.
[14] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[15] Lynn V Doering and Jo-Ann Eastwood. 2011. A literature review of depression, anxiety, and cardiovascular disease in women. Journal of Obstetric, Gynecologic & Neonatal Nursing 40, 3 (2011), 348-361.
[16] Nicholas R. Eaton. 2011. Study Finds Sex Differences in Mental Illness. https://www.apa.org/news/press/releases/2011/08/mental-illness
[17] Peter Glick and Susan T Fiske. 2001. An ambivalent alliance: Hostile and benevolent sexism as complementary justifications for gender inequality. American Psychologist 56, 2 (2001), 109.
[18] Recovery Across Mental Health. 2021. Gender differences in Mental Health. https://ramh.org/guide/gender-differences-in-mental-health/
[19] J. Sánchez, E. Gotuzzo, J. Escamilla, C. Carrillo, I. Phillips, C. Barrios, W. Stamm, R. Ashley, J. Kreiss, and K. Holmes. 1996. Gender Differences in Sexual Practices and Sexually Transmitted Infections among Adults in Lima, Peru. American Journal of Public Health 86, 8 (1996), 1098-1107.
[20] Alistair E W Johnson, Tom J Pollard, Lu Shen, Li-Wei H Lehman, Mengling Feng, Mohammad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. MIMIC-III, a freely accessible critical care database. Scientific Data 3, 1 (2016), 1-9.
[21] J. Kent. 2020. How Machine Learning is Transforming Clinical Decision Support Tools. (March 26, 2020). https://healthitanalytics.com/features/how-machine-learning-is-transforming-clinical-decision-support-tools
[22] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, 3111-3119.
[23] University of Illinois Chicago. 2020. Machine learning in healthcare: examples, tips & resources for implementing into your care practice. (November 13, 2020). https://healthinformatics.uic.edu/blog/machine-learning-in-healthcare/
[24] Anna Kari (World Health Organization). 2021. Gender and Health. https://www.who.int/health-topics/gender#tab=tab_1
[25] World Health Organization. 2021. Mental Health and Substance Use: Gender and women's mental health. https://www.who.int/teams/mental-health-and-substance-use/gender-and-women-s-mental-health
[26] K. H. Mayer, A. K. Srikrishnan, S. Sivaran, C. E. Zelaya, V. F. Go, S. Solomon, M. E. Bentley, D. D. Celentano, S. Panchanadeswaran, and S. C. Johnson. 2006. Gender differences in the prevalence of sexually transmitted infections and genital symptoms in an urban setting in southern India. Sexually Transmitted Infections 82, 6 (2006), 491-495. https://doi.org/10.1136/sti.2006.020768
[27] Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532-1543.
[28] Matthew E Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. arXiv preprint arXiv:1802.05365.
[29] A. Pogiatzis. 2019. NLP: Contextualized word embeddings from BERT. (March 20, 2019). https://towardsdatascience.com/nlp-extract-contextualized-word-embeddings-from-bert-keras-tf-67ef29f60a7b
[30] Vandana K. Madkan, Angela A. Giancola, Karan K. Sra, and Stephen K. Tyring. 2006. Sex Differences in the Transmission, Prevention, and Disease Manifestations of Sexually Transmitted Diseases. Archives of Dermatological Research 142 (2006), 365-370.
[31] Bin Wang, Angela Wang, Fenxiao Chen, Yuncheng Wang, and C-C Jay Kuo. 2019. Evaluating word embedding models: Methods and experimental results. APSIPA Transactions on Signal and Information Processing 8 (2019).
[32] Haoran Zhang, Amy X Lu, Mohamed Abdalla, Matthew McDermott, and Marzyeh Ghassemi. 2020. Hurtful words: quantifying biases in clinical contextual word embeddings. In Proceedings of the ACM Conference on Health, Inference, and Learning, 110-120.
| [
"https://github.com/tolga-b/debiaswe/blob/master/data/definitional_pairs.json"
] |
[
"Human Associations Help to Detect Conventionalized Multiword Expressions",
"Human Associations Help to Detect Conventionalized Multiword Expressions"
] | [
"Natalia Loukachevitch \nLomonosov Moscow State University Leniskie Gory\n\n",
"Anastasia Gerasimova anastasiagerasimova432@gmail.com \nLomonosov Moscow State University Leniskie Gory\n1 MoscowMoscowRussia, Russia\n"
] | [
"Lomonosov Moscow State University Leniskie Gory\n",
"Lomonosov Moscow State University Leniskie Gory\n1 MoscowMoscowRussia, Russia"
] | [
"Proceedings of Recent Advances in Natural Language Processing"
] | In this paper we show that if we want to obtain human evidence about conventionalization of some phrases, we should ask native speakers about associations they have to a given phrase and its component words. We have shown that if component words of a phrase have each other as frequent associations, then this phrase can be considered as conventionalized. Another type of conventionalized phrases can be revealed using two factors: low entropy of phrase associations and low intersection of component word and phrase associations. The association experiments were performed for the Russian language. | 10.26615/978-954-452-049-6_061 | [
"https://doi.org/10.26615/978-954-452-049-6_061"
] | 22,510,714 | 1709.03925 | 0920e4c6b50cae1e7a763a1e01894094a6a5ebcc |
Human Associations Help to Detect Conventionalized Multiword Expressions
Sep 4-6 2017
Natalia Loukachevitch
Lomonosov Moscow State University, Leninskie Gory 1, Moscow, Russia
Anastasia Gerasimova anastasiagerasimova432@gmail.com
Lomonosov Moscow State University, Leninskie Gory 1, Moscow, Russia
Human Associations Help to Detect Conventionalized Multiword Expressions
Proceedings of Recent Advances in Natural Language Processing
Recent Advances in Natural Language Processing, Varna, Bulgaria, Sep 4-6 2017. 10.26615/978-954-452-049-6_061
In this paper we show that if we want to obtain human evidence about conventionalization of some phrases, we should ask native speakers about associations they have to a given phrase and its component words. We have shown that if component words of a phrase have each other as frequent associations, then this phrase can be considered as conventionalized. Another type of conventionalized phrases can be revealed using two factors: low entropy of phrase associations and low intersection of component word and phrase associations. The association experiments were performed for the Russian language.
Introduction
A lot of approaches have been proposed for the automatic extraction of idioms, collocations, or multiword terms from texts as potential candidates for inclusion in lexical or terminological resources (Bonial et al., 2014; Gelbukh and Kolesnikova, 2014; Pecina, 2010).
However, developers of computational resources need clear guidelines for the introduction of phrases into their resources. Special instructions on introducing multiword terms exist for constructing information-retrieval thesauri (ANSI/NISO, 2005). Developers of WordNet-like thesauri, a very popular type of resource, discuss the problem of introducing multiword expressions into their resources in several works (Vincze and Almasi, 2014). For example, it is supposed that wordnets should include only lexicalized concepts as synsets (Miller, 1998). However, Agirre et al. (2006) stress that the boundaries of lexicalization are very difficult to draw. Bentivogli and Pianta (2004) argue that there is a necessity to include non-lexicalized phrases in wordnets.
Multiword expressions comprise a broad scope of phrases including idiomatic expressions, noun compounds, technical terms, proper names, verb-particle and light verb constructions, conventionalized phrases, and others (Calzolari et al., 2002; Sag et al., 2002; Baldwin and Kim, 2010). For some of these constructions, such as idioms, it is evident that they should be included in computational lexicons. But for many other expressions, for example conventionalized phrases, it is not easy to decide on the necessity of their inclusion. To distinguish a multiword expression, it is important to analyze whether it has any "idiosyncrasies", which can be lexical, syntactic, semantic, or statistical.
Conventionalized phrases have statistical idiosyncrasy, and usually only one approach is proposed in the literature to distinguish such phrases from other compositional phrases. This is the so-called substitutability test, which checks whether the components of a phrase can be easily substituted with their synonyms (Sag et al., 2002; Farahmand et al., 2015; Farahmand and Henderson, 2016; Pearce, 2001; Senaldi et al., 2016).
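Operationally, the substitutability test boils down to comparing corpus frequencies of a phrase against its synonym-substituted variants. The sketch below assumes a hypothetical phrase_count lookup over some background corpus; both the helper name and the smoothing choice are ours, not details from the cited works.

```python
# Toy substitutability test; `phrase_count` and the decision ratio are assumed.
from itertools import product

def substitutability_ratio(words, synonyms, phrase_count):
    """Frequency of the phrase over that of its most frequent variant with
    one or more components replaced by near-synonyms."""
    original = phrase_count(" ".join(words))
    variant_counts = [phrase_count(" ".join(combo))
                      for combo in product(*[[w] + syns
                                             for w, syns in zip(words, synonyms)])
                      if list(combo) != list(words)]
    return original / (max(variant_counts, default=0) + 1)  # +1 smoothing

# e.g. substitutability_ratio(["weather", "forecast"],
#                             [["climate"], ["prediction"]], phrase_count)
# A large ratio hints that the phrase is conventionalized.
```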
In this paper, we show that there are at least two more types of statistical idiosyncrasy (and related tests) to distinguish conventionalized expressions:
• association idiosyncrasy when components of a phrase are highly associated with each other, and
• relational idiosyncrasy when a phrase has lexical associations that significantly differ from the associations of its component words; usually it means that the phrase denotes a specific entity or process with a set of its own properties and relations.
We provide evidence for these types of phrase idiosyncrasy through association experiments in Russian, in which we asked Russian native speakers what associations they had for phrases and their component words. We have found that the human association experiment is a very efficient tool for detecting conventionalized phrases with high accuracy. To the best of our knowledge, this is the first attempt to use human associations for distinguishing conventionalized phrases.
The structure of the paper is as follows. In Section 2 we consider types of phrase idiosyncrasy. Section 3 describes the specifics of the RuThes thesaurus, from which we take phrases for the experiments. Section 4 presents the association experiment and its results. In Section 5 we test embedding models on their capability to distinguish conventionalized phrases. Section 6 reviews related work on annotating the compositionality, non-compositionality, and conventionalization of noun phrases.
Types of Idiosyncrasy of Multiword Expressions
Multiword expressions are phrases that have some specificity (idiosyncrasy). Because of this, it is useful to collect them and store them in lexicons and thesauri (Calzolari et al., 2002; Sag et al., 2002; Baldwin and Kim, 2010). The idiosyncrasy can be lexical, when a component of a phrase appears only within this phrase (Baldwin and Kim, 2010). It can be syntactic, when the syntactic behavior of a phrase differs from the usual (for example, fixed word order). Semantic idiosyncrasy is revealed when the meaning of a phrase cannot be inferred from the meanings of its components. If a phrase has one of the above-mentioned types of idiosyncrasy, it can be called a lexicalized expression (Sag et al., 2002; Baldwin and Kim, 2010).
Statistical idiosyncrasy presupposes that the components of a phrase co-occur more often than expected by chance. Besides, the frequency of a phrase with statistical idiosyncrasy is much higher than the frequency of the phrase with one component changed to a near-synonym (weather forecast vs. weather prediction), as shown by the substitutability test (Sag et al., 2002; Farahmand and Henderson, 2016). Phrases with statistical idiosyncrasy (often called conventionalized phrases) can be syntactically and semantically compositional.
In many cases conventionalized phrases are difficult to distinguish. For example, the often-mentioned conventionalized phrase traffic lights looks fully compositional. However, if we examine the meaning of this phrase, we can see that the denoted entity can be categorized as a road facility; it has signals; it is usually installed at road intersections; it is needed for regulating road traffic, etc. This means that the phrase traffic lights has thesaurus relations with the corresponding words (facility, road, signals, regulation) that cannot be inferred from the meanings of its component words traffic and lights.
A lot of similar examples can be found. The compositional phrase seat belt has a relation to the safety concept. Food courts are usually located in shopping centers, and therefore the compositional phrase food court has a relation to the shopping center concept, etc. These relations can be very useful in NLP applications such as textual entailment.
Thus, we can suppose that conventionalized phrases have not only statistical idiosyncrasy but also relational idiosyncrasy, which can be revealed more easily than via the substitutability test. The same idiosyncrasy can be found in unclear cases of possibly lexicalized expressions.
In (Mel'čuk, 2012) so-called quasi-idioms are discussed. According to Mel'čuk, a phrase AB is a quasi-idiom, or weak idiom, iff its meaning 1) includes the meaning of both of its lexical components, neither as the semantic pivot, and 2) includes an additional meaning C as its semantic pivot. Mel'čuk (2012) gives the example of barbed wire, which is an obstacle even though neither barbed nor wire is an obstacle. Thus, it seems that the semantic pivot in this case is a hypernym relation that cannot be inferred from the phrase's component words. This means that the quasi-idiom is a subtype of relational idiosyncrasy.
In this paper we show that this relational idiosyncrasy can be detected in association experiments with native speakers. Besides, we can also reveal the association idiosyncrasy of conventionalized phrases in these experiments.
RuThes Thesaurus as a Source of Conventionalized Expressions

For the present work, we utilized multiword expressions included in the Russian-language thesaurus RuThes 1 (Loukachevitch and Dobrov, 2014). The RuThes thesaurus is a linguistic ontology for natural language processing, i.e. an ontology where the majority of concepts are introduced on the basis of actual language expressions.
RuThes has considerable similarities with WordNet: the inclusion of concepts based on senses of real text units, representation of lexical senses, detailed coverage of word senses. At the same time, the differences include attachment of different parts of speech to the same concepts, formulating names of concepts, attention to multiword expressions, the set of conceptual relations, etc.
In particular, the developers of the RuThes thesaurus have special rules for including phrases that appear compositional into the thesaurus. Such phrases are introduced if they have specificity in relations with other single words and/or expressions (Loukachevitch and Lashevich, 2016). The following subtypes of these expressions can be considered:
• A phrase is a synonym of a single word; for example, земельный участок (landing lot) is a synonym of the word земля (land); or a phrase has a frequent abbreviation: заработная плата - зарплата (employee wages);

• A phrase has a synonymous phrase, and this fact cannot be simply inferred from the components of the phrase: мобильный телефон (mobile phone) - сотовый телефон (cell phone);

• A phrase generalizes several single words. Phrases such as транспортное происшествие (transport accident) or учебное заведение (educational institution) often look compositional, but they have a very important knowledge-representation function: they gather together similar concepts;

• A phrase has relations that do not follow from its component words. For example, the compositional phrase дорожное движение (road traffic) has numerous relations with other phrases that cannot be inferred from its components, for example, hyponyms (left-hand traffic, one-way traffic) and related concepts (car accident, traffic jam).
Association Experiment
For the experiment, we took two-word noun phrases (Adjective + Noun and Noun + Noun-in-Genitive) that have high frequency in Russian newswire text collections. The multiword expressions formed two main groups. The first group (the thesaurus group) included multiword expressions from the RuThes thesaurus. We chose phrases that either look fully compositional (increase of prices) or in which one of the components is used in a known (i.e., described in dictionaries) metaphoric sense. This group contained 15 phrases. The other group comprised fully compositional noun phrases not included in the thesaurus, for example, end of January, mighty earthquake, result of work, etc. The non-thesaurus group contained 36 phrases.
We asked respondents (mainly university students) to think of single-word associations to the noun phrases. In a separate experiment, we collected associations to the component words of the same phrases. We wanted to understand whether the collected associations can serve as a basis for distinguishing thesaurus phrases from non-thesaurus phrases (and, as a consequence, conventionalized phrases from non-conventionalized ones). Twenty-six native speakers gave their associations for the thesaurus phrases, and twenty-nine respondents participated in the experiment with the non-thesaurus phrases. Forty-seven people gave associations for single words.
The study was conducted via Google Forms. The respondents were asked to provide single-word associations; however, some participants could think only of multiword expressions, and such associations were also taken into account. Table 1 contains examples of the obtained associations and their frequencies for some thesaurus phrases.
From the associations obtained, we calculated the following characteristics (Tables 2, 3):
• entropy of answers for single words and phrases (currently, only entropy of phrase associations was found useful and included in the tables);
• intersection between associations of component words and phrase associations (columns Ph1 and Ph2 in Tables 2, 3); and
• the number of times one component word served as an association of the other component word (columns A12 and A21 in Tables 2, 3).

Table 2 contains the results for the thesaurus phrases, and Table 3 shows partial results for the non-thesaurus phrases.
We can see that for thesaurus phrases, the components are associated with each other more often than for non-thesaurus phrases. The average value of such associations for thesaurus phrases is 10 times greater than for non-thesaurus phrases. For some thesaurus phrases, both components are highly connected with the other component. Within the non-thesaurus phrases, such frequent mutual associations were not found. Therefore, we think that mutual associations between phrase components are an important sign of phrase conventionalization. It seems that such phrases are stored as single units in human memory. In our case such conventionalized phrases included: программное обеспечение (software program), земельный участок (landing lot), квадратный метр (square meter), электронная почта (electronic mail), заработная плата (employee wages), мобильный телефон (mobile phone), лента новостей (news feed), and торговый центр (shopping center). Besides, we found that the average level of entropy of phrase associations is much higher for non-thesaurus phrases (4.07) than for thesaurus phrases (3.24). This means that the associations of thesaurus phrases are more concentrated, more motivated by the phrase. At the same time, some clearly compositional non-thesaurus phrases also have fairly low entropy of associations, for example, пресс-служба администрации (press service of the administration).
We can also see that the phrases differ in the number of intersections between the associations obtained for a phrase and for its components. It seems natural that the already found conventionalized phrases have numerous intersections of this kind (Table 2) because the phrase and its components are closely related to each other.
On the contrary, other thesaurus phrases have a relatively small number of such intersections. This means that these thesaurus phrases more often evoke their own associations. For example, the phrase повышение цен (increase of prices) has frequent associations with the words инфляция (inflation) (16 of 25) and кризис (crisis), which were not mentioned as associations for its component words. On average, the intersection between the associations of a phrase and those of its components is four times smaller for non-thesaurus phrases than for thesaurus phrases.
It can also be seen that non-thesaurus phrases with low entropy of associations can have large numbers of intersections between the component associations and the phrase associations. In such cases, the low entropy of the phrase associations is mainly determined by its components, for example, by their probable syntactic dependencies. Only one of the non-thesaurus phrases has both low entropy of phrase associations and a small number of intersections between the phrase and component associations at the same time: начало года (beginning of the year). It is highly associated with calendar months: January and September. For thesaurus phrases, a relatively high number of intersections between the phrase and component associations was revealed for the most arguable thesaurus phrases: транспортное происшествие (transport accident) and температура воздуха (air temperature).
Thus, we can suppose that if a phrase has a low level of entropy of associations together with a small number of shared associations between the phrase and its components, then it is also conventionalized.
We can introduce the threshold as 0.8*MaxEntropy of the answers, where MaxEntropy is the maximal entropy obtained if all respondents give distinct, equiprobable answers, i.e., log2 of the number of respondents. In the current experiment, the threshold equals 0.8*log2(26) ≈ 3.76 for thesaurus phrases and 0.8*log2(29) ≈ 3.89 for non-thesaurus phrases. In our experiment, such conventionalized phrases include учебное заведение (educational institute), повышение цен (increase in prices), дорожное движение (road traffic), главный герой (main hero), and медицинская помощь (medical aid).
As a result, we can say that we have found two signs of phrase conventionalization in the association experiment described:
• component words are frequently associated with each other, and
• associations of a phrase have both low entropy (less than 0.8*MaxEntropy) and a low level of intersection between component and phrase associations (less than 20%).
Using all three factors (association of component words with each other, entropy of phrase associations, and intersection of component-word associations with phrase associations), it is possible to differentiate thesaurus phrases from non-thesaurus phrases with greater than 94% accuracy.
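A minimal sketch of these three factors is given below, assuming the association responses are stored as Counters over answer strings; the mutual-association cutoff of 5 is our illustrative choice, while the 0.8*MaxEntropy and 20% intersection thresholds follow the values reported above.

```python
# Sketch of the three conventionalization factors; cutoff of 5 is assumed.
from collections import Counter
from math import log2

def entropy(counts):
    total = sum(counts.values())
    return -sum(c / total * log2(c / total) for c in counts.values())

def looks_conventionalized(phrase_assoc, w1_assoc, w2_assoc,
                           w1, w2, n_respondents):
    # Factor 1: components frequently name each other as associations.
    mutual = w1_assoc[w2] + w2_assoc[w1]
    # Factors 2-3: concentrated phrase associations that barely overlap
    # with the associations of the component words.
    max_entropy = log2(n_respondents)   # all answers distinct and equiprobable
    low_entropy = entropy(phrase_assoc) < 0.8 * max_entropy
    shared = sum((phrase_assoc & (w1_assoc + w2_assoc)).values())
    low_overlap = shared / sum(phrase_assoc.values()) < 0.2
    return mutual >= 5 or (low_entropy and low_overlap)
```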
It is interesting to compare the current results with smaller numbers of associations. To this end, we took the first 15 associations obtained for single words and phrases. The same above-mentioned thesaurus phrases have frequent mutual associations between components (that is, they have association idiosyncrasy).
The phrases медицинская помощь (medical aid) and температура воздуха (air temperature) had association entropy greater than 0.8*MaxEntropy. Only two non-thesaurus phrases had both low entropy (less than 0.8*MaxEntropy) and a low level of intersection between the associations of the phrase and its components: финал лиги (league final) and начало года (beginning of the year). As a result, in this smaller experiment, the obtained associations can distinguish thesaurus phrases with accuracy greater than 92%.
Detecting the Conventionalized Expressions with Distributional Models
We compared the results of the association experiment with the results of distributional models. In previous work, it was supposed that non-compositional phrases can be distinguished by comparing the phrase's distributional vector with the distributional vectors of its components: the similarity is expected to be lower for non-compositional phrases (Cordeiro et al., 2016a; Gharbieh et al., 2016). We used a Russian news collection (0.45B tokens) and generated phrase and word embeddings with the word2vec tool. For the phrases under consideration, we calculated the cosine similarity between the phrase vector v(w1w2) and the sum of the normalized vectors of the phrase components v(w1 + w2), following the formula from (Cordeiro et al., 2016a):
$$v(w_1 + w_2) = \frac{v(w_1)}{|v(w_1)|} + \frac{v(w_2)}{|v(w_2)|}$$
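In code, the comparison looks roughly as follows with gensim; the training corpus, the convention of joining phrase components with an underscore into a single token, and the hyperparameters are our assumptions rather than the paper's exact setup.

```python
# Sketch of the phrase-vs-components similarity; phrase tokenization assumed.
import numpy as np
from gensim.models import Word2Vec

# corpus_sentences: pre-tokenized sentences in which each candidate phrase
# has been rewritten as a single "w1_w2" token (assumed preprocessing).
model = Word2Vec(corpus_sentences, vector_size=200, window=3, min_count=5)

def compositionality(w1, w2):
    phrase = model.wv[f"{w1}_{w2}"]
    combined = (model.wv[w1] / np.linalg.norm(model.wv[w1])
                + model.wv[w2] / np.linalg.norm(model.wv[w2]))
    return float(np.dot(phrase, combined)
                 / (np.linalg.norm(phrase) * np.linalg.norm(combined)))
```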
To evaluate different parameter sets, we sorted all phrases in ascending order of their similarity scores. We wanted to check whether the thesaurus phrases with idiosyncrasy obtain lower word2vec similarity values than non-thesaurus phrases without any specificity. We used MAP (mean average precision) to evaluate the quality of the ordering.
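With a single ordering there is one query, so MAP reduces to average precision over the ranked list, treating thesaurus phrases as the positives; a sketch:

```python
# Average precision over phrases sorted by ascending similarity, with
# thesaurus phrases as positives (expected to appear early in the ranking).
def average_precision(is_thesaurus_sorted):
    hits, precisions = 0, []
    for rank, positive in enumerate(is_thesaurus_sorted, start=1):
        if positive:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(hits, 1)
```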
We experimented with different word2vec parameters and evaluated them with MAP on our data. We found that even the best word2vec model (200 dimensions, window size 3) achieved a quite low MAP value (0.391), which means that it is very difficult for current embedding models to differentiate the thesaurus and non-thesaurus phrases in our experiment.
We can also calculate MAP for the same phrase list ordered according to increasing entropy of phrase associations. Here we obtain a MAP of 0.642. Thus, the entropy of human associations, even without accounting for additional factors, predicts thesaurus phrases significantly better than the embedding models.
Related Work
The annotation of noun compounds for compositionality/non-compositionality has been studied in several works (Cordeiro et al., 2016b; Reddy et al., 2011; Ramisch et al., 2016). Reddy et al. (2011) created a set of 90 noun compounds. The phrases were taken from WordNet. For each compound, the following types of tasks were given: a judgement on how literal the phrase is, and a judgement on how literal each noun is within the compound. They used 30 turkers to obtain judgements on compound compositionality in each task. Ramisch et al. (2016) asked respondents about the degree to which the meaning of an expression follows from its components: separately from each component and from both components in total. The authors stress that such indirect annotation provides reliable and stable data. However, this approach was confronted with difficulties concerning the inconsistency of the answers in some cases. For example, English speakers agreed on the level of head and head + modifier compositionality for the phrase dirty word, but disagreed when judging the modifier: it was fully idiomatic for some, but others thought that the phrase just contained an uncommon sense of dirty. Maziarz et al. (2015) try to formulate a procedural definition of multiword lexical units that should be included in the Polish wordnet so that lexicographers can apply these principles consistently. They then asked linguists to classify phrases using this definition into three categories: multiword lexical unit, not a multiword lexical unit, and don't know. They concluded that a group of 5-7 linguists is able to decide whether multiword lexical units should be introduced into a wordnet with appropriate agreement. However, this approach was considered too expensive.
In another experiment, Piasecki et al. (2015) directed linguists to answer questions based on non-compositionality criteria for phrases, including metaphoric character, hyponymy toward the syntactic head, ability to be paraphrased, non-separability, fixed word order, terminological register, etc. The answers were then used to train a decision-tree algorithm to predict the inclusion or non-inclusion of an expression in the Polish wordnet. However, the obtained decision trees were different for the various phrase sets under analysis. Farahmand et al. (2015) describe the annotation of the non-compositionality and conventionalization of noun compounds. They asked the annotators to make binary decisions about the compositionality of phrases. Compositional compounds were further annotated as conventionalized or non-conventionalized. A compound was considered conventionalized if neither of its constituents can be substituted with a near-synonym. Sometimes the decision was difficult because such substituted phrases could really exist (floor space vs. floor area).
To annotate the compounds, five experts were hired. In this way, the authors (Farahmand et al., 2015) tried to avoid problems with crowdsourcing, which can lead to flaws in the results (Reddy et al., 2011). The authors stress that identifying conventionalization is not a trivial task and that human agreement on this property can be quite low. Examples of compositional but conventionalized phrases included cable car, food court, speed limit, etc. The task of that study, distinguishing conventionalized from non-conventionalized phrases among compositional compounds, is the closest to our work.
For Russian there are two large resources of human associations. The well-known Russian Association Dictionary (Karaulov et al., 1994) is currently obsolete. Another association-oriented project, Sociation.org 2, has collected a large number of current Russian associations, but it does not have associations for the phrases under analysis.

2 http://sociation.org/
Practical conclusions from the above-described experiments and related work are as follows:
• In annotating the compositionality/non-compositionality of multiword expressions by crowdsourcing, as in (Cordeiro et al., 2016b; Reddy et al., 2011; Ramisch et al., 2016), it is also useful to ask respondents about their associations for the phrase and its components to detect relational idiosyncrasy,
• In expert analysis of multiword expressions for inclusion in computational resources, as in (Farahmand et al., 2015), it is useful to ask experts about additional lexical or conceptual relations that the phrase has and that do not follow from the phrase components,
• In computational approaches to extracting non-compositional multiword expressions, it is useful to compare the contexts in which a phrase occurs with the contexts of its component words, trying to detect weirdness in the phrase context.
Conclusion
In this paper, we have shown that if we want to obtain human evidence about the conventionalization of phrases, we can ask native speakers about the associations they have for a phrase and its component words. We have found that there are two forms in which conventionalized phrases manifest themselves. First, we can consider a phrase conventionalized if its component words have frequent associations to each other. The second type of conventionalized phrase can be revealed on the basis of two factors: low entropy of phrase associations and a low number of intersections between component-word and phrase associations. These three factors allow predicting conventionalized phrases with high accuracy. We have also shown that existing embedding models distinguish conventionalized phrases from non-conventionalized ones significantly worse.
In our opinion, developers of thesauri should consider the relational specificity (idiosyncrasy) of multiword expressions, which can help them decide on the inclusion of specific phrases in their resources. Unusual word co-occurrences with the phrase, in comparison with its component contexts, can be considered an additional factor for detecting conventionalized expressions in computational approaches.
Table 1: Examples of the most frequent associations for thesaurus phrases and their components
Table 2: Results of the association experiments for the thesaurus phrases

Phrase | A12 | A21 | Ph1 | Ph2 | Entr
транспортное происшествие (transport accident) | 0 | 7 | 0 | 6 | 2.16
учебное заведение (education institute) | 1 | 8 | 1 | 1 | 2.52
программное обеспечение (software program) | 13 | 14 | 6 | 1 | 2.77
повышение цен (increase in prices) | 0 | 0 | 0 | 0 | 2.85
земельный участок (landing lot) | 38 | 13 | 0 | 14 | 2.89
квадратный метр (square meter) | 10 | 20 | 0 | 1 | 3.22
электронная почта (electronic mail) | 6 | 12 | 3 | 4 | 3.27
дорожное движение (road traffic) | 0 | 2 | 0 | 0 | 3.33
заработная плата (employee wage) | 18 | 10 | 8 | 2 | 3.42
главный герой (main hero) | 5 | 1 | 0 | 3 | 3.56
медицинская помощь (medical aid) | 0 | 5 | 0 | 4 | 3.58
торговый центр (shopping center) | 26 | 0 | 1 | 0 | 3.62
лента новостей (news feed) | 18 | 0 | 1 | 1 | 3.79
мобильный телефон (mobile phone) | 26 | 12 | 3 | 7 | 3.81
температура воздуха (air temperature) | 4 | 0 | 6 | 1 | 3.81
Average | 11 | 6.27 | 1.93 | 2.93 | 3.24
Table 3: Results of the association experiments for the non-thesaurus phrases
1 http://www.labinform.ru/pub/ruthes/index_eng.htm

Thus, phrases from RuThes without evident non-compositionality were selected for the association experiment in order to understand the correlations between the choice of phrases made by experts and the associations of native speakers.
Acknowledgments. This study is supported by the Russian Science Foundation (project N 16-18-02074).
Eneko Agirre, Izaskun Aldezabal, and Eli Pociello. 2006. Lexicalization and multiword expressions in the Basque wordnet. In Proceedings of the Third International WordNet Conference, pages 131-138.
ANSI/NISO. 2005. Z39.19. Guidelines for the Construction, Format and Management of Monolingual Thesauri. ANSI/NISO.
Timothy Baldwin and Su Nam Kim. 2010. Multiword expressions. In Handbook of Natural Language Processing, Second Edition, Chapman and Hall/CRC, pages 267-292.
Luisa Bentivogli and Emanuele Pianta. 2004. Extending WordNet with syntagmatic information. In Proceedings of the Second Global WordNet Conference, pages 47-53.
Claire Bonial, Meredith Green, Jenette Preciado, and Martha Palmer. 2014. An approach to take multiword expressions. In Proceedings of the 10th Workshop on Multiword Expressions, pages 94-98.
Nicoletta Calzolari, Charles J. Fillmore, Ralph Grishman, Nancy Ide, Alessandro Lenci, Catherine MacLeod, and Antonio Zampolli. 2002. Towards best practice for multiword expressions in computational lexicons. In Proceedings of LREC-2002.
Silvio Cordeiro, Carlos Ramisch, Marco Idiart, and Aline Villavicencio. 2016a. Predicting the compositionality of nominal compounds: Giving word embeddings a hard time. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1986-1997.
Silvio Cordeiro, Carlos Ramisch, and Aline Villavicencio. 2016b. Filtering and measuring the intrinsic quality of human compositionality judgments. In ACL 2016, pages 32-37.
Meghdad Farahmand and James Henderson. 2016. Modeling the non-substitutability of multiword expressions with distributional semantics and a log-linear model. In Proceedings of the 12th Workshop on Multiword Expressions, ACL 2016, pages 61-66.
Meghdad Farahmand, Aaron Smith, and Joakim Nivre. 2015. A multiword expression data set: Annotating non-compositionality and conventionalization for English noun compounds. In Proceedings of NAACL-HLT, pages 29-33.
Alexander Gelbukh and Olga Kolesnikova. 2014. Multiword expressions in NLP: General survey and a special case of verb-noun constructions. In Computational Linguistics: Concepts, Methodologies, Tools, and Applications, pages 178-197.
Waseem Gharbieh, Virendra C. Bhavsar, and Paul Cook. 2016. A word embedding approach to identifying verb-noun idiomatic combinations, pages 112-118.
Yuri Karaulov, Yu. Sorokin, E. Tarasov, N. Ufimtseva, and G. Cherkasova. 1994. Russian Association Dictionary.
Natalia Loukachevitch and Boris Dobrov. 2014. RuThes linguistic ontology vs. Russian wordnets. In Proceedings of the Global WordNet Conference GWC-2014, pages 154-162.
Natalia Loukachevitch and German Lashevich. 2016. Multiword expressions in Russian thesauri RuThes and RuWordNet. In Proceedings of AINL FRUCT 2016, pages 66-71.
Marek Maziarz, Stan Szpakowicz, and Maciej Piasecki. 2015. A procedural definition of multi-word lexical units. In Proceedings of the Recent Advances in NLP Conference RANLP-2015, pages 427-435.
Igor Mel'čuk. 2012. Phraseology in the language, in the dictionary, and in the computer. Yearbook of Phraseology 3(1):31-56.
George A. Miller. 1998. Nouns in WordNet. In WordNet: An Electronic Lexical Database, pages 24-45.
Darren Pearce. 2001. Synonymy in collocation extraction. In Proceedings of the Workshop on WordNet and Other Lexical Resources, Second Meeting of the North American Chapter of the Association for Computational Linguistics, pages 41-46.
Pavel Pecina. 2010. Lexical association measures and collocation extraction. Language Resources and Evaluation 44(1-2):137-158.
Maciej Piasecki, Michal Wendelberger, and Marek Maziarz. 2015. Extraction of the multi-word lexical units in the perspective of the wordnet expansion. In RANLP-2015, pages 512-520.
Carlos Ramisch, Silvio Cordeiro, Leonardo Zilio, Marco Idiart, Aline Villavicencio, and Rodrigo Wilkens. 2016. How naked is the naked truth? A multilingual lexicon of nominal compound compositionality. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 114-133.
Siva Reddy, Diana McCarthy, and Suresh Manandhar. 2011. An empirical study on compositionality in compound nouns. In IJCNLP, pages 210-218.
Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: A pain in the neck for NLP. In Proceedings of the International Conference on Intelligent Text Processing and Computational Linguistics, CICLING-2002, Springer Berlin Heidelberg, pages 1-15.
Marco S. G. Senaldi, Gianluca E. Lebani, and Alessandro Lenci. 2016. Lexical variability and compositionality: Investigating idiomaticity with distributional semantic models, pages 21-31.
Veronika Vincze and Attila Almasi. 2014. Non-lexicalized concepts in wordnets: A case study of English and Hungarian. In Proceedings of the Global WordNet Conference GWC-2014. Global WordNet Association.
| [] |
[
"Does BERT Learn as Humans Perceive? Understanding Linguistic Styles through Lexica",
"Does BERT Learn as Humans Perceive? Understanding Linguistic Styles through Lexica"
] | [
"Shirley Anugrah Hayati shirley@gatech.edu \nUniversity of Pennsylvania Georgia Institute of Technology University of Minnesota\n\n",
"Dongyeop Kang dongyeop@umn.eduungar@cis.upenn.edu \nUniversity of Pennsylvania Georgia Institute of Technology University of Minnesota\n\n",
"Lyle Ungar \nUniversity of Pennsylvania Georgia Institute of Technology University of Minnesota\n\n"
] | [
"University of Pennsylvania Georgia Institute of Technology University of Minnesota\n",
"University of Pennsylvania Georgia Institute of Technology University of Minnesota\n",
"University of Pennsylvania Georgia Institute of Technology University of Minnesota\n"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | People convey their intention and attitude through the linguistic styles of the text that they write. In this study, we investigate lexicon usage across styles through two lenses: human perception and machine word importance, since words differ in the strength of the stylistic cues that they provide. To collect labels of human perception, we curate a new dataset, HUMMINGBIRD, on top of benchmark style datasets. We have crowd workers highlight the representative words in the text that make them think the text has the following styles: politeness, sentiment, offensiveness, and five emotion types. We then compare these human word labels with word importance derived from a popular fine-tuned style classifier like BERT. Our results show that BERT often finds content words not relevant to the target style as important words used in style prediction, but humans do not perceive them the same way, even though for some styles (e.g., positive sentiment and joy) human- and machine-identified words share significant overlap. | 10.18653/v1/2021.emnlp-main.510 | [
"https://www.aclanthology.org/2021.emnlp-main.510.pdf"
] | 237,433,537 | 2109.02738 | f321858560c1a3ef8069d03b0229ce9718813f40 |
Does BERT Learn as Humans Perceive? Understanding Linguistic Styles through Lexica
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 7-11, 2021.
Shirley Anugrah Hayati shirley@gatech.edu
University of Pennsylvania Georgia Institute of Technology University of Minnesota
Dongyeop Kang dongyeop@umn.edu
University of Pennsylvania Georgia Institute of Technology University of Minnesota
Lyle Ungar ungar@cis.upenn.edu
University of Pennsylvania Georgia Institute of Technology University of Minnesota
Does BERT Learn as Humans Perceive? Understanding Linguistic Styles through Lexica
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, November 7-11, 2021.
People convey their intention and attitude through the linguistic styles of the text that they write. In this study, we investigate lexicon usage across styles through two lenses: human perception and machine word importance, since words differ in the strength of the stylistic cues that they provide. To collect labels of human perception, we curate a new dataset, HUMMINGBIRD, on top of benchmark style datasets. We have crowd workers highlight the representative words in the text that make them think the text has the following styles: politeness, sentiment, offensiveness, and five emotion types. We then compare these human word labels with word importance derived from a popular fine-tuned style classifier like BERT. Our results show that BERT often finds content words not relevant to the target style as important words used in style prediction, but humans do not perceive them the same way, even though for some styles (e.g., positive sentiment and joy) human- and machine-identified words share significant overlap.
Introduction
To express their interpersonal goals and attitudes, people often use different styles in their communication. The style of a text can be as important as its literal meaning for effective communication (Hovy, 1987). NLP researchers have built many models to identify different styles in text, including politeness (Danescu-Niculescu-Mizil et al., 2013), emotion (Alm et al., 2005; Mohammad et al., 2018), and sentiment (Socher et al., 2013). Recently, transformer-based (Vaswani et al., 2017) pretrained language models, such as BERT (Devlin et al., 2019), have achieved impressive performance on many NLP tasks, including stylistic studies. However, explaining what these deep learning models learn remains a challenge. Thus, there is a growing effort to understand how these models behave (Rogers et al., 2021; Rajagopal et al., 2021). In this work, we attempt to understand style variation through the contrasting words identified by humans and BERT as determining a style. Given the subjective nature of styles, we are interested in capturing humans' inherent perception of stylistic cues in the text and comparing it with BERT's "perception". Specifically, we investigate the extent to which BERT's word importance, as estimated using Shapley value-based attribution scores (Mudrakarta et al., 2018), aligns with human perception in stylistic text classification.

* Research conducted at the University of Pennsylvania. 1 Our dataset and code are available at https://github.com/sweetpeach/hummingbird

Figure 1: Example (a): "I will understand if you decline, but would very much like you to accept. May I nominate you?" Both humans and BERT models label sentence (a) as "polite", whereas in sentence (b) the humans label it as "anger" but BERT does not. Pink highlight: high human perception score. Blue: BERT's important words. Purple: the word is seen as a strong cue by both human and BERT. The darker the color, the higher the score for human perception or machine word importance. Best seen in color.
When humans identify styles in a text, specific words play an important role in recognizing the style, such as hedges for identifying politeness (Danescu-Niculescu-Mizil et al., 2013). We call such words stylistic cues. For example, in Figure 1(a), humans perceive the words "understand," "like," and "accept" as strong stylistic cues for politeness. But does the BERT model learn the same words as indicative? It turns out that although the model learns that the word "accept" is an important feature for classifying the text as polite, it disagrees with humans on "understand" and "like" by identifying these words as signals for impoliteness. This leads to a concern that lexical explanation from BERT could be unreliable, and it motivates us to investigate the lexical cues used by humans and BERT more deeply. Since styles overlap significantly (Kang and Hovy, 2021), we cover multiple styles: politeness, sentiment, offensiveness, anger, disgust, fear, joy, and sadness.
Our contributions are as follows:
• This is the first comparative study to examine stylistic lexical cues from human perception and BERT. To characterize their discrepancy, we developed a dataset, called HUMMINGBIRD, where crowd-workers relabeled benchmarking datasets for style classification tasks.
• We found that human and BERT cues are quite different; BERT pays more attention to content words, and word-level human labels provide more accurate multi-style correlations than sentence-level machine predictions.
• Our work differs from previous works, which have generated stylistic lexica from manually-curated seed words or thesauri (Davidson et al., 2017; Mohammad and Turney, 2010); instead, in our work, the full text is given to annotators, providing more context for the selection of the cue words.
Collection of Human and BERT's Importance Scores on Stylistic Words
While there are many datasets with stylistic labels, to the best of our knowledge there is no available dataset of stylistic texts with human labels on the individual words that drive the human perception. Therefore, on top of existing benchmark style datasets, we develop HUMMINGBIRD, a new dataset with human-identified stylistic words in those stylistic sentences. Human Perception Scores To collect human perception scores, we first pick 500 stylistically diverse texts from the four style datasets by the following method. First, we fine-tune BERT on the training sets of the existing datasets using the original train/dev/test splits. The models' performance is shown in Table 1. We then run each model on every development set. For example, we run a sentiment classifier on our emotion dataset. From this, we obtain the probability score from the model for predicting each style.
To ensure that the chosen texts exhibit diverse styles, we sort them based on their probability scores and compute the standard deviation of these scores across the eight styles, following Kang and Hovy (2021). We then select the 50 most polite texts, 50 most impolite texts, 50 positive texts, 50 negative texts, 100 offensive texts, and 200 emotional texts (40 from each emotion style), resulting in a total of 500 texts from the four different style datasets.
We hired 622 workers to annotate them with human perception on Prolific 2 from November to December 2020. We required the workers to be in the United States and paid them an average of $9.6/hour. Each worker was asked what styles they perceive each of the texts to exhibit. If they think the text has certain styles, workers then highlight the words in the text which they believe make them think the text has those styles (pink highlights in Figure 1). Three workers label the same pair of sentence and style, and we take a majority vote for the style labels. 3

Table 2: Top 5 words where humans and BERT agree or disagree. ↑↑: both human and BERT agree. H↑: high human perception score but low BERT importance score. B↑: high BERT importance score but low human perception score. BERT-only agreement includes more content words (*) or interjections (#) than human-only agreement.

Crowd-workers obtained an average percentage agreement of 73.2% on majority labeling, which is substantial agreement, at the text level as shown in Table 1, and an average percentage agreement of 27.7% at the word level. Then, for a word w_i in a text t = w_1 .. w_N, the human perception score is defined as:
H(w_i) = \frac{\sum_{j=1}^{\#\text{annotators}} h_j(w_i)}{\#\text{annotators}} \qquad (1)
where h_j ∈ {−1, 0, 1} is the score given by the j-th annotator. Each annotator's label contributes a score of 1 for a word perceived as a positive cue, −1 for a negative cue, and 0 otherwise (neutral or no emotion).
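For concreteness, here is a minimal Python sketch of Equation (1); the function name and list-based input layout are ours, not from the released code:

```python
from typing import List

def human_perception_score(annotator_labels: List[int]) -> float:
    """Equation (1): average the per-annotator cue labels for one word.

    Each label is 1 (positive cue), -1 (negative cue), or 0 (neutral).
    """
    return sum(annotator_labels) / len(annotator_labels)

# Three annotators: two mark the word as a positive cue, one as neutral.
print(human_perception_score([1, 1, 0]))  # 0.666...
```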
BERT's Importance Scores To obtain the word importance (attribution) scores from BERT, we first trained BERT-based models, yielding the F1 scores in Table 1. We then use the popular technique of layered integrated gradients (Mudrakarta et al., 2018) provided by Captum (Kokhlikyan et al., 2020). This technique is a variant of integrated gradients, an interpretability algorithm that attributes an importance score to each input feature by approximating the integral of the gradients of the model's output with respect to the inputs along a straight line from given baselines to the inputs (Sundararajan et al., 2017).
Since BERT may tokenize a word w into several word pieces, the importance of a word is the average of the scores of the word pieces that make it up. For an input of word pieces, if we have a neural network F : R^n → [0, 1] and an input x = (x_1, ..., x_n) ∈ R^n, an attribution of the prediction at input x relative to a baseline input x′ is a vector A_F(x, x′) = (a_1, ..., a_n) ∈ R^n, where a_i is the attribution of x_i to the prediction F(x). We use the default setting of Captum for the baseline input x′, which is the zero scalar. Finally, we obtain an attribution score in [−1, 1] for each token, like the blue highlights in Figure 1.
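As an illustration, the following sketch computes such token attributions with Captum's LayerIntegratedGradients over a Hugging Face BERT classifier; the checkpoint name and target class index are placeholders, and the all-zeros baseline follows Captum's default as described above:

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification
from captum.attr import LayerIntegratedGradients

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
model.eval()

def forward_logits(input_ids):
    # Captum needs a callable that maps token ids to class scores.
    return model(input_ids).logits

text = "I will understand if you decline, but would very much like you to accept."
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Attribute the target class (index 1 here) to the embedding layer,
# using Captum's default all-zeros baseline.
lig = LayerIntegratedGradients(forward_logits, model.bert.embeddings)
attributions = lig.attribute(input_ids, target=1)

# One score per word piece: sum over the embedding dimension, then normalize.
scores = attributions.sum(dim=-1).squeeze(0)
scores = scores / scores.norm()
for token, score in zip(tokenizer.convert_ids_to_tokens(input_ids[0]), scores):
    print(f"{token}\t{score:.3f}")
# A word split into several word pieces would receive the mean of its piece scores.
```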
Human-BERT Agreement through Lexical Analysis
We study how similar human perception and BERT's word importance are, within each style (intra-style) and across styles (multi-styles).
Intra-stylistic Analyses
We measure the correlation between human perception of stylistic words and BERT's word importance by computing Pearson's r between them across all words in the vocabulary, as shown in Figure 2. Naïve refers to our baseline, in which we simply count word frequencies in the stylistic text. For example, if the style is positive sentiment, for a word w we compute how many times w appears in sentences labeled as "positive". We calculate Pearson's r between this word count and the sentences' style labels across all sentences; this score is the baseline word importance for w.
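A toy sketch of this naïve baseline and of the word-score comparison, with an illustrative four-sentence corpus in place of the real data:

```python
from scipy.stats import pearsonr

# Toy corpus of (sentence, style label) pairs; the real data is HUMMINGBIRD.
corpus = [("what a lovely day", 1), ("lovely and delightful", 1),
          ("this is terrible", 0), ("a day like any other", 0)]

def naive_score(word):
    """Naive baseline: Pearson's r between a word's per-sentence count
    and the sentence-level style label."""
    counts = [sent.split().count(word) for sent, _ in corpus]
    labels = [label for _, label in corpus]
    return pearsonr(counts, labels)[0]

def score_correlation(scores_a, scores_b):
    """Agreement between two word-score dictionaries over the shared vocabulary."""
    shared = sorted(set(scores_a) & set(scores_b))
    return pearsonr([scores_a[w] for w in shared],
                    [scores_b[w] for w in shared])[0]

print(naive_score("lovely"))  # 1.0 on this toy corpus
```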
We find that BERT's word importances correlate more highly with human judgements than this baseline; neither BERT nor humans rely purely on co-occurrence frequencies. Some styles are easier to identify for both humans and BERT, such as joy and sentiment, with Pearson's r = 0.288 and 0.273. The yellow bar suggests that human-BERT agreement is higher when the word appears more often, especially for offensiveness (0.088 vs. 0.224).
We now look into which words BERT and humans agree and disagree on. Table 2 shows such words, selected based on the difference in word rank between the human perception score and BERT's word importance. To include only highly stylistic words, words are selected only if their scores are greater than a threshold of 0.3. When humans and BERT agree (↑↑), they attend to words that are clearly associated with the styles (e.g., joy, positive) and are general ("lovely", "delightful", "excited").
In contrast, BERT often finds words that suggest contexts in which the sentiment is likely to occur. For example, the top-5 words from BERT-only agreement (B↑) contain more content words, such as "scenes" for politeness and "movies" and "baseball" for joy, than those from human-only agreement (H↑). In particular, we see that for politeness and positive sentiment, BERT pays more attention to interjections (e.g., "hi", "wow") than humans. For offensiveness and fear in Table 4 in the Appendix, humans perceive hashtags as important cues but BERT does not. Interestingly, humans perceive a seemingly positive word, "charming," as offensive while BERT does not, perhaps missing sarcasm. These content words, or words irrelevant to the target style, are mostly learned due to the biased training dataset, leading to inaccurate predictions by the machine.
Then, we evaluate the impact of important words perceived by humans and BERT on the existing test set using a simple occurrence-based classification method. From the word lists ranked by human perception score and by BERT's word importance score, we label a text as having the target style if at least one word in the test sentence exists in the top-N word list. For this study, we only select words that appear three times or more in the dataset.
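A sketch of this occurrence-based classifier; the ranked word list here is illustrative:

```python
def occurrence_classifier(ranked_words, n, test_sentences):
    """Predict the target style for a sentence if any of the top-N
    ranked cue words occurs in it."""
    top_n = set(ranked_words[:n])
    return [int(any(token in top_n for token in sentence.lower().split()))
            for sentence in test_sentences]

ranked_by_human = ["lovely", "delightful", "excited"]  # illustrative ranking
predictions = occurrence_classifier(
    ranked_by_human, n=2,
    test_sentences=["What a lovely day", "Nothing stylistic here"])
print(predictions)  # [1, 0]
```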
In Figure 3, the human word list outperforms BERT's for most styles, even with this small set of annotations compared to the large original datasets used for training the BERT model. Interestingly, for some negative styles (e.g., impoliteness, negative sentiment, fear), BERT's word list performs better. We observe that words from the offensive dataset (mostly swear words) are more consistently labeled as impolite and negative by human annotators. However, these words are not often seen in the original politeness and sentiment datasets. This explains why features from BERT models trained on the original, large datasets get a higher F1 score. As for fear, we found that content words, such as "facebook" and "theatre", appear in the test data. Here we see that BERT relies on content words (topic-related words) to help predict the style, which is fragile to out-of-domain samples.
Multi-stylistic Analyses
As we extend our analyses to multi-style correlation from a lexical viewpoint, we find that humans and machines give similar correlations among the styles. For instance, joy, positive sentiment, and politeness are all positively correlated, as are anger, disgust, and offensiveness (Figure 4). However, the multi-style correlation strength is greater for human perceptions than for machine importance.

The weaker correlation across styles for machines is confirmed in Figure 5, which presents a lower-dimensional visualization of the stylistic representation of each word. Stylistic words are more clustered in human perception, while for BERT the separation between highly stylistic words and non-stylistic words is less clear. Figure 5 also shows the geometric closeness across the style clusters, giving extra information beyond the pairwise correlations in Figure 4. In human scores, styles cluster into two extremes: politeness, positive sentiment, and joy to the left, and anger, negative sentiment, offensiveness, and impoliteness to the right, with disgust, fear, and sadness between them. This leads to more accurate style correlation analysis than machine-based analysis at the text level (Kang and Hovy, 2021).
Conclusion
We showed that BERT's word importances for style prediction, as calculated using integrated gradients, correspond only loosely with the word importances given by human annotators. These differences likely result from several factors: 1) word importances computed for words which appear rarely in the text tend to be noisy; 2) BERT, as a contextual pretrained model, takes more context into account when deciding the style of the text, while humans intuitively choose the most obvious "stylistic" words to judge the style; 3) styles are a subjective matter, so human annotators may have different perceptions of the style of a sentence.
Future Directions This work also provides a public dataset as a first step for researchers to further investigate these issues. We plan to scale up our data collection in size and style types, including higher-level styles such as sarcasm and humor. We also explore the possibility of informing BERT to pay more attention to human-annotated lexica.
Limitations We acknowledge that while the inter-annotator agreement for the sentence-level style is quite high, there is huge variation in the word-level agreement. As a caveat, the annotators could be unreliable. We do find that annotators label different words as being important than those that drive BERT predictions. Note that we do not claim that BERT is "wrong" and humans are "always reliable"; only that they are different. BERT's important words can help the model predict correctly, but they are not necessarily perceived as stylistic features the way humans perceive them. Studying this difference is the major goal of this paper. We believe that if a word is perceived as "stylistic" by the majority of people, this word can be regarded as an important cue for the model. Learning this variability of human perception of styles could be an interesting direction for future work using HUMMINGBIRD.
Ethical Considerations
A full analysis of style, such as politeness or expression of anger, depends upon the context of the utterance: who is saying it to whom in what situation. Such analysis is beyond the scope of this work, which looks only at how the style of the utterance is perceived, without context, by a small number of crowd workers. Methods such as those we have used here should be extended to consider the more subtle contextual interpretations of style and, eventually, the ways in which perceived styles may differ from intended styles. Many people have (correctly) drawn attention to the ways in which (mis)perceptions of style can foster gender or racial discrimination (Kang and Hovy, 2021). Closer attention to the words which drive style perception is an important first step towards addressing such problems.
Commercial platforms such as Crystal, Grammarly, and Textio offer "style checkers". Such software would benefit from analyses that extend the work presented here, in that they could compare the words that human editors suggest indicate a given style to the words that NLP methods select as most important for recognizing different styles. Such comparisons, particularly when contextualized, should allow construction of better software to help writers control the effect their writing has on the people reading it.
A Existing Datasets for Style Classification
We use existing style datasets: StanfordPoliteness (Danescu-Niculescu-Mizil et al., 2013) for politeness, the Sentiment TreeBank (Socher et al., 2013) for sentiment, Davidson et al. (2017)'s dataset for offensiveness, and SemEval 2018 Task 1: Affect in Tweets for emotion classification (Mohammad et al., 2018). We convert non-binary labels or scores to binary labels to standardize the multi-style analysis, resulting in eight styles. Table 3 shows the dataset sizes and train/dev/test splits. StanfordPoliteness is collected from StackExchange and Wikipedia requests. The labels are continuous values in [-2, 2], so we convert them to binary labels of "polite" and "impolite" by treating all values greater than 0 as polite and the rest as impolite. The Sentiment TreeBank dataset consists of movie review texts, and we only use the coarse "positive" and "negative" labels for training. Davidson et al. (2017) collected their data from Twitter, and we only consider the "offensive" and "none" labels. The SemEval 2018 dataset is collected from tweets and has a total of 11 emotions for the same 10.9k instances: anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, and trust. We select anger, disgust, fear, joy, and sadness, since these emotions have the highest F1 scores compared to the rest. Each emotion has two labels: "anger" or "not anger", "disgust" or "not disgust", and so on.
B Training Configuration
We use the lower-cased BERT-base model with 12 hidden layers, 12 attention heads, and hidden size 768, and train our style classifiers on a GeForce GTX TITAN X GPU. The dropout rate is 0.1, the learning rate is 2 × 10^-5, and the optimizer is AdamW (Loshchilov and Hutter, 2017). The vocabulary size is 30,522 and the maximum position embedding length is 512. Training ran for 3 epochs, and each epoch took around 4 minutes.
C Annotation Interface
For each text-style pair (total: 500 texts × 8 styles = 4,000 pairs), we ask three different annotators to select the style label for the text and highlight the words which make them think the text has that style, with instructions shown in Figure 6. To guarantee that the workers are serious about this task, we provide a screening practice session which resembles the exact task but with a text that is very obvious to annotate, as in Figure 7. The real task interface is also the same as Figure 7. Figure 8 displays an interface where we also ask for the worker's demographic profile.

Table 4: Top 10 words where humans and BERT agree and disagree for all eight styles (the Politeness, Positive Sentiment, and Joy columns appear here; the per-cell word lists, e.g. "lovely", "hilarious", "delightful", did not survive extraction intact). We only select words that appear >= 2 times. ↑↑: both human and BERT agree. H↑: high human perception score but low word importance score. B↑: high word importance score but low human perception score.
D Important Words Perceived by Humans and the Machine

Table 4 shows the top twenty words where humans and BERT agree and disagree for all styles.
Figure 1 (example sentences): (a) Human: Polite, BERT: Polite. (b) "a nightmare date with a half-formed wit done a great disservice by a lack of critical distance and a sad trust in liberal arts college bumper sticker platitudes." Human: Anger, BERT: Not Anger. (Legend: Human, BERT, Both.)
Figure 2: Pearson's r between human and BERT scores for the eight styles (p < 0.001); legend: Human-BERT (all words) and Human-BERT (#words >= 3).
Figure 3: Simple classification using top-N human and BERT features for all eight styles. Best seen in color.
Figure 4: Pearson's r word correlation matrix across styles. The upper triangle (blue and red) represents human perception scores, while the lower triangle (green and brown) represents machine word importances.
Figure 5: t-SNE (Van der Maaten and Hinton, 2008) visualization for human (top) and machine (bottom) word scores. Each word is represented as a vector of its perception scores for the styles in this order: politeness, sentiment, offensiveness, anger, disgust, fear, joy, and sadness.
Figure 6: Instruction page for crowd workers.
Figure 7: Annotation page for crowd workers.
Figure 8: Demographic survey for crowd workers.
Table 3: Dataset statistics.
2 https://www.prolific.co/
3 See Appendix for original dataset details.
Acknowledgments
We would like to thank Garrick Sherman for helping with the server setup during data collection and the anonymous reviewers for their thoughtful comments.
Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: Machine learning for text-based emotion prediction. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 579-586.

Cristian Danescu-Niculescu-Mizil, Moritz Sudhof, Dan Jurafsky, Jure Leskovec, and Christopher Potts. 2013. A computational approach to politeness with application to social factors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 250-259, Sofia, Bulgaria. Association for Computational Linguistics.

Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated hate speech detection and the problem of offensive language. In Proceedings of the International AAAI Conference on Web and Social Media, volume 11.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Eduard Hovy. 1987. Generating natural language under pragmatic constraints. Journal of Pragmatics, 11(6):689-719.

Dongyeop Kang and Eduard Hovy. 2021. Style is not a single variable: Case studies for cross-stylistic language understanding. Association for Computational Linguistics.

Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, Bilal Alsallakh, Jonathan Reynolds, Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, et al. 2020. Captum: A unified and generic model interpretability library for PyTorch. arXiv preprint arXiv:2009.07896.

Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101.

Saif Mohammad, Felipe Bravo-Marquez, Mohammad Salameh, and Svetlana Kiritchenko. 2018. SemEval-2018 task 1: Affect in tweets. In Proceedings of The 12th International Workshop on Semantic Evaluation, pages 1-17, New Orleans, Louisiana. Association for Computational Linguistics.

Saif Mohammad and Peter Turney. 2010. Emotions evoked by common words and phrases: Using Mechanical Turk to create an emotion lexicon. In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, pages 26-34, Los Angeles, CA. Association for Computational Linguistics.

Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 2018. Did the model understand the question? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1896-1906, Melbourne, Australia. Association for Computational Linguistics.

Dheeraj Rajagopal, Vidhisha Balachandran, Eduard Hovy, and Yulia Tsvetkov. 2021. SelfExplain: A self-explaining architecture for neural text classifiers. arXiv preprint arXiv:2103.12279.

Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2021. A primer in BERTology: What we know about how BERT works. Transactions of the Association for Computational Linguistics, 8:842-866.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642, Seattle, Washington, USA. Association for Computational Linguistics.

Mukund Sundararajan, Ankur Taly, and Qiqi Yan. 2017. Axiomatic attribution for deep networks. In International Conference on Machine Learning, pages 3319-3328. PMLR.

Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(11).

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, pages 5998-6008. NIPS.
| [] |
[
"Neural Machine Translation on Scarce-Resource Condition: A case-study on Persian-English",
"Neural Machine Translation on Scarce-Resource Condition: A case-study on Persian-English"
] | [
"Mohaddeseh Bastan m.bastan@aut.ac.ir \nComputer Engineering and Information Technology Dept\nAmirkabir University of Technology\nTehranIran\n",
"Shahram Khadivi khadivi@aut.ac.ir \nComputer Engineering and Information Technology Dept\nAmirkabir University of Technology\nTehranIran\n",
"Mohammad Mehdi Homayounpour \nComputer Engineering and Information Technology Dept\nAmirkabir University of Technology\nTehranIran\n"
] | [
"Computer Engineering and Information Technology Dept\nAmirkabir University of Technology\nTehranIran",
"Computer Engineering and Information Technology Dept\nAmirkabir University of Technology\nTehranIran",
"Computer Engineering and Information Technology Dept\nAmirkabir University of Technology\nTehranIran"
] | [] | Neural Machine Translation (NMT) is a new approach to Machine Translation (MT), and due to its success it has attracted the attention of many researchers in the field. In this paper, we study an NMT model on the Persian-English language pair in order to analyze the model and investigate its suitability for scarce-resource scenarios, the situation that exists for Persian-centered translation systems. We adjust the model for the Persian language and find the best parameters and hyperparameters for two tasks: translation and transliteration. We also apply some preprocessing to the Persian dataset, which yields an increase of about one point in BLEU score. In addition, we modify the loss function to enhance the word alignment of the model; this new loss function yields a total improvement of 1.87 BLEU points in translation quality. | 10.1109/iraniancee.2017.7985278 | [
"https://arxiv.org/pdf/1701.01854v1.pdf"
] | 14,781,177 | 1701.01854 | 10b2f8b7e3a634c6b4ea5ffa385d988be9ff119a |
Neural Machine Translation on Scarce-Resource Condition: A case-study on Persian-English
Mohaddeseh Bastan m.bastan@aut.ac.ir
Computer Engineering and Information Technology Dept
Amirkabir University of Technology
Tehran, Iran
Shahram Khadivi khadivi@aut.ac.ir
Computer Engineering and Information Technology Dept
Amirkabir University of Technology
Tehran, Iran
Mohammad Mehdi Homayounpour
Computer Engineering and Information Technology Dept
Amirkabir University of Technology
Tehran, Iran
Neural Machine Translation on Scarce-Resource Condition: A case-study on Persian-English
* Shahram Khadivi contributed to this work while he was with Amirkabir University of Technology.
Keywords: neural machine translation, cost function, alignment model, text preprocessing
Neural Machine Translation (NMT) is a new approach to Machine Translation (MT), and due to its success it has attracted the attention of many researchers in the field. In this paper, we study an NMT model on the Persian-English language pair in order to analyze the model and investigate its suitability for scarce-resource scenarios, the situation that exists for Persian-centered translation systems. We adjust the model for the Persian language and find the best parameters and hyperparameters for two tasks: translation and transliteration. We also apply some preprocessing to the Persian dataset, which yields an increase of about one point in BLEU score. In addition, we modify the loss function to enhance the word alignment of the model; this new loss function yields a total improvement of 1.87 BLEU points in translation quality.
INTRODUCTION
Neural networks are currently receiving great attention. These networks have recently been used in many applications, such as speech recognition [1], image processing [2], and natural language processing [3], and have achieved remarkable results. Since the introduction of these networks and their considerable results in different applications, many researchers in different fields have been using neural networks as a solution for their problems. MT, a subcategory of natural language processing, was first approached using neural networks by Castaño in 1997 [4].
For machine translation, these networks have been used for many different language pairs. In this paper, we propose a neural model for Persian translation for the first time. We use the TensorFlow MT model [5], which was released by Google in 2015. We improve the base model with a new feature obtained from the statistical model. The new model adds a term to the cost function which measures the difference between the alignment obtained from the neural model and that from the statistical model. This cost is then used to improve both the accuracy and the convergence time of the NMT.
The paper is organized as follows. In Section II, Statistical Machine Translation (SMT) and NMT and the corresponding mathematics are introduced. In Section III, a literature review of NMT is given. In Section IV, our NMT model is presented. In Section V, the experiments and the improvements of the new model in comparison with the baselines are discussed. Finally, Section VI concludes the paper.
II. STATISTICAL AND NEURAL MACHINE TRANSLATION
MT is the automation of the translation between human languages [6]. Two of the most successful models for machine translations are SMT and NMT which are discussed in light of the following subsections.
A. Statistical Machine Translation
A common SMT model finds the target sentence f: y_1, y_2, ..., y_T from the source-side sentence e: x_1, x_2, ..., x_S by maximizing the following term [7]:

p(e \mid f) \propto p(e) \cdot p(f \mid e)

In this equation, p(e) is the language model, which helps the output to be natural and grammatical, and p(f|e) is the translation model, which ensures that e is normally interpreted as f, and not some other thing [8].
Most MT systems use a log-linear model instead of the pure form in order to model more features in the final equation. The model is then as follows [8]:

\log p(e \mid f) = \sum_{m=1}^{M} \lambda_m h_m(e, f) + Z

This equation denotes the m-th feature of the SMT system by h_m and its corresponding weight by \lambda_m. The term Z is a normalization term which is independent of the weights.
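The following minimal sketch evaluates such a log-linear score for one candidate translation; the feature values and weights are illustrative:

```python
def log_linear_score(features, weights):
    """Unnormalized log-linear score: sum_m lambda_m * h_m(e, f).

    The normalization term Z is omitted, since it does not affect
    the ranking of candidate translations.
    """
    return sum(lam * h for lam, h in zip(weights, features))

# Two features, e.g. language-model and translation-model log-probabilities.
print(log_linear_score(features=[-2.3, -1.7], weights=[0.6, 0.4]))
```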
Alignment is one of the features of an MT system; in this paper, the same alignment as described in [10] is used for estimating the parameters of the SMT.
B. Neural Machine Translation
Deep neural networks (DNNs) have shown impressive results in machine learning tasks. The success of these networks is mostly the result of their hierarchical nature. DNNs resemble pipeline processing, in which each layer solves part of the problem and feeds the result into the next layer, and the last layer generates the output [11]. DNNs are powerful because of their ability to perform parallel computations over several steps [12].
Most NMT models consist of two parts: an encoder, which encodes the input sequence into a fixed-length vector, and a decoder, which decodes that context vector into the output sequence [13]. Because the task is MT and the source and target sentences may have any length, the inputs and outputs of NMT models are of variable length.

To address this problem, recurrent neural networks (RNNs) are used for machine translation. RNNs extend feedforward neural networks to sequences. At each step t, the RNN computes the hidden state from the following equation, where h_t is the hidden state at step t and x_t is the t-th input in a sequence of inputs:
h_t = f(h_{t-1}, x_t)
f is an activation function, which can be as simple as a sigmoid or as complicated as an LSTM [14]. Similarly, the next output symbol is computed using the following equation:
y_t = g(h_t)
Therefore, RNNs can easily map one sequence to another sequence. The first attempt to map a sequence of input words to a sequence of output words of different length was made in [13]. In this work, the input sequence is encoded into a fixed-length context vector, and the output sequence is generated by decoding the context vector. If c is the context vector, the hidden state at step t is computed using the following equation:
h_t = f(h_{t-1}, y_{t-1}, c)
The encoder and the decoder are each an RNN, and the whole system is trained to maximize the following log-likelihood, where N is the number of sentences in the training set and Y^n is the target output corresponding to the source input X^n:

\max_{\theta} \; \frac{1}{N} \sum_{n=1}^{N} \log p_{\theta}(Y^n \mid X^n)
The above model works fine for short sentences, but as the length of the sentence increases, the context vector cannot encode the whole source sentence and the performance decreases significantly [15]. The fixed-length context vector is thus a bottleneck for this model and had to be revised. Paper [16] proposes a model which does not encode the whole source sentence into a fixed-length vector. Instead, the input sentence is encoded into a sequence of vectors, and a subset of these vectors is selected during decoding. The model can then translate longer sentences easily. In the new model, each conditional probability is defined as follows:
p(y_i \mid y_1, y_2, \ldots, y_{i-1}, X) = g(y_{i-1}, s_i, c_i)
where y_i is the i-th word of the output and X is the input sentence; s_i is the hidden state of the network at step i and is computed as:
s_i = f(s_{i-1}, y_{i-1}, c_i)
In contrast to the conventional encoder-decoder method, in this equation the probability of each output y_i is conditioned on a corresponding context vector c_i. Each c_i is computed as a weighted sum of the annotations h_j as follows:
c_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j
In this equation, T_x is the length of the source sentence and \alpha_{ij} is the weight of the j-th annotation, computed as follows:
\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{k=1}^{T_x} \exp(e_{ik})}
Finally, e_{ij} is the alignment model, which shows how well the words around input position j match output position i. The alignment model is a feedforward neural network trained simultaneously with the other components of the network. In contrast to other NMT approaches, the alignment is not a hidden variable here; it is computed as a soft alignment [16].
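A NumPy sketch of one attention step, the softmax over alignment scores and the resulting context vector, with toy dimensions:

```python
import numpy as np

def attention_step(e_i, annotations):
    """Compute alpha_ij = softmax_j(e_ij) and c_i = sum_j alpha_ij * h_j."""
    shifted = e_i - e_i.max()                  # numerical stability
    alpha = np.exp(shifted) / np.exp(shifted).sum()
    context = alpha @ annotations              # weighted sum of h_1..h_Tx
    return context, alpha

T_x, dim = 5, 8                                # toy source length / state size
h = np.random.randn(T_x, dim)                  # encoder annotations h_1..h_Tx
e_i = np.random.randn(T_x)                     # alignment scores for output step i
c_i, alpha_i = attention_step(e_i, h)
print(alpha_i.sum())                           # the weights sum to 1
```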
For training the model we use Stochastic Gradient Descent [17]. The learning rate is a parameter which controls how large a step should be taken in the direction of the negative gradient [18]. It is controlled adaptively here: if no improvement in the loss function is seen over the last three iterations, the model decays the learning rate by a specific factor.
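A sketch of this plateau-based decay rule; the initial learning rate and decay factor are illustrative, while the patience of three iterations follows the text:

```python
class PlateauDecay:
    """Decay the learning rate when the loss has not improved for
    `patience` consecutive checks."""

    def __init__(self, lr=0.5, factor=0.5, patience=3):
        self.lr, self.factor, self.patience = lr, factor, patience
        self.best = float("inf")
        self.bad_steps = 0

    def step(self, loss):
        if loss < self.best:
            self.best, self.bad_steps = loss, 0
        else:
            self.bad_steps += 1
            if self.bad_steps >= self.patience:
                self.lr *= self.factor
                self.bad_steps = 0
        return self.lr

scheduler = PlateauDecay()
for loss in [4.0, 3.5, 3.6, 3.6, 3.7, 3.4]:
    print(scheduler.step(loss))  # drops to 0.25 after three bad iterations
```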
We take advantage of this soft alignment and use it to train the model faster and more accurately. We also train the model for the Persian language for the first time and adjust the parameters and hyperparameters. Finally, we add a feature from the statistical model to make the soft alignment more powerful, which decreases the convergence time and improves the model; this is the extra term added to the adaptively decayed cost described above. In our implementation we rely heavily on the TensorFlow translation model.
III. RELATED WORK
Neural network language models were introduced in [19] in 2003. In machine translation, some researchers used these models for rescoring translations [20]. For example, [12] used neural networks for rescoring translation candidate sentences, and [13] used neural networks for the translation scores in the phrase table. One of the simplest and most impressive works in NMT is [21], which used neural networks for rescoring the n-best list of an MT system; this improved MT effectively. In 2012, Li proposed an MT model using feedforward neural networks, which used an output layer for classification and a short list for rescoring [22].
For language modeling and machine translation, it has been shown that RNNs empirically work better than feedforward neural networks [23]. Most RNN translation models belong to the encoder-decoder family. Encoder-decoder models for MT were first used in [24], where a convolutional neural network (CNN) encoded the input sentences into a vector and an RNN performed the decoding.
A newer encoder-decoder model was presented in [25], where the decoder was conditioned on the source sentences. In this work, a language model is combined with a topic model, and the results show some improvements in rescoring. In [13], another encoder-decoder was introduced which used an LSTM for both encoding and decoding. The authors mostly focused on combining their neural network with an SMT model.
NMT models have two main problems which researchers are trying to solve. First, the ability of the model to translate decreases as the length of the sentences increases; Bahdanau used an attention model to address this problem [16]. Second is the memory issue: as the size of the corpus increases, the accuracy of the translation increases, but memory usage becomes a problem. In [26], a model was proposed to address this issue by translating parts of the input sentences, similar to phrase-based translation.
In [27], a model for scoring phrases was proposed, using a feedforward neural network with fixed-length input and output. Devlin [28] proposed an NMT model using a feedforward neural network; in his model, a neural language model encoder is combined with a decoder from an MT model, and the decoder's alignment information is used by the language model to output the most useful words corresponding to the input sentences. This model made a significant improvement in machine translation, but the limitation on the length of the sentences remained.
Bidirectional LSTMs were first proposed in [29] and used for the speech recognition task. These networks were used for MT in [30], creating a strong model which used the next and previous input words for translation. The idea of guided alignment was first proposed in [31], and our proposed model for using both SMT and NMT alignments is inspired by that paper.
IV. PERSIAN NEURAL TRANSLATION MODEL
In this section, we define the problem for Persian translation and the preprocessing that needs to be done before feeding the input into the model. Then the proposed model for NMT, which uses the soft alignment and an SMT alignment feature for translation, is described.
C. Data preprocessing
The Persian language makes MT a difficult task because of its specific characteristics, so the input sentences should be preprocessed before being fed into the NMT model. The following preprocessing tasks are performed on the Persian corpora:
• All corpora are changed to have one sentence per line, ending in one of the punctuation marks '؟', '.', or '!'.
• All words are separated by a single space symbol.
• All zero-width non-joiners have been removed. For instance, the word "می‌نویسم" is changed to "می نویسم".
• All adherent words have been tokenized. For instance, the word "آنها" is changed to "آن ها".
• If a word is adherent to a symbol, punctuation sign, or other character, it is separated.
All of these preprocessing tasks prepare the Persian data to be used for NMT. The first two tasks in the above list are general and should be done for every language pair and every MT model. The next two are Persian-specific. Unlike in SMT, for NMT we use these two preprocessing steps to distinguish the words. This is a trade-off between the number of unique words and the length of the sentences: since the problem of sentence length is mitigated by the techniques described in [16], we decided to decrease the number of unique words and increase the length of the sentences. This configuration leads to better results. The last task separates unrelated characters: if we did not separate a word from its adjacent punctuation sign, the system would consider them as a single word, which is not acceptable, since the NMT system should receive them as two distinct words rather than one.
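A sketch of the Persian-specific steps in Python; the regular-expression character class is an illustrative set of punctuation marks, not the authors' exact list:

```python
import re

ZWNJ = "\u200c"  # zero-width non-joiner used inside Persian words

def preprocess_persian(line: str) -> str:
    # Tokenize adherent word parts by replacing the zero-width non-joiner.
    line = line.replace(ZWNJ, " ")
    # Separate punctuation marks and symbols that are adherent to words.
    line = re.sub(r'([.!,:;()"؟،؛])', r" \1 ", line)
    # Collapse runs of whitespace into a single space.
    return re.sub(r"\s+", " ", line).strip()

print(preprocess_persian("آن‌ها می‌گویند: سلام!"))
# -> "آن ها می گویند : سلام !"
```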
D. The alignment feature
One of the properties of NMT models is that they do not require manually defined features; every parameter is tuned to maximize the probability function. The model learns everything via a single network and translates the source sentence into the target via the trained model. On the other hand, SMT defines different features, computes the corresponding weight for each of them, and tries to maximize the probability function. Each approach has its own advantages. Our model benefits from both and tries to increase the accuracy of the alignment model in NMT using the alignment model from SMT.
In the SMT model, we use the GIZA++ [32] tool to align the source and target sentences to each other. This tool uses an EM algorithm to align words in the source and target sentences and shows which word of the source sentence is aligned with which word or words of the target sentence. This alignment can be defined as the following matrix:
M_{T \times S}[i, j] =
\begin{cases}
1 & \text{if the } i\text{-th target word is aligned to the } j\text{-th source word} \\
0 & \text{otherwise}
\end{cases}
Here, M is the alignment matrix, S is the length of the source sentence, and T is the length of the target sentence. We call this matrix the EM-alignment matrix.
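A sketch that builds this binary matrix from GIZA++-style alignment pairs; the pair order (target index, source index) and 0-based indexing are our assumptions:

```python
import numpy as np

def em_alignment_matrix(pairs, T, S):
    """Binary EM-alignment matrix M of shape (T, S): M[i, j] = 1 iff the
    i-th target word is aligned to the j-th source word."""
    M = np.zeros((T, S), dtype=np.float32)
    for i, j in pairs:
        M[i, j] = 1.0
    return M

# An alignment line such as "0-0 1-2 2-1" parsed into index pairs.
pairs = [tuple(map(int, p.split("-"))) for p in "0-0 1-2 2-1".split()]
print(em_alignment_matrix(pairs, T=3, S=3))
```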
In the NMT model, we use the soft alignment. As described in Section II-B, the model produces a matrix at each step of the training phase; this is the matrix of alignment scores e_{ij} used to compute the attention weights. The entry in the i-th row and j-th column of this matrix measures how well the i-th target word matches the j-th source word. So we have another matrix, produced at each step of the NMT model, which we call the NMT-alignment matrix. What we do here is add a term to the NMT cost function consisting of the difference between these two matrices. The intuition is that a fully trained model using the EM-alignment definitely has a better alignment model than an NMT model which is still in the process of training and has not converged. This helps the NMT model converge faster and, at the same time, find an alignment model which fits the corpora. The term added to the previous NMT cost function is:
\omega \cdot \frac{f\left(\left| M_{T \times S} - e_{T \times S} \right|\right)}{2 \, T \, S}
In this equation, e is the NMT-alignment matrix and M is the EM-alignment matrix. The |·| symbol denotes the element-wise absolute value. The function f is the summation of all elements of the matrix, and the term in the denominator normalizes the summation. ω is a weight which defines how important this term is relative to the default cost function: the higher ω is, the more important the alignment difference. We arbitrarily set this weight to 0.2. This weight seems reasonable, because the default cost function consists of the difference between the model's translation and the target translation, and this difference is more important than the alignment difference.
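A minimal NumPy sketch of this extra loss term as reconstructed above; note that the normalization constant 2·T·S is our reading of the garbled original:

```python
import numpy as np

def alignment_cost(M, e, omega=0.2):
    """omega * f(|M - e|) / (2 * T * S), where f sums all matrix entries."""
    T, S = M.shape
    return omega * np.abs(M - e).sum() / (2.0 * T * S)

M = np.eye(3)                    # toy EM-alignment (diagonal)
e = np.full((3, 3), 1.0 / 3.0)   # uniform NMT soft alignment
print(alignment_cost(M, e))      # penalty for the disagreement
```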
The cost function in this model is used for the learning rate decay factor. In an NMT system, at each step we expect the model to decrease the cost function; if, after a series of iterations, the cost function has not decreased, the learning rate is changed.
Adding a new term to the cost function helps the model learn the alignment more accurately. If the NMT-alignment diverges widely from the EM-alignment, the model is penalized, so it learns to align the source and target sentences while paying more attention to the EM-alignment. In the end, we expect the model to have an alignment closer to the EM-alignment, unless that alignment is in conflict with the translation. Since the translation term carries more weight in the cost function, the model will not suffer from wrong EM-alignments and will not change correct NMT-alignments.
V. EXPERIMENTS
In this section, the experiments performed with the proposed model, together with the results and analysis of each experiment, are described. First the system configuration, then the datasets, and finally the experiments and results are presented.
E. System configurations
For our experiments, we use an NVIDIA GeForce GTX 780 GPU, which increases the processing speed in comparison with a CPU. The NVIDIA CUDA toolkit v7 is used, specifically for its math libraries and optimization routines. We also take advantage of the cuDNN library v5 to increase the training speed. For programming, we used the TensorFlow framework v0.10 and made our changes based on the MT model proposed by its providers.
F. Dataset description
We use two datasets. The first is the Verbmobil English-Persian translation dataset, which consists of conversational sentences in English and their translations in Persian. The second is a transliteration dataset, which consists of the separated characters of Persian words; in this dataset a "sentence" is a word with its characters separated, and "words" are characters. Table I provides information about the training, development, and test sets of Verbmobil and the transliteration dataset. "Sent. Count" is the number of sentences in each of the English and Persian corpora; "Unique words count" is the number of distinct words in each corpus, measured on the raw dataset without any preprocessing. Table II shows one example sentence from each dataset.
G. Evaluating Measurements
For evaluating the proposed model, we use different measures. For translation, we use BLEU [33], which is quick and language-independent. This measure is based on the precision of n-grams of the translated text in comparison with one or more target references.
For the transliteration task we use four measures in addition to BLEU. The first is accuracy, i.e., how many sentences have been transliterated completely, without any error at any position of the sentence. The next is WER (Word Error Rate), which counts the number of words transliterated incorrectly; the words must be produced in exactly the same order as the source words. As described earlier, "words" here are characters, so WER measures the number of characters transliterated into the wrong character.
TABLE II. DATASET EXAMPLE

Dataset         | English                                           | Persian
Verbmobil       | Monday the eighth of November would suit me fine | دوشنبه هشتم نوامبر برای من خوب است.
Transliteration | A A m i n                                         | آ م ی ن
The third measure is PER (Position-independent word Error Rate) [34]. It is the same as WER but ignores word order: it treats sentences as bags of words, disregards word positions, and counts the words that are translated incorrectly and do not appear in the target sentence at all. Finally, the last measure is TER (Translation Error Rate) [35], which counts the number of edits required to change a system output into one of the given translation references.
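A minimal sketch of WER and of one common bag-of-words formulation of PER (the exact normalization used in published toolkits can differ slightly):

```python
from collections import Counter

def wer(ref, hyp):
    """Word error rate: Levenshtein distance over tokens / reference length."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        diag, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            diag, d[j] = d[j], min(d[j] + 1,          # deletion
                                   d[j - 1] + 1,      # insertion
                                   diag + (r != h))   # substitution (0 if match)
    return d[-1] / len(ref)

def per(ref, hyp):
    """Position-independent error rate: compare bags of words, ignoring order."""
    matches = sum((Counter(ref) & Counter(hyp)).values())
    return (max(len(ref), len(hyp)) - matches) / len(ref)
```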
H. Experiments and Results
We evaluate our model with three different experiments. First, we find the best configuration for each dataset; the parameters adjusted are the number of layers and the number of nodes in each layer of the RNN. After adjusting the parameters and hyperparameters for each dataset, we evaluate our proposed model: using the best adjusted model, the changes to the dataset and then the effect of the cost function are evaluated. The results are shown in Tables III through VI. We first describe the results for the transliteration task and then for the translation task. Table III shows different configurations of the NMT model for the transliteration task. As can be seen, increasing the number of hidden-layer nodes improves all measurements (BLEU and accuracy increase; TER, WER and PER decrease), but the improvement stops at a specific configuration, beyond which adding hidden nodes does not improve the model.¹ For the transliteration task we do not have any preprocessing step, since all the words are separated by spaces and there are no punctuation marks or symbols other than the default Persian and English alphabet characters. The next experiment is therefore adding the new cost function to the default cost function. The results are shown in Table IV. Changing the cost function improves the model significantly, mostly because the previous cost function did not include the EM-alignment and suffered from incorrect alignments.

¹ More adjustments of the model were tested; only the best configurations are reported.
The next experiments are on the Verbmobil dataset and the translation task. For this dataset we first configure the model for the best parameters and hyperparameters. The results are shown in Table V. As expected, the model works better as the number of hidden nodes increases; because the task is translation and the number of unique words is larger than in the transliteration task, the model responds better to an increased number of hidden nodes. In Table VI the preprocessing and the new cost function are added to the best baseline. As can be seen, preprocessing increases the BLEU score by about one unit, which shows its importance. The new cost function increases the baseline system by about 1.87 BLEU units, which shows that it also works effectively for the translation task.
VI. CONCLUSION
In this paper the first NMT system for the Persian language, trained on scarce data, was proposed. The parameters and hyperparameters of the model were adjusted for the Persian-English language pair. In addition, some preprocessing tasks were introduced which help Persian to be translated accurately. Finally, a cost function based on the soft alignment of neural machine translation was added. The whole system improved the performance of the baseline system by about 1.87 BLEU for the translation task and about 0.9 for the transliteration task.
Figure 1. Architecture of the translation approach based on the log-linear model [9]
TABLE I. DATASETS DESCRIPTION (each cell: Sent. Count / Unique Words Count)

Dataset         | Training Persian | Training English | Development Persian | Development English | Test Persian | Test English
Verbmobil       | 26142 / 5909     | 26142 / 3118     | 276 / 463           | 276 / 350           | 250 / 429    | 250 / 345
Transliteration | 88507 / 35       | 88507 / 54       | 1000 / 27           | 1000 / 26           | 1000 / 26    | 1000 / 26
TABLE III. TRANSLITERATION CONFIGURATION RESULTS

Number of layers | Number of hidden nodes | BLEU (%) | Accuracy (%) | TER (%) | WER (%) | PER (%)
3 | 50  | 74.04 | 50.3 | 12.27 | 13.12 | 8.30
4 | 50  | 72.8  | 55   | 12.34 | 12.55 | 8.28
3 | 100 | 73.87 | 50.7 | 12.22 | 12.32 | 8.34
4 | 100 | 76.21 | 44.7 | 11.04 | 11.12 | 8.00
3 | 200 | 68.97 | 48.9 | 14.04 | 14.61 | 9.31
4 | 200 | 75.31 | 47.1 | 12.15 | 12.17 | 8.12

TABLE IV. TRANSLITERATION COST FUNCTION RESULTS

Cost function        | BLEU  | Accuracy | TER   | WER   | PER
Without EM-alignment | 76.21 | 44.7     | 11.04 | 11.12 | 8.00
With EM-alignment    | 77.13 | 44.2     | 11.01 | 11.02 | 7.77

TABLE V. TRANSLATION CONFIGURATION RESULTS FOR VERBMOBIL

Number of layers | Number of hidden nodes | BLEU (En-Fa) | BLEU (Fa-En)
3 | 500  | 16.21 | 20.10
4 | 500  | 16.5  | 20.25
3 | 1000 | 18.15 | 21.69
4 | 1000 | 18.33 | 21.88
3 | 2000 | 17.92 | 21.25
4 | 2000 | 18.12 | 21.5

TABLE VI. VERBMOBIL PREPROCESSING AND COST FUNCTION RESULTS

Cost function                       | BLEU (En-Fa) | BLEU (Fa-En)
Baseline                            | 18.33        | 21.88
+ preprocessing                     | 19.25        | 22.80
+ preprocessing + new cost function | 19.75        | 23.65
[1] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A. R. Mohamed, N. Jaitly, et al., "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups," IEEE Signal Processing Magazine, vol. 29(6), pp. 82-97, Nov. 2012.
[2] D. Ciregan, U. Meier, and J. Schmidhuber, "Multi-column deep neural networks for image classification," Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference, pp. 3642-3649, Jun. 2012.
[3] J. Bollen, H. Mao, and X. Zeng, "Twitter mood predicts the stock market," Journal of Computational Science, vol. 2(1), pp. 1-8, March 2011.
[4] M. A. Castaño, F. Casacuberta, and E. Vidal, "Machine translation using neural networks and finite-state models," Theoretical and Methodological Issues in Machine Translation, pp. 160-167, Jul. 1997.
[5] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, et al., "TensorFlow: Large-scale machine learning on heterogeneous distributed systems," Preliminary White Paper, November 2015.
[6] S. Nirenburg and H. L. Somers, "Readings in machine translation," MIT Press, 2003.
[7] P. F. Brown, J. Cocke, S. A. Pietra, V. J. Pietra, F. Jelinek, J. D. Lafferty, et al., "A statistical approach to machine translation," Computational Linguistics, vol. 16(2), pp. 79-85, Jun. 1990.
[8] Y. Al-Onaizan, J. Curin, M. Jahr, K. Knight, J. Lafferty, D. Melamed, et al., "Statistical machine translation," Final Report, JHU Summer Workshop, vol. 30, 1999.
[9] F. J. Och and H. Ney, "The alignment template approach to statistical machine translation," Computational Linguistics, vol. 30(4), pp. 417-449, Dec. 2004.
[10] P. F. Brown, V. J. Pietra, S. A. Pietra, and R. L. Mercer, "The mathematics of statistical machine translation: Parameter estimation," Computational Linguistics, vol. 19(2), pp. 263-311, Jun. 1993.
[11] M. Hermans and B. Schrauwen, "Training and analysing deep recurrent neural networks," Advances in Neural Information Processing Systems, pp. 190-198, 2013.
[12] I. Sutskever, O. Vinyals, and Q. V. Le, "Sequence to sequence learning with neural networks," Advances in Neural Information Processing Systems, pp. 3104-3112, 2014.
[13] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," Proceedings of Empirical Methods in Natural Language Processing, pp. 1724-1734, Jun. 2014.
[14] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9(8), pp. 1735-1780, Nov. 1997.
[15] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio, "On the properties of neural machine translation: Encoder-decoder approaches," Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, 2014.
[16] D. Bahdanau, K. Cho, and Y. Bengio, "Neural machine translation by jointly learning to align and translate," arXiv preprint, Sep. 2014.
[17] H. Robbins and S. Monro, "A stochastic approximation method," The Annals of Mathematical Statistics, pp. 400-407, Sep. 1951.
[18] M. D. Zeiler, "ADADELTA: an adaptive learning rate method," arXiv preprint arXiv:1212.5701, Dec. 2012.
[19] Y. Bengio, "A neural probabilistic language model," The Journal of Machine Learning Research, vol. 3, pp. 1137-1155, 2003.
[20] H. Schwenk, M. R. Costa-Jussa, and J. A. R. Fonollosa, "Continuous space language models for the IWSLT 2006 task," IWSLT, 2006.
[21] T. Mikolov, "Statistical language models based on neural networks," Presentation at Google, Mountain View, April 2012.
[22] L. H. Son, A. Allauzen, and F. Yvon, "Continuous space translation models with neural networks," Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 39-48, Jun. 2012.
[23] M. Sundermeyer, I. Oparin, J. L. Gauvain, B. Freiberg, R. Schlüter, and H. Ney, "Comparison of feedforward and recurrent neural network language models," Acoustics, Speech and Signal Processing (ICASSP), IEEE International Conference, pp. 8430-8434, May 2013.
[24] N. Kalchbrenner and P. Blunsom, "Recurrent continuous translation models," EMNLP, vol. 3, p. 413, 2013.
[25] M. Auli, M. Galley, C. Quirk, and G. Zweig, "Joint language and translation modeling with recurrent neural networks," EMNLP, vol. 3, pp. 1044-1054, 2013.
[26] J. Pouget-Abadie, D. Bahdanau, B. van Merriënboer, K. Cho, and Y. Bengio, "Overcoming the curse of sentence length for neural machine translation using automatic segmentation," arXiv preprint arXiv:1409.1257, Sep. 2014.
[27] H. Schwenk, "Continuous space translation models for phrase-based statistical machine translation," COLING (Posters), pp. 1071-1080, Dec. 2012.
[28] J. Devlin, R. Zbib, Z. Huang, T. Lamar, R. M. Schwartz, and J. Makhoul, "Fast and robust neural network joint models for statistical machine translation," ACL, vol. 1, pp. 1370-1380, Jun. 2014.
[29] M. Schuster and K. K. Paliwal, "Bidirectional recurrent neural networks," IEEE Transactions on Signal Processing, vol. 45, pp. 2673-2681, Nov. 1997.
[30] M. Sundermeyer, T. Alkhouli, J. Wuebker, and H. Ney, "Translation modeling with bidirectional recurrent neural networks," EMNLP 2014, pp. 14-25, Oct. 2014.
[31] W. Chen, E. Matusov, S. Khadivi, and J. T. Peter, "Guided alignment training for topic-aware neural machine translation," arXiv preprint arXiv:1607.01628, Jul. 2016.
[32] F. J. Och, "GIZA++ software," 2003.
[33] K. Papineni, S. Roukos, T. Ward, and W. J. Zhu, "BLEU: a method for automatic evaluation of machine translation," Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pp. 311-318, Jul. 2002.
[34] C. Tillmann, S. Vogel, H. Ney, A. Zubiaga, and H. Sawaf, "Accelerated DP based search for statistical translation," Eurospeech, Sep. 1997.
[35] M. Snover, B. Dorr, R. Schwartz, L. Micciulla, and J. Makhoul, "A study of translation edit rate with targeted human annotation," Proceedings of the Association for Machine Translation in the Americas, vol. 200, no. 6, pp. 223-231, Aug. 2006.
| [] |
[
"IITK@LCP at SemEval-2021 Task 1: Classification for Lexical Complexity Regression Task",
"IITK@LCP at SemEval-2021 Task 1: Classification for Lexical Complexity Regression Task"
] | [
"Neil Shirude neilrs@iitk.ac.in \nIndian Institute of Technology Kanpur (IIT Kanpur)\n\n",
"Sagnik Mukherjee sagnikm@iitk.ac.in \nIndian Institute of Technology Kanpur (IIT Kanpur)\n\n",
"Tushar Shandhilya \nIndian Institute of Technology Kanpur (IIT Kanpur)\n\n",
"Ananta Mukherjee anantam@iitk.ac.in \nIndian Institute of Technology Kanpur (IIT Kanpur)\n\n",
"Ashutosh Modi ashutoshm@cse.iitk.ac.in \nIndian Institute of Technology Kanpur (IIT Kanpur)\n\n"
] | [
"Indian Institute of Technology Kanpur (IIT Kanpur)\n",
"Indian Institute of Technology Kanpur (IIT Kanpur)\n",
"Indian Institute of Technology Kanpur (IIT Kanpur)\n",
"Indian Institute of Technology Kanpur (IIT Kanpur)\n",
"Indian Institute of Technology Kanpur (IIT Kanpur)\n"
] | [
"Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)"
] | This paper describes our contribution to Se-mEval 2021 Task 1: Lexical Complexity Prediction. In our approach, we leverage the ELECTRA model and attempt to mirror the data annotation scheme. Although the task is a regression task, we show that we can treat it as an aggregation of several classification and regression models. This somewhat counterintuitive approach achieved an MAE score of 0.0654 for Sub-Task 1 and MAE of 0.0811 on Sub-Task 2. Additionally, we used the concept of weak supervision signals from Gloss-BERT in our work, and it significantly improved the MAE score in Sub-Task 1. | 10.18653/v1/2021.semeval-1.66 | [
"https://www.aclanthology.org/2021.semeval-1.66.pdf"
] | 233,004,630 | 2104.01046 | d1ba1919208095014d535d0a912d8fa80fd5b889 |
IITK@LCP at SemEval-2021 Task 1: Classification for Lexical Complexity Regression Task
August 5-6, 2021
Neil Shirude neilrs@iitk.ac.in
Indian Institute of Technology Kanpur (IIT Kanpur)
Sagnik Mukherjee sagnikm@iitk.ac.in
Indian Institute of Technology Kanpur (IIT Kanpur)
Tushar Shandhilya
Indian Institute of Technology Kanpur (IIT Kanpur)
Ananta Mukherjee anantam@iitk.ac.in
Indian Institute of Technology Kanpur (IIT Kanpur)
Ashutosh Modi ashutoshm@cse.iitk.ac.in
Indian Institute of Technology Kanpur (IIT Kanpur)
IITK@LCP at SemEval-2021 Task 1: Classification for Lexical Complexity Regression Task
Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021)
the 15th International Workshop on Semantic Evaluation (SemEval-2021), Bangkok, Thailand, August 5-6, 2021, page 541
This paper describes our contribution to Se-mEval 2021 Task 1: Lexical Complexity Prediction. In our approach, we leverage the ELECTRA model and attempt to mirror the data annotation scheme. Although the task is a regression task, we show that we can treat it as an aggregation of several classification and regression models. This somewhat counterintuitive approach achieved an MAE score of 0.0654 for Sub-Task 1 and MAE of 0.0811 on Sub-Task 2. Additionally, we used the concept of weak supervision signals from Gloss-BERT in our work, and it significantly improved the MAE score in Sub-Task 1.
Introduction
With the rapid growth in digital pedagogy, English has become an extremely popular language. Although English is considered an easy language to learn and grasp, a person's choice of words often affects a text's readability. The use of difficult words can potentially lead to a communication gap, thus hampering language efficiency. Keeping these issues in mind, many Natural Language Processing tasks for text simplification have recently been proposed (Paetzold and Specia, 2017; Sikka and Mago, 2020). Our task of lexical complexity prediction is an important step in the process of simplifying texts. SemEval 2021 Task 1 (Shardlow et al., 2021) focuses on lexical complexity prediction in English: given a sentence and a token from it, we have to predict the complexity score of the token. The task has two sub-tasks. Sub-Task 1: complexity prediction of single words; Sub-Task 2: complexity prediction of multi-word expressions (MWEs). A word might seem complex because of two major factors: a) the word is less common or complex in itself; b) the context in which the word is used makes it hard to comprehend. Observing the orthogonality of these two reasons, we captured the context-dependent features and context-independent features separately, trained models on them individually, and then combined the two using ensemble methods. We used the ELECTRA (Clark et al., 2020) model for extracting context-dependent features and GloVe embeddings (Pennington et al., 2014) for representing the word-level features. Additionally, we propose a classification pipeline that is trained on GloVe embeddings of the tokens. This pipeline can be interpreted as a model capturing different annotators' thought processes: overconfidence, under-confidence and randomness. We are making the code for our models and experiments available via GitHub¹.
* Authors equally contributed to this work.
Background
This task uses the CompLex dataset (Shardlow et al., 2020), a lexical complexity prediction dataset in English for single words and multi-word expressions (2-grams). The sentences in this task are taken from 3 corpora: Bible, Biomed and Europarl. The train, validation and test splits of the data contain 9179, 520 and 1103 instances, respectively. We used the trial data as the validation set. The aim of the task is to predict how complex a given token in a given sentence is. More mathematically, given a tuple [s, t, c], where s = [t 1 , t 2 , ...t n ] and t = t j , we have to give an estimate of the function σ such that σ(s, t) = c (s is the sentence, t is the token and c is the complexity score). The earlier focus on this task came through the SemEval 2016 Task 11 (Paetzold and Specia, 2016a); however, that was a binary classification task. Most of the participating systems used Support Vector Machines, such as Kuru (2016) and Choubey and Pateria (2016), decision trees and random forests (Choubey and Pateria (2016), Brooke et al. (2016), Ronzano et al. (2016)), and even basic threshold-based approaches (Kauchak (2016), Malmasi et al. (2016)). Very few of them, including Bingel et al. (2016), used neural networks. The system by Wróbel (2016) achieved an F1 score very close to the winning solution using only a single feature: word frequency from Wikipedia. Most of these systems use word embeddings, POS information and word frequencies as features. The winning system by Paetzold and Specia (2016b), however, uses 69 morphological, semantic and syntactic features. Another related shared task was presented at the BEA workshop in 2018 (Yimam et al., 2018). It had a probabilistic task as well as a binary classification task. Even there, the organizers concluded that feature engineering worked better than neural networks. The winning system by Gooding and Kochmar (2018) uses feature engineering with random forest and linear regression models.
System Overview
Our proposed pipeline can be divided into the following 4 main components:
a) Feature Extraction
b) Regression Pipeline
c) Classification Pipeline
d) Ensemble
The pipeline is shown in Figure 1.
Feature Extraction
ELECTRA is a transformer-based model that is trained as a discriminator rather than a generator. In our case, this model performed exceptionally well on the validation data compared to BERT (Devlin et al., 2019). We extracted context-dependent features using embeddings generated by the ELECTRA model and captured context-independent word-level features using static 200-dimensional GloVe embeddings of the tokens. In order to generate the embedding of the target word with ELECTRA, we implemented the KMP pattern matching algorithm (Wikipedia, 2021) to find the indices of the sub-tokens of the target token in the tokenized sentence. Subsequently, we calculated an average over the embeddings ELECTRA generates for these sub-tokens. When using GloVe embeddings for multi-word expressions in Sub-Task 2, the average of the embeddings of the two token words was taken as the feature vector. If a word was not present in the GloVe dictionary, its embedding was initialized to a 200-dimensional zero vector.
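A minimal sketch of this step using the Huggingface transformers API, assuming the publicly released `google/electra-base-discriminator` checkpoint; for brevity a naive sub-list scan replaces KMP (it returns the same indices, only the complexity differs) and may fail if the target's sub-tokens never appear contiguously:

```python
import torch
from transformers import ElectraTokenizer, ElectraModel

tokenizer = ElectraTokenizer.from_pretrained("google/electra-base-discriminator")
model = ElectraModel.from_pretrained("google/electra-base-discriminator")

def target_embedding(sentence, target):
    enc = tokenizer(sentence, return_tensors="pt")
    sub = tokenizer(target, add_special_tokens=False)["input_ids"]
    seq = enc["input_ids"][0].tolist()
    # naive sub-list scan standing in for KMP
    start = next(i for i in range(len(seq) - len(sub) + 1)
                 if seq[i:i + len(sub)] == sub)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (seq_len, hidden_size)
    return hidden[start:start + len(sub)].mean(dim=0)     # average over sub-tokens
```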
Regression Pipeline
The most natural way to look at the lexical complexity prediction task is to treat it as a regression task, and the regression pipeline, a significant component of our system, is based on this idea. For Sub-Task 1, a pretrained ELECTRA model was fine-tuned with a linear layer on top of it. We leveraged the model directly available in the Huggingface library (Wolf et al., 2020). Only the last transformer layer of ELECTRA was kept trainable; the remaining layers were frozen. For Sub-Task 2, a fixed ELECTRA model (non-trainable weights) was used to generate token embeddings, and a linear regression model was trained on these extracted embeddings. Weak Supervision: In order to put more attention on the target word, the use of weak supervision signals proved useful. Inspired by GlossBERT (Huang et al., 2019), the target word was wrapped in single inverted commas (' ') as a weak signal to the transformer (Vaswani et al., 2017) model. This technique significantly improved the results obtained with the regression pipeline in Sub-Task 1. However, the same technique applied to Sub-Task 2 made the scores worse.
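A sketch of the weak signal and of freezing all but the last transformer layer; the layer name `encoder.layer.11` assumes the 12-layer ELECTRA-base model, and the regression head shown is illustrative:

```python
import torch
from transformers import ElectraModel

model = ElectraModel.from_pretrained("google/electra-base-discriminator")

def add_weak_signal(sentence, target):
    """Wrap the target word in single inverted commas (GlossBERT-style signal)."""
    return sentence.replace(target, "'" + target + "'", 1)

# Freeze everything except the last transformer layer before fine-tuning
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("encoder.layer.11")

head = torch.nn.Linear(model.config.hidden_size, 1)  # linear regression head
```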
Table 1: Variation of MAE scores with and without the signalling technique for Sub-task 1, the single word task ('+ signal' means weak supervision has been used and '- signal' means otherwise).

Method   | Val MAE | Test MAE
+ signal | 0.06516 | 0.06800
- signal | 0.06990 | 0.07118
Classification Pipeline
Motivation from the Annotation Procedure: Another way to look at the task is via a novel classification pipeline inspired by the data annotation process explained in Shardlow et al. (2020). Even though the task is a regression task, each data annotator performed a 5-class classification: given a sentence and a token in the sentence, each annotator had to select one class from among Very Easy, Easy, Neutral, Difficult and Very Difficult. These classes were mapped to discrete labels between 0 and 1, namely 0, 0.25, 0.5, 0.75 and 1, respectively. The final complexity score was the average of up to 20 such annotations. The classification pipeline aims to model this data annotation procedure; its main idea is to teach classification models how to annotate data tuples. The three main components of this scheme are: a) generating dummy annotations from complexity scores, b) training classification models on the dummy annotations, and c) aggregating all predicted annotations to generate predicted complexity scores. Generation of Dummy Annotations: A given complexity score can be represented as a weighted average of its lower and upper target classes, where the weights are determined by the magnitude of the complexity score. These weights then determine the proportions of the two classes in the set of dummy annotations for that data tuple. For example, if the number of dummy annotators is n = 5 and the complexity score of the training example is c = 0.2, the lower and upper target classes are low = 0 and high = 0.25, respectively. Let α be the proportion of dummy annotations with the lower target class; correspondingly, 1 − α is the proportion with the upper target class. The number of dummy annotations with target class = low is floor(n * α) and the number with target class = high is n − floor(n * α). α can be calculated from the equation
c = α * low + (1 − α) * high
We get α = 0.2. Hence, we have floor(n * α) = 1 dummy annotation with target class = low (0) and the remaining 4 annotations with target class = high (0.25). The dummy annotation set for c = 0.2 is therefore 0, 0.25, 0.25, 0.25, 0.25. Similarly, the dummy annotation set for c = 0.8 is 0.75, 0.75, 0.75, 0.75, 1.
In this process, we also attempted to capture the impact of intentional human errors made during the data annotation procedure. Just as a weary or uninterested annotator may randomly select one of the five classes for a certain data tuple, a small fraction of the dummy annotations was assigned random values from the set {0, 0.25, 0.5, 0.75, 1}. This modification aims to model the small-scale randomness in the annotation procedure. Using this procedure, dummy annotation sets of size n can be generated for any value of c, where n can be treated as a hyperparameter. The value n can also be interpreted as the number of classification models trained in the next step.
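A minimal sketch of the dummy annotation generator described above; the `noise` fraction controlling the random "weary annotator" picks is a hypothetical value, since the exact fraction is a design choice not fixed here:

```python
import math
import random

LABELS = [0.0, 0.25, 0.5, 0.75, 1.0]

def dummy_annotations(c, n=5, noise=0.05):
    """Expand a complexity score c into n discrete annotator labels."""
    low = max(l for l in LABELS if l <= c)
    high = min(l for l in LABELS if l >= c)
    if high == low:
        anns = [low] * n
    else:
        alpha = (high - c) / (high - low)   # proportion of the lower class
        k = math.floor(n * alpha)           # annotators choosing the lower class
        anns = [low] * k + [high] * (n - k)
    # small-scale randomness modelling annotation errors
    anns = [random.choice(LABELS) if random.random() < noise else a for a in anns]
    return sorted(anns)

print(dummy_annotations(0.2, noise=0.0))  # [0.0, 0.25, 0.25, 0.25, 0.25]
print(dummy_annotations(0.8, noise=0.0))  # [0.75, 0.75, 0.75, 0.75, 1.0]
```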
Classification Models: In a diverse set of annotators, there will be over-confident annotators who select lower classes, under-confident annotators who select upper classes, and neutral annotators in between. By ensuring that the dummy annotations are sorted, the first classifier learns to annotate like the over-confident annotator, the last classifier learns to annotate like the under-confident annotator, and the classifiers in between model the neutral annotators. We trained SVM classifiers with RBF kernels, using GloVe embeddings of the token words as features.
Aggregation of Predicted Annotations:
The annotations were aggregated by simply taking the average of all predicted class labels in order to obtain the final predicted complexity scores. Each of these models may have a high individual variance, but the ensemble tends to have lower variance and bias. Also, any number of models can be inserted into the ensemble without leading to over-fitting on the training data.

Figure 3: A few worked-out examples of generating dummy annotations from complexity scores. For each of these cases, the continuous labels 0, 0.25, 0.50, 0.75 and 1 are mapped to categorical labels 1, 2, 3, 4, 5 and then fed into an SVM. The labels of the first classifier are lower than those of the second one, i.e., on a scale of confidence the first classifier sits at a lower position, so it models a less confident annotator.
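A sketch of the per-annotator SVM ensemble and the averaging step, reusing `dummy_annotations` from the sketch above; the RBF kernel and C = 1 follow our setup, while the class mapping 1-5 mirrors Figure 3 and everything else is illustrative:

```python
import numpy as np
from sklearn.svm import SVC

def train_annotator_ensemble(X, scores, n=5):
    """One SVC per sorted dummy-annotation position; X holds GloVe features."""
    cols = np.array([dummy_annotations(c, n=n) for c in scores])  # shape (N, n)
    models = []
    for k in range(n):
        y = (cols[:, k] * 4).astype(int) + 1   # map {0,...,1} to classes {1,...,5}
        models.append(SVC(kernel="rbf", C=1.0).fit(X, y))
    return models

def predict_complexity(models, X):
    preds = np.stack([(m.predict(X) - 1) / 4.0 for m in models])  # back to [0, 1]
    return preds.mean(axis=0)   # averaging = the aggregation step
```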
Ensemble
To achieve a better bias-variance trade-off and to exploit the "expertise" of the different pipelines, the final approach combines the regression and classification pipelines into an ensemble. The final predicted complexity was obtained by taking an ensemble of the predictions from the two pipelines described above. The classification pipeline for both sub-tasks uses GloVe embeddings as features with SVM classifiers. The regression pipeline for Sub-Task 1 is based on fine-tuning ELECTRA with weak supervision, and that for Sub-Task 2 on features extracted from a fixed ELECTRA model with a linear regression trained on top.
Experimental Setup
The official evaluation metric for both sub-tasks was the Pearson correlation (standard for regression tasks). For both sub-tasks, the official train/test/validation split was used. ELECTRA fine-tuning was done on an NVIDIA GTX 1080 GPU with early stopping (93 epochs). We trained the model with the MAE loss function and an Adam optimizer with lr = 1e-5, eps = 1e-8 and weight decay = 0. The training set was shuffled and the batch size was kept at 64. In the ELECTRA model, the padding parameter was set to True and the maximum length to 140. For the SVM models, the slack value was chosen to be 1, and for both SVM and linear regression the sklearn (Pedregosa et al., 2011) library was used. All hyperparameters were tuned with a grid search.
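A sketch of this stated configuration in PyTorch, continuing the earlier snippets (`model`, `head` and `tokenizer` from above); `sentences` is a placeholder for a real training batch, and the training loop itself is omitted:

```python
import torch

# Only the unfrozen ELECTRA parameters and the head are trained.
params = [p for p in model.parameters() if p.requires_grad] + list(head.parameters())
optimizer = torch.optim.Adam(params, lr=1e-5, eps=1e-8, weight_decay=0)
loss_fn = torch.nn.L1Loss()   # MAE loss

sentences = ["placeholder batch of shuffled training sentences"]  # batch size 64 in practice
batch = tokenizer(sentences, padding=True, truncation=True,
                  max_length=140, return_tensors="pt")
```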
Results
Results on Validation Data: A comparison of the baseline results with our results obtained using the regression pipeline, the classification pipeline and the ensemble of the two models on the validation set (trial data) is given in Table 2.
Error Analysis
Analyzing all the experiments and the corresponding results, the following can be concluded: a) word-level features as well as context-dependent features need to be considered when determining the complexity of a token; b) approaches based on the data annotation scheme are well suited to the lexical complexity prediction task; c) an ensemble of a large number of simple models is an effective way of tackling this task; d) models with a large number of parameters, like BERT (Devlin et al., 2019), suffer heavily from overfitting, whereas ELECTRA-base proves to be much better. The model architectures tried in earlier stages showed similar trends: ELECTRA fine-tuning produced much better scores than BERT fine-tuning, and simpler models such as a plain linear regression on GloVe embeddings showed promise, confirming that models with fewer parameters worked better. These trends across models are shown visually in Figure 4. We observed that the model underperformed on tuples from the Biomed corpus; however, the scores did not improve with BERT variants such as BioBERT (Lee et al., 2019), BioMedBERT (Chakraborty et al., 2020) and a few other transformer-based models pretrained on biomedical texts. A variant of ELECTRA pretrained on biomedical texts might have improved on this, but it could not be tried due to its unavailability.
In the majority of prior work on LCP, word frequency is used abundantly as a feature. However, in our system the scores got worse when frequency features were used alongside the others in the ensemble, and the feature by itself could not produce competitive results. Gong et al. (2020) and Mu et al. (2018) have previously shown that frequency information causes significant distortion in the embedding space. We also hypothesize that the frequency information already contained in GloVe embeddings helps us in this regard.
Conclusion
In this paper we presented a system for lexical complexity prediction, framed as a regression task. The proposed system's primary novelty is in treating it as a classification task and modelling the annotation scheme. An ensemble of these classification models and vanilla fine-tuning of the ELECTRA model proved very useful. Also, the weak supervision based approach gave the scores a significant boost for Sub-Task 1.
Figure 1: Solution Pipeline
Figure 2: Convergence of losses for fine-tuning ELECTRA with weak supervision
Table 2: Results on validation set (Mean Absolute Errors)

Task | MAE    | Pearson | MSE
One  | 0.0623 | 0.8308  | 0.0065
Two  | 0.0727 | 0.8146  | 0.0087
Table 3: Results on Validation Set for final ensemble

Results on Test Data: Our results on the test data, along with the best results obtained for each task, are shown in Table 4. The winning system's Pearson and MAE scores on the test data are as follows: 0.7886 and 0.0609 for Sub-Task 1 (single word expressions), and 0.8612 and 0.0616 for Sub-Task 2 (multi-word expressions).
MAE
Pearson MSE
One
0.0654 0.7511 0.0071
Two
0.0811 0.8277 0.0098
Table 4 :
4Results on Test Set
https://github.com/neilrs123/Lexical-Complexity-Prediction
Figure 4: Comparison of MAE values of the models we tried (Subtask I). From left the models are (1) Linear Regression with hand-crafted features, (2) Character-level RNN, (3) Character-level CNN, (4) and (5) sentence- and character-level GRU and LSTMs, (6)-(10) Linear Regression with GloVe, ELECTRA and BERT embeddings, (11) the current regression pipeline, (12) classification pipeline, (13) Final ensemble

Joachim Bingel, Natalie Schluter, and Héctor Martínez Alonso. 2016. CoastalCPH at SemEval-2016 task 11: The importance of designing your neural networks right. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1028-1033, San Diego, California. Association for Computational Linguistics.
Julian Brooke, Alexandra Uitdenbogerd, and Timothy Baldwin. 2016. Melbourne at SemEval 2016 task 11: Classifying type-level word complexity using random forests with corpus and word list features. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 975-981, San Diego, California. Association for Computational Linguistics.
Souradip Chakraborty, Ekaba Bisong, Shweta Bhatt, Thomas Wagner, Riley Elliott, and Francesco Mosconi. 2020. BioMedBERT: A pre-trained biomedical language model for QA and IR. In Proceedings of the 28th International Conference on Computational Linguistics, pages 669-679, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Prafulla Choubey and Shubham Pateria. 2016. Garuda & Bhasha at SemEval-2016 task 11: Complex word identification using aggregated learning models. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1006-1010, San Diego, California. Association for Computational Linguistics.
Kevin Clark, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. 2020. ELECTRA: Pre-training text encoders as discriminators rather than generators.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.
Chengyue Gong, Di He, Xu Tan, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2020. FRAGE: Frequency-agnostic word representation.
Sian Gooding and Ekaterina Kochmar. 2018. CAMB at CWI shared task 2018: Complex word identification with ensemble-based voting. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 184-194, New Orleans, Louisiana. Association for Computational Linguistics.
Luyao Huang, Chi Sun, Xipeng Qiu, and Xuanjing Huang. 2019. GlossBERT: BERT for word sense disambiguation with gloss knowledge. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3509-3514, Hong Kong, China. Association for Computational Linguistics.
David Kauchak. 2016. Pomona at SemEval-2016 task 11: Predicting word complexity based on corpus frequency. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1047-1051, San Diego, California. Association for Computational Linguistics.
Onur Kuru. 2016. AI-KU at SemEval-2016 task 11: Word embeddings and substring features for complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1042-1046, San Diego, California. Association for Computational Linguistics.
Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2019. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics.
Shervin Malmasi, Mark Dras, and Marcos Zampieri. 2016. LTG at SemEval-2016 task 11: Complex word identification with classifier ensembles. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 996-1000, San Diego, California. Association for Computational Linguistics.
Jiaqi Mu, Suma Bhat, and Pramod Viswanath. 2018. All-but-the-top: Simple and effective postprocessing for word representations.
Gustavo Paetzold and Lucia Specia. 2016a. SemEval 2016 task 11: Complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 560-569, San Diego, California. Association for Computational Linguistics.
Gustavo Paetzold and Lucia Specia. 2016b. SV000gg at SemEval-2016 task 11: Heavy gauge complex word identification with system voting. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 969-974, San Diego, California. Association for Computational Linguistics.
Gustavo H. Paetzold and Lucia Specia. 2017. A survey on lexical simplification. J. Artif. Int. Res., 60(1):549-593.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, Jake Vanderplas, Alexandre Passos, David Cournapeau, Matthieu Brucher, Matthieu Perrot, and Édouard Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(85):2825-2830.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Francesco Ronzano, Ahmed Abura'ed, Luis Espinosa-Anke, and Horacio Saggion. 2016. TALN at SemEval-2016 task 11: Modelling complex words by contextual, lexical and semantic features. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 1011-1016, San Diego, California. Association for Computational Linguistics.
Matthew Shardlow, Michael Cooper, and Marcos Zampieri. 2020. CompLex - a new corpus for lexical complexity prediction from Likert Scale data. In Proceedings of the 1st Workshop on Tools and Resources to Empower People with REAding DIfficulties (READI), pages 57-62, Marseille, France. European Language Resources Association.
Matthew Shardlow, Richard Evans, Gustavo Paetzold, and Marcos Zampieri. 2021. SemEval-2021 task 1: Lexical complexity prediction. In Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021).
Punardeep Sikka and Vijay Mago. 2020. A survey on text simplification.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
Wikipedia. 2021. Knuth-Morris-Pratt algorithm - Wikipedia, the free encyclopedia.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. HuggingFace's Transformers: State-of-the-art natural language processing.
Krzysztof Wróbel. 2016. PLUJAGH at SemEval-2016 task 11: Simple system for complex word identification. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), pages 953-957, San Diego, California. Association for Computational Linguistics.
Seid Muhie Yimam, Chris Biemann, Shervin Malmasi, Gustavo Paetzold, Lucia Specia, Sanja Štajner, Anaïs Tack, and Marcos Zampieri. 2018. A report on the complex word identification shared task 2018. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 66-78, New Orleans, Louisiana. Association for Computational Linguistics.
| [
"https://github.com/neilrs123/Lexical-Complexity-Prediction"
] |
[
"Self-imitating Feedback Generation Using GAN for Computer-Assisted Pronunciation Training",
"Self-imitating Feedback Generation Using GAN for Computer-Assisted Pronunciation Training"
] | [
"Seung Hee Yang \nInterdisciplinary Program in Cognitive Science\nSeoul National University\nRepublic of Korea\n",
"Minhwa Chung mchung@snu.ac.kr \nInterdisciplinary Program in Cognitive Science\nSeoul National University\nRepublic of Korea\n\nDepartment of Linguistics\nSeoul National University\nRepublic of Korea\n"
] | [
"Interdisciplinary Program in Cognitive Science\nSeoul National University\nRepublic of Korea",
"Interdisciplinary Program in Cognitive Science\nSeoul National University\nRepublic of Korea",
"Department of Linguistics\nSeoul National University\nRepublic of Korea"
] | [] | Self-imitating feedback is an effective and learner-friendly method for non-native learners in Computer-Assisted Pronunciation Training. Acoustic characteristics in native utterances are extracted and transplanted onto learner's own speech input, and given back to the learner as a corrective feedback. Previous works focused on speech conversion using prosodic transplantation techniques based on PSOLA algorithm. Motivated by the visual differences found in spectrograms of native and non-native speeches, we investigated applying GAN to generate self-imitating feedback by utilizing generator's ability through adversarial training. Because this mapping is highly under-constrained, we also adopt cycle consistency loss to encourage the output to preserve the global structure, which is shared by native and non-native utterances. Trained on 97,200 spectrogram images of short utterances produced by native and non-native speakers of Korean, the generator is able to successfully transform the non-native spectrogram input to a spectrogram with properties of self-imitating feedback. Furthermore, the transformed spectrogram shows segmental corrections that cannot be obtained by prosodic transplantation. Perceptual test comparing the self-imitating and correcting abilities of our method with the baseline PSOLA method shows that the generative approach with cycle consistency loss is promising. | 10.21437/interspeech.2019-1478 | [
"https://arxiv.org/pdf/1904.09407v1.pdf"
] | 128,346,251 | 1904.09407 | 5b378c2a4ae964fb8ac25d7b6ff689fb31880c6c |
Self-imitating Feedback Generation Using GAN for Computer-Assisted Pronunciation Training
Seung Hee Yang
Interdisciplinary Program in Cognitive Science
Seoul National University
Republic of Korea
Minhwa Chung mchung@snu.ac.kr
Interdisciplinary Program in Cognitive Science
Seoul National University
Republic of Korea
Department of Linguistics
Seoul National University
Republic of Korea
Self-imitating Feedback Generation Using GAN for Computer-Assisted Pronunciation Training
Index Terms: Computer-Assisted Pronunciation Training (CAPT), corrective feedback generation for language learning, Generative Adversarial Network (GAN)
Self-imitating feedback is an effective and learner-friendly method for non-native learners in Computer-Assisted Pronunciation Training. Acoustic characteristics in native utterances are extracted and transplanted onto learner's own speech input, and given back to the learner as a corrective feedback. Previous works focused on speech conversion using prosodic transplantation techniques based on PSOLA algorithm. Motivated by the visual differences found in spectrograms of native and non-native speeches, we investigated applying GAN to generate self-imitating feedback by utilizing generator's ability through adversarial training. Because this mapping is highly under-constrained, we also adopt cycle consistency loss to encourage the output to preserve the global structure, which is shared by native and non-native utterances. Trained on 97,200 spectrogram images of short utterances produced by native and non-native speakers of Korean, the generator is able to successfully transform the non-native spectrogram input to a spectrogram with properties of self-imitating feedback. Furthermore, the transformed spectrogram shows segmental corrections that cannot be obtained by prosodic transplantation. Perceptual test comparing the self-imitating and correcting abilities of our method with the baseline PSOLA method shows that the generative approach with cycle consistency loss is promising.
Introduction
Generating corrective feedback is an important issue in the area of spoken language technology for education [1]. This process may be painstaking if each corrected utterance has to be recorded by a teacher, or may result in a negative outcome if the automatic generation of the ideal form is not correctly synthesized. Studies on computer-assisted pronunciation training (CAPT) found that the better the match between the learners' and native speakers' voices, the more positive the impact on pronunciation training [2,3]. This emphasizes the importance of student-teacher voice similarity for the enhancement of pronunciation skills. In self-imitating feedback, the characteristics of native utterances are extracted and transplanted onto the learner's speech. Listening to the manipulated speech enables students to understand the differences between their accented utterances and the native counterparts, and to produce native-accented utterances by self-imitation.
In previous works, speech conversion methods for pronunciation teaching have been studied for Korean and Japanese learners of English, Italian learners of German, Japanese learners of Italian, and English learners of Mandarin Chinese [4,5,6,7,8]. These studies were based on the prosodic transplantation technique [9], using the PSOLA (Pitch-Synchronous Overlap and Add) algorithm [10]. Through this technique, the acoustic parameters of the native speakers, including pitch, intensity, articulation rate, and duration, are transferred to the learners' speech.
These studies have shown that corrective feedback can be successfully generated at the suprasegmental level. However, proficiency in a second language is fully attained only if the students have learned to modulate both the prosodic and segmental parameters to match those of native speakers. The previous methods have been limited to the prosodic level only, although segmental accuracy plays an important role in spoken language communication [11].
In the first part of this work, we conduct a linguistic comparison between native and non-native utterances by visualizing their differences in pairs of spectrograms, i.e., time-frequency representations of speech. The spectrogram analyses illustrate the segmental characteristics of the two domains, which motivates our idea of using an image-generating generative adversarial network (GAN) [12] to learn the mappings between native and non-native spectrograms. Our approach is a first attempt, to the best of our knowledge, at using GAN for speech correction. Since there are numerous "golden references" by native speakers, there are infinitely many mappings the generator can learn. Assuming that there is some underlying relationship between the non-native and native linguistic domains, we further adopt cycle consistency loss [13] to induce the output to preserve the global structure, which is shared by native and non-native utterances. We then compare the proposed method with the baseline PSOLA method through a perceptual evaluation, during which corrective and self-imitative effects are judged by human experts.
There are potential advantages to the proposed GAN-based corrective feedback generation. First, GANs enable a simplified feedback-generation procedure because they do not rely on the intermediate processes of feature extraction and error region detection. Second, translating spectrograms for sound manipulation in language learning is immediately useful, such as in CAPT applications. Third, the discriminator in a GAN has the ability to judge the nativelikeness of spectrograms, which can be used to perform the speech assessment task in language learning [14]. Furthermore, despite their increasing fidelity at translating static images [15,16], GANs have yet to be demonstrated to be capable of translating spectral representations of audio, which is the main issue of this paper.
Linguistic Differences between Native and Non-native Speech
One way to analyze speech is by examining spectrograms, which visually represent the varying short-term amplitude spectra of the speech waveform. Spectrogram analysis contains information on phonetic characteristics, and the practice of using spectrograms for speech recognition tasks is common in the discriminative setting [17]. We first make observations on the differences between native and non-native speech by comparing spectrogram pairs of the utterances for the same words in Korean.
In Figure 1, we show an example of a spectrogram pair for the word "half a year." While the left spectrogram captures the resonances of the vocal tract during a diphthong articulation, the right spectrogram shows its monophthong version. As a consequence, the two spectrograms can be differentiated by the number and movement of the dark bands, showing that non-native speech is more likely to substitute diphthongs with monophthongs than native speech. By observing more spectrogram examples, we find further linguistic differences, including final stop deletions, exhibited by the voiced and unvoiced region contrasts in the spectrograms, and lenition of tense consonants, which is demonstrated by the voice onset time in the spectrograms. Moreover, the presence of rhotic vowels in the formant frequencies of the non-native spectrograms is not observed for the native counterpart, as the sound does not exist in its phonetic inventory. At the suprasegmental level, the articulation rate and total duration of the native speakers tend to be shorter than in the learners' speech. These findings can be confirmed by analyses of the auditory variation patterns in [18].
Based on these observations, we draw two implications for corrective feedback generation. First, we find that spectrograms contain rich information that is enough to differentiate the characteristics of native and non-native utterances in linguistic domains. This motivates our idea for a spectrogram learning using image-generating GAN, where latent space in the audio of non-native linguistic domain is mapped to that of native linguistic domain. Second, despite the differences between the two domains, we also find that they share an underlying structure. Different renderings of the same speech are possible since there can be numerous "golden references," and in theory, there are infinitely many possible acceptable outputs. In order to avoid such confusion, it seems desirable that the outputs preserve the global structure in the input spectrograms. In the following section, we explore how GAN-based methods can exploit these properties.
Feedback Generation using GAN
Generative Adversarial Networks
GANs have attracted attention for their ability to generate convincing images and speeches. GANs [12] are generative models that learn to map the training samples to samples with a prior distribution. The generator (G) performs this mapping by imitating the real data distribution to generate fake samples. G learns the mapping by means of adversarial training, where the discriminator (D) classifies whether the input is a fake sample generated by G or a real sample. The task for D is to correctly identify the real samples as real, and thereby distinguish them from the fake samples. The adversarial characteristic is due to the fact that G has to produce better imitations in order to make D misclassify them as real samples.

Figure 1. An example of a spectrogram pair for the word "half a year (반년)" in Korean uttered by native (left) and non-native (right) speakers. We observe that spectrogram comparisons are able to capture linguistic differences, which motivates our corrective feedback design choices.
The misclassification loss is used for further improvement of the generator. During the training process, D back-propagates fake samples from G and correctly classifies them as fake, and in turn, G tries to generate better imitations by adapting its parameters towards the real data distribution in the training data. In this way, D transmits information to G on what is real and what is fake. This adversarial learning process is formulated as a minimax game between G and D:

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))] \quad (1)$$

where $p_{data}(x)$ is the real data distribution and $p_z(z)$ is the prior distribution. For a given $x$, $D(x)$ is the probability that $x$ is drawn from $p_{data}(x)$, and $D(G(z))$ is the probability that the generated distribution is drawn from $p_z(z)$.
Conditional GANs (cGANs) learn a conditional generative model [19] where we condition on the input and generate a corresponding output. G tries to minimize the objective below against an adversarial D that tries to maximize it:

$$\mathcal{L}_{cGAN}(G, D) = \mathbb{E}_{x,y}[\log D(x, y)] + \mathbb{E}_{x,z}[\log(1 - D(x, G(x, z)))] \quad (2)$$

[19] demonstrated that cGANs can solve a wide variety of problems by testing the method on nine different graphics and vision tasks, such as style transfer and product photo generation. By interpreting the speech correction task as a spectrogram translation problem, we explore the generality of conditional GANs.
With large enough capacity, the adversarial loss alone may not guarantee that the learned function can map the input to the desired output. In our case, this may result in inappropriate or unwanted corrections generated by the network, which is highly undesirable for self-imitating learning. [13] introduced cycle consistency loss to further reduce the space of possible mapping functions. This is incentivized by the idea that the learned mapping should be cycle-consistent, which is trained by the forward and backward cycle-consistency losses:
$$\mathcal{L}_{cyc}(G, F) = \mathbb{E}_{x \sim p_{data}(x)}[\|F(G(x)) - x\|_1] + \mathbb{E}_{y \sim p_{data}(y)}[\|G(F(y)) - y\|_1] \quad (3)$$
Here, the network contains two mapping functions G : X → Y and F : Y → X. For each image x from domain X, the translation cycle should be able to bring x back to the original image, and vice versa. While the adversarial loss trains to match the distribution of generated images to the data distribution in the target domain, the cycle consistency losses can prevent the learned mappings G and F from contradicting each other. In addition to the conditional GAN, we explore the generator's behavior when trained with the full objective including adversarial and cycle consistency losses.
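As an illustration of how these pieces combine, below is a minimal PyTorch sketch of one generator-side loss evaluation under the full objective, i.e., an adversarial term plus the cycle term of eq. (3). The placeholder modules, the weight lambda_cyc, and the function name generator_loss are our assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Placeholder modules; the paper adopts the CycleGAN generator of [24] and a
# PatchGAN discriminator [19], stood in here by trivial convolutions.
G = nn.Conv2d(1, 1, 3, padding=1)    # G: non-native (X) -> native (Y)
F = nn.Conv2d(1, 1, 3, padding=1)    # F: native (Y) -> non-native (X)
D_Y = nn.Conv2d(1, 1, 3, padding=1)  # discriminator on the native domain

bce = nn.BCEWithLogitsLoss()  # adversarial criterion
l1 = nn.L1Loss()              # cycle-consistency criterion, as in eq. (3)
lambda_cyc = 10.0             # assumed weight of the cycle term

def generator_loss(x, y):
    """One generator-side loss for a batch of non-native (x) and native (y)
    spectrograms, shaped (batch, 1, 128, 128)."""
    fake_y = G(x)
    pred = D_Y(fake_y)
    # Adversarial term: G tries to make D_Y label G(x) as real.
    adv = bce(pred, torch.ones_like(pred))
    # Forward and backward cycle terms: F(G(x)) ~ x and G(F(y)) ~ y.
    cyc = l1(F(fake_y), x) + l1(G(F(y)), y)
    return adv + lambda_cyc * cyc
```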
Self-Imitating Feedback Generation using GAN
The proposed method using GAN is done in five steps: 1) native (N) and non-native (NN) paired speech preparation, 2) speech-to-spectrogram conversion, 3) spectrogram-to-spectrogram training, 4) inversion back into an audio signal, and 5) playing the generated audio back to the learner. GAN is used in the third step, and the conversion techniques are used during the second and the fourth steps. In order to train using the conditional GAN, the prepared samples are first concatenated and fed into the generator, where adversarial training is done using the discriminator, which classifies whether the samples are fake (generated/corrected speech) or real (native speech). The process is shown in Figure 2. For the cycle-consistent adversarial training, there is no concatenation step, since it takes unpaired input.
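As a rough illustration, the five steps can be wired together as below; wav_to_spec, generator, and spec_to_wav are hypothetical names for the components detailed in the following subsections.

```python
# Hypothetical glue code for the five-step feedback pipeline.
def generate_feedback(learner_wav, wav_to_spec, generator, spec_to_wav):
    spec = wav_to_spec(learner_wav)              # step 2: speech to spectrogram
    corrected_spec = generator(spec)             # step 3: spectrogram translation
    corrected_wav = spec_to_wav(corrected_spec)  # step 4: inversion to audio
    return corrected_wav                         # step 5: played back to learner
```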
Experimental Method
Corpus
The proposed model is trained on L2KSC (L2 as Korean Speech Corpus) [20]. The corpus is used because it is a parallel native and non-native speech database available to the public and fits our experiment settings. There are 217 non-native speakers with 27 mother-tongue backgrounds, and 107 native speakers (54 females and 53 males). Each speaker read 300 short utterances, which are on average one second in length. When each spectrogram of a non-native recording is paired with all native recordings of the same utterance, there are 1,357,321 pairs of samples for the conditional GAN training. For cycle-consistent adversarial training, there are 32,100 and 65,100 spectrograms in the native and non-native domains, respectively. The 162 spectrograms for testing are completely held out.
Experiments
Baseline Implementation
Baseline corrective feedback sounds were generated using the PSOLA algorithm, implemented in Praat [21]. The acoustic parameters of pitch, intensity, and duration of the native speech of the same utterance are extracted and transplanted onto the held-out non-native recordings manually, to provide the best performance of the PSOLA algorithm.
Speech-to-Spectrogram and Spectrogram-to-speech Conversions
We first convert the audio signal to a spectrogram using the Short-Time Fourier Transform (STFT) with windows of 512 frames and 33% overlap, converted to a dB amplitude scale, represented using the mel scale, and padded with white noise to generate 128x128-pixel images.
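A minimal sketch of this conversion using librosa is given below; the sample rate, hop length, and padding scheme are assumptions inferred from the description, not the authors' exact settings.

```python
import numpy as np
import librosa

def wav_to_spec(path, sr=16000):
    """Sketch of the described speech-to-spectrogram conversion."""
    y, _ = librosa.load(path, sr=sr)
    hop = int(512 * (1 - 1 / 3))  # 512-sample windows with 33% overlap
    stft = np.abs(librosa.stft(y, n_fft=512, hop_length=hop))
    mel = librosa.feature.melspectrogram(S=stft ** 2, sr=sr, n_mels=128)
    spec_db = librosa.power_to_db(mel)  # dB amplitude scale
    if spec_db.shape[1] < 128:
        # The paper pads with white noise; here a low-amplitude noise floor
        # is drawn around the minimum of the spectrogram.
        pad = spec_db.min() + np.random.rand(128, 128 - spec_db.shape[1])
        spec_db = np.concatenate([spec_db, pad], axis=1)
    return spec_db[:, :128]  # 128x128 "image"
```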
We use the griffin_lim framework [22], a Python implementation of the Griffin and Lim algorithm, to convert the spectrogram to an audio signal using the magnitude of its STFT. It performs low-pass filtering of the spectrogram by zeroing all frequency bins above the preset cutoff frequency, and then uses the Griffin and Lim algorithm to reconstruct an audio signal from the spectrogram. The algorithm works to rebuild the signal with an STFT whose magnitude part is as close as possible to the spectrogram. For high-quality output and minimum loss in the transformations, it is run for 1,000 iterations. Perceptual evaluation of the regenerated audio signal before and after transformation does not show any significant difference in quality. The different utility tools built around the framework are released on our GitHub repository.
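The inversion step could look like the following sketch, here using librosa's built-in Griffin-Lim implementation rather than the authors' griffin_lim package; the low-pass filtering step and the inversion of the mel projection are omitted for brevity.

```python
import librosa

def spec_to_wav(mag_spec, n_fft=512, hop_length=341, n_iter=1000):
    """Reconstruct an audio signal whose STFT magnitude matches mag_spec
    (a linear-frequency magnitude spectrogram) via Griffin-Lim."""
    return librosa.griffinlim(mag_spec, n_iter=n_iter,
                              hop_length=hop_length, n_fft=n_fft)
```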
Spectrogram-to-Spectrogram Training
The spectrogram-to-spectrogram translation for the conditional GAN follows the same network architecture as the Pix2Pix framework [19], which uses a "U-Net"-shaped generator [23] with skip connections that allow it to capture low-level information shared by the input and output while circumventing the information loss at the bottleneck. For the discriminator training, a Markovian PatchGAN [19] is used to classify whether each N x N patch in an image is real or fake. The CycleGAN framework [13] is used for the cycle-consistent adversarial training; it adopts the generator architecture from [24], which has shown impressive results for neural style transfer, and uses a PatchGAN discriminator.
Native and non-native pairs of spectrograms corresponding to the same utterances are taken as the input to the Pix2Pix framework, while the unpaired 97,200 spectrogram images of the two domains are fed into the CycleGAN network. The data augmentation option of flipping images is disabled, and the batch size was increased to 4 from the default 1. When the training is finished, the model is applied to all the test spectrograms. The web interface visualization of the training process offered in the frameworks was used to monitor the training and track how the spectrograms and the corresponding sounds evolve over time. The supplementary material, "Training Process Visualization with Sound.mp4," shows a case of corrective evolution for the word "first time (처음)," where the spectrogram is correcting a syllable deletion error. Figure 3 shows the spectrograms for non-native, generated, and native speeches at epoch 1 and epoch 3 in the Pix2Pix framework. It shows that the generator quickly learns to imitate the native spectrogram by generating a fake version of the reference. After more training, the generator has learned to generate spectrograms with higher proximity to the native. Since the test data was completely held out, this means that the model learned to recognize which word the spectrogram represents and identified which native spectrogram should be mapped to the given non-native input.
Results and Evaluation
Spectrogram Generation Results
Perceptual Evaluation
Method
Our ultimate goal is to produce examples that are corrective, self-imitative, and intelligible to humans. To this end, we measure the ability of human annotators to label the generated audio. Using our three models, PSOLA, Pix2Pix, and CycleGAN, we generate evaluation files, which amount to 486 waveforms in total. Examples of the spectrograms before the conversion are shown in Figure 4. Four native Korean raters with backgrounds in linguistics assigned subjective values from 1 to 5 for five criteria: holistic impression of correction, degree of segmental correction, degree of suprasegmental correction, sound quality, and speaker voice imitability. A score of 3 was assigned if there was no difference before and after the manipulation. The raters listened to the original non-native utterance, followed by a generated output from one of the three models. The order of presentation was randomized.
Result
We report MOS (mean opinion score) values in Table 1. It shows that our newly proposed CycleGAN-based speech correction method is able to generate corrective feedback. Linguistic analysis shows that the generator's corrective ability is effective in both the segmental and suprasegmental aspects. Since an error in the generated feedback can be critical in learning applications, we verified that all corrective ability scores for CycleGAN are 3 or above, which means that there was no degradation. For the baseline PSOLA method, the evaluators report that there were numerous cases where the generated results do not make corrections, or make corrections that are perceptually trivial. On the other hand, the results generated using the Pix2Pix framework often fail to produce a corrected speech. The supplementary material, "Test Data Visualization with Sound.mp4," enables direct comparisons with auditory data.

Figure 4. Different methods for corrective generation using the PSOLA algorithm and spectrogram learning.
In addition to MOS scores, we conducted auditory transcription of the generated utterances on a random subsample of the test set to qualitatively analyze where the corrections occur. Successful cases include corrections of detensifying errors of /s˭/ in the word "fishing (낚시)," as mentioned in the spectrogram comparisons in Section 2. Moreover, while the statement "It is fast (빨라요)" was realized as a question with a final rise, it was corrected by the generator. The rate of speech tends to be closer to the native when there were silences between syllables in the non-native speech. We also found cases of negative correction, such as omitting a syllable or a final stop.
In all cases, the generated sound qualities were worse than the original recording. For the two generative models, it is possible that the poor qualitative ratings are primarily caused by the lossy Griffin-Lim inversion; therefore, synthesizing clear audio needs to be addressed in future work. Moreover, there is room for improvement in CycleGAN's imitability score, which is lower than the PSOLA method's. This may be due to the diversity in reference styles, and future work can be extended for the generator to better imitate speaker voice characteristics.
Conclusion
This study lays the groundwork for an automatic self-imitating speech correction system for pronunciation training. To the best of our knowledge, it is the first approach comparing different GAN architectures on spectrograms. The perceptual evaluation shows that cycle-consistent adversarial training is a promising approach for the speech correction task. In future work, we plan to improve speaker voice imitability and to operate on longer audio recordings to explore a variety of conditioning strategies.
Figure 2. Experiment method using the speech-to-spectrogram conversion, spectrogram learning using conditional GAN, and spectrogram-to-speech conversion for feedback generation.
Figure 3. Spectrogram learning using the Pix2Pix framework for non-native, generated, and native speeches, from left to right, at epoch 1 (above) and epoch 3 (below).
Table 1: MOS values of the perceptual test by four human experts on self-imitation feedback generation (SQ: Sound Quality). The Holistic, Segmental, and Suprasegmental columns are the Corrective Ability scores.

Model      Holistic   Segmental   Suprasegmental   Imitability   SQ      Avg.
PSOLA      3.118      3.029       3.324            4.029         2.794   3.259
Pix2Pix    1.970      2.485       2.152            2.697         1.636   2.188
CycleGAN   4.000      4.333       4.364            3.515         2.667   3.776
[1] M. Eskenazi, "An overview of spoken language technology for education," Speech Communication, vol. 51, pp. 832-844, 2009.
[2] K. Probst, Y. Ke, and M. Eskenazi, "Enhancing foreign language tutors - In search of the golden speaker," Speech Communication, vol. 37, pp. 161-173, 2002.
[3] D. Felps, H. Bortfeld, and R. Gutierrez-Osuna, "Foreign accent conversion in computer assisted pronunciation training," Speech Communication, vol. 51, no. 10, pp. 920-932, 2009.
[4] K. Yoon, "Imposing native speakers' prosody on non-native speakers' utterances: The technique of cloning prosody," Journal of the Modern British and American Language & Literature, vol. 25, no. 4, pp. 197-215, 2007.
[5] K. Nagano and K. Ozawa, "English speech training using voice conversion," in Proceedings of the 1st International Conference on Spoken Language Processing, Kobe, Japan, 1990, pp. 1169-1172.
[6] M. P. Bissiri, H. R. Pfitzinger, and H. G. Tillmann, "Lexical stress training of German compounds for Italian speakers by means of resynthesis and emphasis," in Proceedings of the 11th Australian International Conference on Speech Science & Technology, University of Auckland, New Zealand, 2006, pp. 24-29.
[7] E. Pellegrino and V. Debora, "Self-imitation in prosody training: a study on Japanese learners of Italian," in Proceedings of SLaTE 2015, Leipzig, Germany, vol. 5, 2015, pp. 53-57.
[8] M. Peabody and S. Seneff, "Towards automatic tone correction in non-native Mandarin," in Chinese Spoken Language Processing, Springer, Berlin, Heidelberg, 2006, pp. 602-613.
[9] M. Pettorino and M. Vitale, "Transplanting native prosody into second language speech," in Methodological Perspectives on Second Language Prosody, 2012, pp. 11-16.
[10] F. Charpentier and E. Moulines, "Pitch-synchronous waveform processing techniques for text-to-speech synthesis using diphones," in Proceedings of the First European Conference on Speech Communication and Technology (Eurospeech), 1989, pp. 2013-2019.
[11] S. H. Yang and M. Chung, "Linguistic factors affecting evaluation of L2 Korean speech proficiency," in Proceedings of SLaTE 2017, Stockholm, Sweden, 2017, pp. 53-58.
[12] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, "Generative adversarial nets," in Advances in Neural Information Processing Systems, 2014, pp. 2672-2680.
[13] J. Y. Zhu, T. Park, P. Isola, and A. Efros, "Unpaired image-to-image translation using cycle-consistent adversarial networks," in Proceedings of ICCV, 2017.
[14] S. H. Yang and M. Chung, "Speech assessment using generative adversarial network," in Proceedings of the Machine Learning in Speech and Language Processing Workshop, 2018.
[15] A. Radford, L. Metz, and S. Chintala, "Unsupervised representation learning with deep convolutional generative adversarial networks," in Proceedings of ICLR, 2016.
[16] T. Karras, T. Aila, S. Laine, and J. Lehtinen, "Progressive growing of GANs for improved quality, stability, and variation," in Proceedings of ICLR, 2018.
[17] S. Hershey, S. Chaudhuri, D. P. Ellis, J. Gemmeke, A. Jansen, C. Moore, M. Plakal, D. Platt, R. Saurous, B. Seybold, et al., "CNN architectures for large-scale audio classification," in Proceedings of ICASSP, 2017.
[18] S. H. Yang, M. Na, and M. Chung, "Modeling pronunciation variations for non-native speech recognition of Korean produced by Chinese learners," in Proceedings of SLaTE 2015, Leipzig, Germany, 2015, pp. 95-99.
[19] P. Isola, J. Y. Zhu, T. Zhou, and A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of CVPR, 2017.
[20] S. Lee and J. Chang, "Design and construction of speech corpus for Korean as a foreign language (L2KSC)," The Journal of Chinese Language and Literature, vol. 33, pp. 35-53, 2005.
[21] P. Boersma, "Praat, a system for doing phonetics by computer," Glot International, vol. 5, pp. 341-345, 2001.
[22] D. Griffin and J. Lim, "Signal estimation from modified short-time Fourier transform," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 32, pp. 236-243, 1984.
[23] O. Ronneberger, P. Fischer, and T. Brox, "U-Net: Convolutional networks for biomedical image segmentation," in MICCAI, Springer, 2015, pp. 234-241.
[24] J. Johnson, A. Alahi, and L. Fei-Fei, "Perceptual losses for real-time style transfer and super-resolution," in Proceedings of ECCV, 2016.
| [] |
[
"HIGHLY FAST TEXT SEGMENTATION WITH PAIRWISE MARKOV CHAINS",
"HIGHLY FAST TEXT SEGMENTATION WITH PAIRWISE MARKOV CHAINS"
] | [
"Elie Azeraf elie.azeraf@ibm.com ",
"Emmanuel Monfrini ",
"Wojciech Pieczynski ",
"\nWatson Department\nSAMOVAR, Telecom SudParis Institut Polytechnique de Paris Emmanuel Vignon Watson Department IBM GSB\nIBM GSB\nFrance, France\n",
"\nSAMOVAR, Telecom SudParis Institut Polytechnique de Paris\n\n"
] | [
"Watson Department\nSAMOVAR, Telecom SudParis Institut Polytechnique de Paris Emmanuel Vignon Watson Department IBM GSB\nIBM GSB\nFrance, France",
"SAMOVAR, Telecom SudParis Institut Polytechnique de Paris\n"
] | [] | Natural Language Processing (NLP) models' current trend consists of using increasingly more extra-data to build the best models possible. This implies more expensive computational costs and training times; the resulting difficulties for deployment and worries about these models' carbon footprint reveal a critical problem for the future. Against this trend, our goal is to develop NLP models requiring no extra-data and minimizing training time. To do so, in this paper, we explore Markov chain models, the Hidden Markov Chain (HMC) and the Pairwise Markov Chain (PMC), for NLP segmentation tasks. We apply these models to three classic applications: POS Tagging, Named-Entity-Recognition, and Chunking. We develop an original method to adapt these models to the specific challenges of text segmentation, obtaining relevant performances with very short training and execution times. PMC achieves results equivalent to those obtained by Conditional Random Fields (CRF), one of the most applied models for these tasks when no extra-data are used. Moreover, PMC has training times 30 times shorter than the CRF ones, which validates this model given our objectives. | 10.1109/cist49399.2021.9357304 | [
"https://arxiv.org/pdf/2102.11037v1.pdf"
] | 231,985,699 | 2102.11037 | 9de7cda9a43dfac403cdeca6fbdda7565bab8432 |
HIGHLY FAST TEXT SEGMENTATION WITH PAIRWISE MARKOV CHAINS
17 Feb 2021
Elie Azeraf elie.azeraf@ibm.com
Emmanuel Monfrini
Wojciech Pieczynski
Watson Department
SAMOVAR, Telecom SudParis Institut Polytechnique de Paris Emmanuel Vignon Watson Department IBM GSB
IBM GSB
France, France
SAMOVAR, Telecom SudParis Institut Polytechnique de Paris
HIGHLY FAST TEXT SEGMENTATION WITH PAIRWISE MARKOV CHAINS
17 Feb 2021. Chunking · Hidden Markov Chain · Named Entity Recognition · Pairwise Markov Chain · Part-Of-Speech tagging
Natural Language Processing (NLP) models' current trend consists of using increasingly more extra-data to build the best models possible. This implies more expensive computational costs and training times; the resulting difficulties for deployment and worries about these models' carbon footprint reveal a critical problem for the future. Against this trend, our goal is to develop NLP models requiring no extra-data and minimizing training time. To do so, in this paper, we explore Markov chain models, the Hidden Markov Chain (HMC) and the Pairwise Markov Chain (PMC), for NLP segmentation tasks. We apply these models to three classic applications: POS Tagging, Named-Entity-Recognition, and Chunking. We develop an original method to adapt these models to the specific challenges of text segmentation, obtaining relevant performances with very short training and execution times. PMC achieves results equivalent to those obtained by Conditional Random Fields (CRF), one of the most applied models for these tasks when no extra-data are used. Moreover, PMC has training times 30 times shorter than the CRF ones, which validates this model given our objectives.
Introduction
In the past ten years, developments in Deep Learning methods [1,2] have enabled research in Natural Language Processing (NLP) to take an impressive leap. Some tasks, like Question Answering [3] or Sentiment Analysis [4], seemed unrealistic twenty years ago, and nowadays recent neural models achieve better scores than humans [5][6] for these applications. The main motivation behind this dynamic is the direct application of NLP models in industry, with tasks such as Named-Entity-Recognition and mail classification. However, the cost of improving models' performances keeps increasing, and learning algorithms need ever more data and computing power to be trained and to make predictions. We can produce models that achieve impressive scores, but deployment and climate change problems [7] raise some issues. Our aim in this paper is to initiate a reflection around light models by introducing a new Markov model design for text segmentation and comparing it with machine learning algorithms that have a reasonable carbon impact and execution time. The motivation for the choice of segmentation tasks is explained later in this paper.
Hidden Markov Chains (HMCs), introduced by Stratonovich sixty years ago [8][9][10][11][12][13], which model only limited correlations, are widely used in machine learning and have especially been applied to NLP segmentation tasks [14][15][16].
For over twenty years, HMCs have been strictly generalized to Pairwise Markov Chains (PMCs) [17], a family of models including HMCs. In PMCs, the "hidden" chain is not necessarily Markov, and the noise is modeled in a more correlated -and thus more informative -way. However, PMCs keep the same advantages as HMCs in regards to the hidden data estimation. In particular, the training and estimation tasks still have linear complexity. PMCs have especially been studied for image segmentation, with discrete hidden variables and continuous observations. It turns out that using PMCs instead of HMCs can divide the error rate by two [18] [19]. However, for specific reasons relative to language processing, which we will develop later, PMCs have never been applied for NLP tasks.
This paper explores the interest in using PMCs for three of the main text segmentation tasks: Part-Of-Speech (POS) Tagging, Chunking, and Named-Entity-Recognition (NER). The best methods for these tasks are based on Deep Learning models [20] [21]. However, to produce excellent scores, these models require a large amount of extra-data, which results in very long training time and difficulties in deploying them with classic architectures.
The paper is organized as follows. In the next section, we present and compare PMCs and HMCs, the bayesian segmentation methods, and the parameter estimation algorithm. The third section is devoted to the text segmentation tasks and an original way to adapt Markov chain models for these tasks while keeping fast training and relevant results. Experiments are presented in section four. The last section is devoted to conclusions and perspectives.
Markov Chain Models
Let $X_{1:T} = (X_1, \ldots, X_T)$ and $Y_{1:T} = (Y_1, \ldots, Y_T)$ be two discrete stochastic processes. For all $t$ in $\{1, \ldots, T\}$, $X_t$ takes its values in $\Lambda_X = \{\lambda_1, \ldots, \lambda_N\}$ and $Y_t$ takes its values in $\Omega_Y = \{\omega_1, \ldots, \omega_M\}$. Let us then consider the process $Z_{1:T} = (X_{1:T}, Y_{1:T})$.

Our study takes place in the case of latent variables, with an observed realization $y_{1:T}$ of $Y_{1:T}$, and the corresponding hidden realization $x_{1:T}$ of $X_{1:T}$ to be estimated.

In the following, in order to simplify the notation, for all $t$ in $\{1, \ldots, T\}$, the events $\{X_t = x_t\}$, $\{Y_t = y_t\}$, and $\{Z_t = z_t\}$ will be denoted by $\{x_t\}$, $\{y_t\}$, and $\{z_t\}$, respectively.
HMC -PMC
Starting from the most correlated shape of the couple $(X_{1:T}, Y_{1:T})$, we can say that $Z_{1:T}$ is an HMC if it is possible to write its probability law as:

$$p(z_{1:T}) = p(x_1)\, p(y_1|x_1)\, p(x_2|x_1)\, p(y_2|x_2) \cdots p(x_T|x_{T-1})\, p(y_T|x_T) \quad (1)$$
Moreover, for all $\lambda_i, \lambda_j \in \Lambda_X$ and $\omega_k \in \Omega_Y$, we consider homogeneous HMCs defined with the following parameters:

• the initial probability $\pi = (\pi(1), \ldots, \pi(N))$ with $\pi(i) = p(x_1 = \lambda_i)$;
• the transition matrix $A = \{a_i(j)\}$ with, $\forall t \in \{1, \ldots, T-1\}$, $a_i(j) = p(x_{t+1} = \lambda_j | x_t = \lambda_i)$;
• the emission matrix $B = \{b_i(k)\}$ with, $\forall t \in \{1, \ldots, T\}$, $b_i(k) = p(y_t = \omega_k | x_t = \lambda_i)$.
An oriented dependency graph of the HMC is given in Figure 1.

We can note that this dependency graph is almost reduced to the minimum needed to link dependent couples of sequential data, and yet the HMC is known to be a robust model. PMCs, which we detail below, allow us to introduce more connections in the graph, as shown in Figure 2.
Starting from the most correlated shape of the couple $(X_{1:T}, Y_{1:T})$, we can say that $Z_{1:T}$ is a PMC if $Z_{1:T}$ is a Markov chain, which is equivalent to admitting that its distribution is of the form:

$$p(z_{1:T}) = p(z_1)\, p(z_2|z_1) \cdots p(z_T|z_{T-1}) \quad (2)$$

which can be rewritten, without loss of generality, as

$$p(z_{1:T}) = p(x_1)\, p(y_1|x_1)\, p(x_2|x_1, y_1)\, p(y_2|x_2, x_1, y_1) \cdots p(x_T|x_{T-1}, y_{T-1})\, p(y_T|x_T, x_{T-1}, y_{T-1}) \quad (3)$$
When comparing (1) and (3), we can see that a PMC is an HMC [17] if and only if, $\forall t \in \{1, \ldots, T-1\}$:

• $p(x_{t+1}|x_t, y_t) = p(x_{t+1}|x_t)$, and
• $p(y_{t+1}|x_{t+1}, x_t, y_t) = p(y_{t+1}|x_{t+1})$.
From a graphical point of view, Figures 1 and 2 illustrate how PMCs are more general than HMCs. In particular, the Markovian assumption on $x_{1:T}$ made for HMCs is not assumed for PMCs, but deterministic Bayesian inference is still possible.
In this paper, we consider homogeneous PMCs defined with three sets of parameters, $\forall \lambda_i, \lambda_j \in \Lambda_X$, $\forall \omega_k, \omega_l \in \Omega_Y$, $\forall t \in \{1, \ldots, T-1\}$:

• the initial probability matrix $\Pi^{PMC} = \{\pi^{PMC}(i, k)\}$ with $\pi^{PMC}(i, k) = p(x_1 = \lambda_i, y_1 = \omega_k)$;
• the transition matrix $A^{PMC} = \{a^{PMC}_{i,k}(j)\}$ with $a^{PMC}_{i,k}(j) = p(x_{t+1} = \lambda_j | x_t = \lambda_i, y_t = \omega_k)$;
• the emission matrix $B^{PMC} = \{b^{PMC}_{i,j,k}(l)\}$ with $b^{PMC}_{i,j,k}(l) = p(y_{t+1} = \omega_l | x_t = \lambda_i, x_{t+1} = \lambda_j, y_t = \omega_k)$.
Bayesian segmentation
There are two Bayesian methods to estimate the realization $x_{1:T}$ of the hidden chain: the Marginal Posterior Mode (MPM) and the Maximum A Posteriori (MAP).

The MPM estimator is given by $\hat{x}_{MPM} = (\hat{x}_1, \ldots, \hat{x}_T)$ with, $\forall t \in \{1, \ldots, T\}$, $p(\hat{x}_t|y_{1:T}) = \sup_{\lambda_i \in \Lambda_X} p(x_t = \lambda_i|y_{1:T})$, and the MAP estimator is $\hat{x}_{MAP} = \arg\max_{x_{1:T} \in (\Lambda_X)^T} p(x_{1:T}|y_{1:T})$. The MPM estimator is computed with the Forward-Backward algorithm [9] [12], while MAP is computed with the Viterbi one [22]. We tested both of them, and MPM gives slightly better results than MAP in terms of accuracy for the tasks we consider, so we only present results for MPM.
We thus have to compute, $\forall t \in \{1, \ldots, T\}$ and $\forall \lambda_i \in \Lambda_X$, $p(x_t = \lambda_i|y_{1:T})$, to maximize the posterior marginals with the most probable state at every time $t \in \{1, \ldots, T\}$. We present the Forward-Backward algorithm in the case of PMC, which generalizes the classical HMC one.
$\forall t \in \{1, \ldots, T\}$, $\forall \lambda_i \in \Lambda_X$:

$$p(x_t = \lambda_i|y_{1:T}) = \frac{\alpha_t(i)\,\beta_t(i)}{\sum_{\lambda_j \in \Lambda_X} \alpha_t(j)\,\beta_t(j)}$$

where, $\forall t \in \{1, \ldots, T\}$, $\forall \lambda_i \in \Lambda_X$:

$$\alpha_t(i) = p(y_{1:t}, x_t = \lambda_i), \qquad \beta_t(i) = p(y_{t+1:T}|x_t = \lambda_i, y_t).$$

$\forall \lambda_i \in \Lambda_X$, $\forall t \in \{1, \ldots, T\}$, $\alpha_t(i)$ and $\beta_t(i)$ can still be computed thanks to the forward and backward recursions:

• $\alpha_1(i) = \pi^{PMC}(i, k)$ with $y_1 = \omega_k$;
• $\forall t \in \{1, \ldots, T-1\}$, with $y_t = \omega_k$ and $y_{t+1} = \omega_l$, $\alpha_{t+1}(i) = \sum_{\lambda_j \in \Lambda_X} a^{PMC}_{j,k}(i)\, b^{PMC}_{j,i,k}(l)\, \alpha_t(j)$;
• $\beta_T(i) = 1$;
• $\forall t \in \{1, \ldots, T-1\}$, with $y_t = \omega_k$ and $y_{t+1} = \omega_l$, $\beta_t(i) = \sum_{\lambda_j \in \Lambda_X} a^{PMC}_{i,k}(j)\, b^{PMC}_{i,j,k}(l)\, \beta_{t+1}(j)$.
One can normalize the forward and backward probabilities at every step, which avoids numerical underflow problems without modifying the results. This algorithm can be executed with matrix computations, allowing a highly fast execution.
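As an illustration, the following NumPy sketch implements these recursions with per-step normalization, assuming dense parameter arrays laid out as defined above; the partial HMC fallback for unseen patterns (see Figure 3) is omitted for brevity, so unseen patterns would yield zero rows here.

```python
import numpy as np

def pmc_mpm(y, pi_pmc, A_pmc, B_pmc):
    """MPM segmentation of one observed sequence y of word indices.
    pi_pmc[i, k] = p(x_1 = i, y_1 = k)
    A_pmc[i, k, j] = p(x_{t+1} = j | x_t = i, y_t = k)
    B_pmc[i, j, k, l] = p(y_{t+1} = l | x_t = i, x_{t+1} = j, y_t = k)
    """
    T, N = len(y), pi_pmc.shape[0]
    alpha = np.zeros((T, N))
    beta = np.ones((T, N))
    alpha[0] = pi_pmc[:, y[0]]
    alpha[0] /= alpha[0].sum()  # normalization avoids numerical underflow
    for t in range(T - 1):
        # trans[i, j] = a_{i,k}(j) * b_{i,j,k}(l) with k = y[t], l = y[t+1]
        trans = A_pmc[:, y[t], :] * B_pmc[:, :, y[t], y[t + 1]]
        alpha[t + 1] = alpha[t] @ trans
        alpha[t + 1] /= alpha[t + 1].sum()
    for t in range(T - 2, -1, -1):
        trans = A_pmc[:, y[t], :] * B_pmc[:, :, y[t], y[t + 1]]
        beta[t] = trans @ beta[t + 1]
        beta[t] /= beta[t].sum()
    # Posterior marginals are proportional to alpha * beta; take the argmax.
    return (alpha * beta).argmax(axis=1)
```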
Parameter estimation
We estimate the parameters with the maximum likelihood estimator [23] [14]. It consists in computing the empirical frequencies of the probabilities of interest. We have, $\forall \lambda_i, \lambda_j \in \Lambda_X$, $\forall \omega_k, \omega_l \in \Omega_Y$,

$$\hat{\pi}(i) = \frac{N^0_i}{L}, \quad \hat{a}_i(j) = \frac{N_{i,j}}{N_i}, \quad \hat{b}_i(k) = \frac{M_{i,k}}{N_i}$$

and

$$\hat{\pi}^{PMC}(i, k) = \frac{N^0_{i,k}}{L}, \quad \hat{a}^{PMC}_{i,k}(j) = \frac{N_{i,k,j}}{M_{i,k}}, \quad \hat{b}^{PMC}_{i,j,k}(l) = \frac{N_{i,k,j,l}}{N_{i,k,j}}$$

where $N_{i,k,j,l}$ is the number of occurrences of the pattern $(x_t = \lambda_i, y_t = \omega_k, x_{t+1} = \lambda_j, y_{t+1} = \omega_l)$ in the $L$ chains of the training set. Then $N_{i,k,j} = \sum_{\omega_l \in \Omega_Y} N_{i,k,j,l}$, $N_{i,j} = \sum_{\omega_k \in \Omega_Y} N_{i,k,j}$, $M_{i,k} = \sum_{\lambda_j \in \Lambda_X} N_{i,k,j}$, and $N_i = \sum_{\lambda_j \in \Lambda_X} N_{i,j}$. Finally, $N^0_i$ and $N^0_{i,k}$ are respectively the number of times $x_1 = \lambda_i$ and $z_1 = (\lambda_i, \omega_k)$ in the $L$ chains of the training set.
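A minimal sketch of this counting procedure is given below, assuming index-encoded sentences and dense arrays; the names and array layouts are ours, and in practice sparse dictionaries are preferable since the dense emission table has $N^2 M^2$ entries.

```python
import numpy as np

def estimate_pmc(corpus, N, M):
    """Empirical-frequency estimation of the PMC parameters from a list of
    (labels, words) index sequences."""
    pi = np.zeros((N, M))            # counts N0_{i,k}
    trans = np.zeros((N, M, N))      # counts N_{i,k,j}
    emis = np.zeros((N, N, M, M))    # counts N_{i,k,j,l}, stored as (i,j,k,l)
    for x, y in corpus:
        pi[x[0], y[0]] += 1
        for t in range(len(x) - 1):
            trans[x[t], y[t], x[t + 1]] += 1
            emis[x[t], x[t + 1], y[t], y[t + 1]] += 1
    # Normalize counts into probabilities; unseen patterns stay at zero and
    # are handled at inference time by the HMC "downgrade".
    pi /= max(len(corpus), 1)                                    # N0_{i,k}/L
    A = trans / np.maximum(trans.sum(axis=2, keepdims=True), 1)  # N_{i,k,j}/M_{i,k}
    B = emis / np.maximum(emis.sum(axis=3, keepdims=True), 1)    # N_{i,k,j,l}/N_{i,k,j}
    return pi, A, B
```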
When $\mathrm{card}(\Lambda_X)$ or $\mathrm{card}(\Omega_Y)$ is huge, some patterns may not be observed in the training set, which implies that the corresponding estimation is zero. This is the case in NLP, where $\Omega_Y$ is the space of possible written words. To minimize this problem, especially in the PMC, we accept to partially "downgrade" the PMC to an HMC when necessary. This original process is represented in Figure 3. It consists of approximating the forward and backward probabilities of the PMC by the HMC ones when those of the PMC equal 0.
Moreover, it is essential to note that "online" learning is fast and easy in those models. If new sentences enrich the training set, updating parameters only requires us to "add" information to already determined values without retraining the model on the complete dataset.
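For instance, under the count-based estimation sketched above, such an online update could amount to the following hypothetical helper, which operates on the raw count arrays before normalization:

```python
# Hypothetical incremental update: when a new labeled sentence (x, y) arrives,
# the raw count arrays are simply incremented and the conditional probabilities
# renormalized, with no full retraining on the complete dataset.
def add_sentence(pi_counts, trans_counts, emis_counts, x, y):
    pi_counts[x[0], y[0]] += 1
    for t in range(len(x) - 1):
        trans_counts[x[t], y[t], x[t + 1]] += 1
        emis_counts[x[t], x[t + 1], y[t], y[t + 1]] += 1
```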
Text segmentation tasks with Markov Chain models
Let us now have a few words on the three text segmentation tasks that we are interested in: POS Tagging, Chunking, and NER. For those three tasks, we observe the words of sentences, for which we have to find the specific labels.

POS Tagging consists in labeling each word with its grammatical function, such as verb, determiner, or adjective. For example, $(y_1, y_2, \ldots, y_{12})$ = (John, likes, the, blue, house, at, the, end, of, the, street, .) has the tags $(x_1, x_2, \ldots, x_{12})$ = (Noun, Verb, Det, Adj, Noun, Prep, Det, Noun, Prep, Det, Noun, Punct). The performance of POS Tagging is evaluated with accuracy.

NER consists in discriminating the "entities" among the words of sentences. The entities can be a person (PER), a localization (LOC), or an organization (ORG). For example, (John, works, at, IBM, in, Paris, .) can have the entities (PER, O, O, ORG, O, LOC, O), where O stands for "no entity". The set of entities depends on the use case, for example, finding the names of proteins or DNA in medical data as in [24]. The performance for NER is evaluated with the $F_1$ score [25].
Chunking consists of segmenting a sentence into its sub-constituents, such as noun phrases (NP), verb phrases (VP), and prepositional phrases (PP). For example, (We, saw, the, yellow, dog, .) has the chunks (NP, VP, NP, NP, NP, O). Like NER, the performance is evaluated with the $F_1$ score. More details about these tasks can be found in the NLTK book [26].
We chose these three tasks as they can be the basis of more complex ones, for example, text classification when the corpus is small and the architecture is limited, preventing the deployment of a heavy deep learning model. This can happen, for example, with email classification, where models have to process hundreds or thousands of emails per second with at most 4 GB of RAM, given a relatively small training set. One way to handle this problem is to construct a text segmentation model, then another one using the predicted labels to make a prediction. The three label types we consider, POS tags, chunk tags, and entities, are particularly useful for constructing this type of architecture, which motivates the choice of these tasks.
However, when working in the context of text segmentation, the cardinality of $\Omega_Y$ is very large, and managing unknown patterns (not in the training set) is then a real challenge. Although the Markov properties of our processes partially help to tackle this problem, they are not enough. PMC and HMC have to be adapted to this problem. Our goal is also to keep a very short training time. First of all, we can use the "downgrading" process from PMC to HMC described in Section II.B to alleviate this problem when PMC is used.
Moreover, for HMC, we look for extra information in the unknown observed words. Given a word $\omega$, we introduce the following functions:

• $u(\omega) = 1$ if the first letter of $\omega$ is uppercase, 0 otherwise;
• $h(\omega) = 1$ if $\omega$ contains a hyphen, 0 otherwise;
• $f(\omega) = 1$ if $\omega$ is the first word of the sentence, 0 otherwise;
• $d(\omega) = 1$ if $\omega$ contains a digit, 0 otherwise;
• $s_m(\omega)$ = the suffix of length $m$ of $\omega$.

Then, for $\omega_k \in \Omega_Y$ unknown, and $\forall \lambda_i \in \Lambda_X$, we approximate $b_i(k)$ by

$$b_i(k) \approx p(u(\omega_k), h(\omega_k), f(\omega_k), d(\omega_k), s_3(\omega_k) | x_t = \lambda_i),$$

which is also estimated with empirical frequencies. If $s_3(\omega_k)$ is unknown in the training set, we use $s_2(\omega_k)$, otherwise $s_1(\omega_k)$, or finally $s_0(\omega_k)$. This original approximation method allows us to improve the segmentation of unknown words while keeping a very short training time with PMC and HMC. The choice of features depends on the language; these are selected for English, and one can add features according to language characteristics.
It is important to note that the choice of the features is crucial, as we cannot use arbitrary features with Markov chain models [23][27][28]. It protects the model from a "second level" of unknown patterns, and it avoids having our new approximation of $b_i(k)$ equal 0 too many times.
Experiments
To calibrate the performance of our models, we compare them to benchmark models with no extra-data: Maximum Entropy Markov Models [27], Recurrent Neural Networks [29][30], the Long Short-Term Memory (LSTM) network [31], the Gated Recurrent Unit [32], the Conditional Random Field (CRF) [33], and the BiLSTM-CRF [34]. We present the results of the best one with appreciable training and execution times, the CRF, using the same features described in Section 3, to which we add the suffix of length 4 and prefixes of length 1 to 4. Comparing results to this model is particularly relevant, as it is one of the most applied models for these tasks when no extra-data are used. The experiments are done in Python. We code our own Markov chain models, and we use the CRFsuite [35] library for the CRF models. We use reference datasets for every task: CoNLL 2000 [36] for Chunking, UD English [37] for POS Tagging, and CoNLL 2003 [38] for NER². In addition, we take advantage of having enough data to use CoNLL 2000 and CoNLL 2003 for POS Tagging, and CoNLL 2003 for Chunking. We use the universal tagset [39] for each POS Tagging experiment.

The results are presented in Table I for POS Tagging, Table II for NER, and Table III for Chunking. Training times on CPU with 8 GB of RAM are presented in Table IV. HMC has the lowest training times, but its performances are significantly worse than those of the other models. The difference can reach 20%. As HMC is a particular case of PMC, these results were expected. They illustrate the interest in using the PMC model rather than the HMC one. The most relevant results are those regarding the comparison between PMC and CRF. They have equivalent scores, with PMC having better results in 4 of the 6 experiments. In general, PMC is better for known words and has worse results for unknown ones. The main difference between the two models rests on the time required to train them. PMC is about 30 times faster to train than CRF, and adding new data does not entail retraining the models. Regarding execution time, we make the same observation, with PMC about ten times faster. This is our work's central goal: to have the best and fastest segmentation model possible with no extra-data, and PMC seems to achieve this objective.
Conclusion
PMC achieves good performances with no extra-data for these tasks, with results equivalent to the CRF's. The main advantage of the PMC is its training and execution time, much shorter for text segmentation. This confirms the interest of using PMC to build an extra-light model for text segmentation, which can then be used for more complex tasks without being restrictive for deployment.
We are conscious that our models are not as competitive as the best deep learning ones, especially for NER. The latter use a large amount of data to construct word embeddings [40][41][42], which become the input of models like the BiLSTM-CRF, for example, to achieve state-of-the-art results [43]. On the one hand, using these embeddings is not possible with the PMC and the Viterbi or Forward-Backward algorithms, as this model cannot use observations with arbitrary features. On the other hand, these neural models are heavy and impossible to deploy without a significant configuration.
Figure 1: Probabilistic graphical model of HMC.
Figure 2: Probabilistic graphical model of PMC.
Figure 3: Graphical model of a partially "downgraded" PMC. The couples of observations $(y_2, y_3)$ and $(y_5, y_6)$ never appeared in this order in the training set.
Table 1: HMC, CRF, and PMC for POS Tagging with error rates; KW stands for Known Words and UW for Unknown Words.

Dataset          HMC      CRF      PMC
CoNLL 2000       2.96%    2.47%    2.32%
CoNLL 2000 KW    1.94%    1.72%    1.27%
CoNLL 2000 UW    16.54%   12.47%   16.41%
CoNLL 2003       5.29%    4.28%    4.71%
CoNLL 2003 KW    4.03%    3.21%    3.40%
CoNLL 2003 UW    15.30%   12.79%   15.16%
UD English       8.13%    6.95%    7.16%
UD English KW    6.07%    5.46%    5.00%
UD English UW    30.73%   23.25%   30.78%
Table 2: HMC, CRF, and PMC for NER with F1 scores; KW stands for Known Words and UW for Unknown Words.

Dataset          HMC     CRF     PMC
CoNLL 2003       78.44   79.16   79.52
CoNLL 2003 KW    87.07   87.41   88.47
CoNLL 2003 UW    56.67   58.67   56.87
Table 3: HMC, CRF, and PMC for Chunking with F1 scores; KW stands for Known Words and UW for Unknown Words.

Dataset          HMC     CRF     PMC
CoNLL 2000       92.72   93.05   94.49
CoNLL 2000 KW    93.18   93.35   95.09
CoNLL 2000 UW    87.45   89.63   87.58
CoNLL 2003       94.30   94.79   95.61
CoNLL 2003 KW    94.65   94.92   96.17
CoNLL 2003 UW    91.95   93.88   91.85
Table 4: Training times of the different models on CPU with 8 GB of RAM.

Dataset             HMC    CRF    PMC
CoNLL 2000 POS      1.3s   140s   5s
CoNLL 2000 Chunk    1.3s   140s   5s
CoNLL 2003 POS      1.5s   180s   5.5s
CoNLL 2003 NER      1.5s   260s   5.5s
CoNLL 2003 Chunk    1.5s   160s   5.5s
UD English POS      1.7s   185s   6.3s
² All these datasets are freely available: CoNLL 2003 upon request at https://www.clips.uantwerpen.be/conll2003/ner/, UD English at https://universaldependencies.org/#language-, and CoNLL 2000 via the NLTK [26] library in Python.
The training time, the execution time, and the carbon footprint of PMC are significantly lower, which is our project's main objective. As a perspective, PMC has been extended with the Triplet Markov Chain (TMC) model[44][45][46]. We can apply this extension to observe the possible improvements, especially for NER, while keeping Markov models' relevant properties.
[1] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning, MIT Press, 2016. http://www.deeplearningbook.org
[2] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, pp. 436-444, 2015.
[3] P. Rajpurkar, J. Zhang, K. Lopyrev, and P. Liang, "SQuAD: 100,000+ questions for machine comprehension of text," arXiv preprint arXiv:1606.05250, 2016.
[4] A. L. Maas, R. E. Daly, P. T. Pham, D. Huang, A. Y. Ng, and C. Potts, "Learning word vectors for sentiment analysis," in Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Portland, Oregon, USA, June 2011, pp. 142-150.
[5] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[6] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, "ALBERT: A lite BERT for self-supervised learning of language representations," arXiv preprint arXiv:1909.11942, 2019.
[7] E. Strubell, A. Ganesh, and A. McCallum, "Energy and policy considerations for deep learning in NLP," arXiv preprint arXiv:1906.02243, 2019.
[8] R. L. Stratonovich, "Conditional Markov processes," in Non-linear Transformations of Stochastic Processes, Elsevier, 1965, pp. 427-453.
[9] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," The Annals of Mathematical Statistics, vol. 37, no. 6, pp. 1554-1563, 1966.
[10] O. Cappé, E. Moulines, and T. Rydén, Inference in Hidden Markov Models, Springer Science & Business Media, 2006.
[11] L. Rabiner and B. Juang, "An introduction to hidden Markov models," IEEE ASSP Magazine, vol. 3, no. 1, pp. 4-16, 1986.
[12] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proceedings of the IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[13] Y. Ephraim and N. Merhav, "Hidden Markov processes," IEEE Transactions on Information Theory, vol. 48, no. 6, pp. 1518-1569, 2002.
[14] T. Brants, "TnT: a statistical part-of-speech tagger," in Proceedings of the Sixth Conference on Applied Natural Language Processing, Association for Computational Linguistics, 2000, pp. 224-231.
[15] S. Morwal, N. Jahan, and D. Chopra, "Named entity recognition using hidden Markov model (HMM)," International Journal on Natural Language Computing (IJNLC), vol. 1, no. 4, pp. 15-23, 2012.
[16] A. Ekbal, S. Mondal, and S. Bandyopadhyay, "POS tagging using HMM and rule-based chunking," The Proceedings of SPSAL, vol. 8, no. 1, pp. 25-28, 2007.
[17] W. Pieczynski, "Pairwise Markov chains," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 25, no. 5, pp. 634-639, May 2003.
[18] S. Derrode and W. Pieczynski, "Signal and image segmentation using pairwise Markov chains," IEEE Transactions on Signal Processing, vol. 52, no. 9, pp. 2477-2489, Sep. 2004.
[19] I. Gorynin, H. Gangloff, E. Monfrini, and W. Pieczynski, "Assessing the segmentation performance of pairwise and triplet Markov models," Signal Processing, vol. 145, Dec. 2017.
[20] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. R. Salakhutdinov, and Q. V. Le, "XLNet: Generalized autoregressive pretraining for language understanding," in Advances in Neural Information Processing Systems, 2019, pp. 5753-5763.
[21] A. Akbik, D. Blythe, and R. Vollgraf, "Contextual string embeddings for sequence labeling," in COLING 2018, 27th International Conference on Computational Linguistics, 2018, pp. 1638-1649.
[22] A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Transactions on Information Theory, vol. 13, no. 2, pp. 260-269, 1967.
[23] D. Jurafsky and J. H. Martin, Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition.
[24] T. Ohta, Y. Tateisi, J.-D. Kim, H. Mima, and J. Tsujii, "The GENIA corpus: An annotated research abstract corpus in molecular biology domain," in Proceedings of the Second International Conference on Human Language Technology Research, 2002, pp. 82-86.
[25] L. Derczynski, "Complementarity, F-score, and NLP evaluation," in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), 2016, pp. 261-266.
[26] E. Loper and S. Bird, "NLTK: the Natural Language Toolkit," arXiv preprint cs/0205028, 2002.
[27] A. McCallum, D. Freitag, and F. C. N. Pereira, "Maximum entropy Markov models for information extraction and segmentation," in ICML, vol. 17, 2000, pp. 591-598.
[28] C. Sutton and A. McCallum, "An introduction to conditional random fields for relational learning," in Introduction to Statistical Relational Learning, vol. 2, pp. 93-128.
[29] R. Jozefowicz, W. Zaremba, and I. Sutskever, "An empirical exploration of recurrent network architectures," in International Conference on Machine Learning, 2015, pp. 2342-2350.
[30] Z. C. Lipton, J. Berkowitz, and C. Elkan, "A critical review of recurrent neural networks for sequence learning," arXiv preprint arXiv:1506.00019, 2015.
[31] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[32] J. Chung, C. Gulcehre, K. Cho, and Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv preprint arXiv:1412.3555, 2014.
[33] J. Lafferty, A. McCallum, and F. C. N. Pereira, "Conditional random fields: Probabilistic models for segmenting and labeling sequence data," 2001.
[34] Z. Huang, W. Xu, and K. Yu, "Bidirectional LSTM-CRF models for sequence tagging," arXiv preprint arXiv:1508.01991, 2015.
[35] N. Okazaki, "CRFsuite: a fast implementation of Conditional Random Fields (CRFs)," 2007.
[36] E. F. Sang and S. Buchholz, "Introduction to the CoNLL-2000 shared task: Chunking," arXiv preprint cs/0009008, 2000.
[37] J. Nivre, M.-C. de Marneffe, F. Ginter, Y. Goldberg, J. Hajic, C. D. Manning, R. McDonald, S. Petrov, S. Pyysalo, N. Silveira, et al., "Universal Dependencies v1: A multilingual treebank collection," in Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), 2016, pp. 1659-1666.
Introduction to the conll-2003 shared task: Language-independent named entity recognition. F Erik, Fien Sang, De Meulder, cs/0306050arXiv preprintErik F Sang and Fien De Meulder. Introduction to the conll-2003 shared task: Language-independent named entity recognition. arXiv preprint cs/0306050, 2003.
A universal part-of-speech tagset. Slav Petrov, Dipanjan Das, Ryan Mcdonald, arXiv:1104.2086arXiv preprintSlav Petrov, Dipanjan Das, and Ryan McDonald. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086, 2011.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher Manning, Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). the 2014 conference on empirical methods in natural language processing (EMNLP)Jeffrey Pennington, Richard Socher, and Christopher Manning. Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543, 2014.
Enriching word vectors with subword information. Piotr Bojanowski, Edouard Grave, Armand Joulin, Tomas Mikolov, Transactions of the Association for Computational Linguistics. 5Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146, 2017.
Pooled contextualized embeddings for named entity recognition. Alan Akbik, Tanja Bergmann, Roland Vollgraf, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies1Alan Akbik, Tanja Bergmann, and Roland Vollgraf. Pooled contextualized embeddings for named entity recogni- tion. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 724-728, 2019.
FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP. Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, Roland Vollgraf, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations). the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations)Alan Akbik, Tanja Bergmann, Duncan Blythe, Kashif Rasul, Stefan Schweter, and Roland Vollgraf. FLAIR: An Easy-to-Use Framework for State-of-the-Art NLP. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 54-59, 2019.
Chaınes de Markov triplet. Wojciech Pieczynski, Comptes Rendus Mathematique. 3353Wojciech Pieczynski. Chaınes de Markov triplet. Comptes Rendus Mathematique, 335(3):275-278, 2002.
Modeling Repayment Behavior of Consumer Loan in Portfolio across Business Cycle: A Triplet Markov Model Approach. Shou Chen, Xiangqian Jiang, Complexity. Shou Chen and Xiangqian Jiang. Modeling Repayment Behavior of Consumer Loan in Portfolio across Business Cycle: A Triplet Markov Model Approach. Complexity, 2020, 2020.
An adaptive and on-line imu-based locomotion activity classification method using a triplet markov model. Haoyu Li, Stéphane Derrode, Wojciech Pieczynski, Neurocomputing. 362Haoyu Li, Stéphane Derrode, and Wojciech Pieczynski. An adaptive and on-line imu-based locomotion activity classification method using a triplet markov model. Neurocomputing, 362:94-105, 2019.
| [] |
[
"Short-answer scoring with ensembles of pretrained language models",
"Short-answer scoring with ensembles of pretrained language models"
] | [
"Christopher Ormerod christopher.ormerod@cambiumassessment.com \nCambium Assessment, Inc\n1000 Thomas Jefferson St20007WashingtonN.W., D.C\n"
] | [
"Cambium Assessment, Inc\n1000 Thomas Jefferson St20007WashingtonN.W., D.C"
] | [] | We investigate the effectiveness of ensembles of pretrained transformer-based language models on short answer questions using the Kaggle Automated Short Answer Scoring dataset. We fine-tune a collection of popular small, base, and large pretrained transformer-based language models, and train one feature-based model on the dataset with the aim of testing ensembles of these models. We used an early stopping mechanism and hyperparameter optimization in training. We observe that, generally, the larger models perform slightly better; however, they still fall short of state-of-the-art results on their own. Once we consider ensembles of models, there are ensembles of a number of large networks that do produce state-of-the-art results; however, these ensembles are too large to realistically be put in a production environment. | null | [
"https://arxiv.org/pdf/2202.11558v1.pdf"
] | 247,058,701 | 2202.11558 | 296910d56a31f8baf506b300740775e3eb91f701 |
Short-answer scoring with ensembles of pretrained language models
23 Feb 2022 January 2022
Christopher Ormerod christopher.ormerod@cambiumassessment.com
Cambium Assessment, Inc
1000 Thomas Jefferson St20007WashingtonN.W., D.C
Short-answer scoring with ensembles of pretrained language models
23 Feb 2022 January 2022
We investigate the effectiveness of ensembles of pretrained transformer-based language models on short answer questions using the Kaggle Automated Short Answer Scoring dataset. We fine-tune a collection of popular small, base, and large pretrained transformer-based language models, and train one feature-based model on the dataset with the aim of testing ensembles of these models. We used an early stopping mechanism and hyperparameter optimization in training. We observe that, generally, the larger models perform slightly better; however, they still fall short of state-of-the-art results on their own. Once we consider ensembles of models, there are ensembles of a number of large networks that do produce state-of-the-art results; however, these ensembles are too large to realistically be put in a production environment.
Introduction
Free-form constructed textual responses generally fall into one of two categories: essays and short answers. These two categories are not just distinguished by the average response length; they are also assessed very differently [4]. Rubrics for essays often take grammatical rules, organization, and argumentation into consideration, whereas rubrics for short answer questions tend to assess specific analytic or comprehension skills. This means that a response is not penalized if there are multiple spelling or grammatical errors present. Automated Short Answer Scoring (ASAS) and Automated Essay Scoring (AES) are two classes of techniques that utilize statistical models to approximate the assessment of constructed textual responses. Given the difference in rubrics, the performance of particular models and the importance of particular features vary greatly between the two settings.
Traditionally, statistical models for AES have been based on bag-of-words (BoW) methods which combine frequency-based and hand-crafted features [2,8,17]. As neural networks were developed in other areas of NLP, they became increasingly adopted for AES [1,5,29]. One of the most important developments in NLP has been the effectiveness of transformerbased pretrained language models such as the Bidirectional Encoder Representation by Transformers (BERT) model [6] which can be fine-tuned to a range of downstream tasks. The effectiveness of these models on the Kaggle essay dataset has been investigated by numerous authors [16,18,30]. More recently, we saw a combination of hand-crafted features and language models define the state-of-the-art on this dataset [30].
The most effective methods for short answer questions differ depending on the types of responses. In the case of the Powergrading dataset, where there are fewer than twenty words per response on average, a simple yet effective clustering technique is sufficient [3]. In the case of the SemEval-2013 Joint Student Response Analysis (SRA) task, the current state-of-the-art was achieved by fine-tuning BERT models [27]. This work is concerned with the Kaggle Short Answer Scoring (KSAS) dataset 1 [24]. Each prompt consists of a passage and a question that asks the student to describe or explain aspects of the passage using evidence [24]. Given the semantic nature of descriptions and explanations, we expect well-trained neural networks to perform well in this task. Despite the advancements of neural networks, the current state-of-the-art for this dataset has been achieved by the application of random forest classifiers to a set of rule-based features [10]. What is remarkable from a production standpoint is that the calculations for these models can be done in a low-resource setting.
The goal of this short note is to explore how some of the most popular language models perform when subjected to the KSAS dataset. We expected that language models on their own could surpass previous results, but when it comes to single models, on average, this is not the case. We are able to show that particular ensembles are capable of exceeding this benchmark, but the computational cost would be prohibitive from a production standpoint. In this sense, this work is the antithesis of [10] in that we simply bludgeon the problem to death with computational power. Even in doing so, there remain a few prompts on which we fall drastically short of the methods in [10]. Conversely, there are some other prompts on which we see even our most efficient models perform comparably to or even exceed the rule-based methods, which we believe is sufficient to show that these methods and the results of this paper are of interest.
This paper is outlined as follows: in §2 we specify the way in which we fine-tune, train, and ensemble the pretrained transformer-based language models and feature-based models; in §3 we present the results of the various models produced; and in §4 we discuss some corollaries of this work in terms of future directions.
1 https://www.kaggle.com/c/asap-sas
Method
Since BERT was introduced in [6], a veritable cornucopia of language models has been introduced, each varying either the underlying architecture of BERT or the way in which it was trained. The General Language Understanding Evaluation (GLUE) benchmark has been one of several benchmarks used to evaluate the performance of these language models on a range of classification, generation, and understanding tasks [31]. We expect that some models, due to their architectural changes or training methods, should perform differently from other models. We compare the models by applying the same fine-tuning procedure to each pretrained model on each prompt in the KSAS dataset.
We start by introducing the metrics typically used to evaluate model performance in most production systems in AES and ASAS [32]. The primary statistic used in automated assessment is the Cohen's quadratic weighted kappa (QWK) score, defined by
$$\kappa = \frac{\sum_{i,j} w_{ij} x_{ij}}{\sum_{i,j} w_{ij} m_{ij}}, \tag{1}$$

where $x_{ij}$ is the observed probability, $m_{ij} = x_{ij}(1 - x_{ij})$, and

$$w_{ij} = 1 - \frac{(i-j)^2}{(k-1)^2},$$
where k is the number of classes. One interpretation of this statistic is that it represents the level of agreement between two scorers after discounting agreement by chance. In production systems, we often require that the QWK between the true score and the scores predicted by the model is within 0.1 of the QWK between two humans. In an educational setting, most scoring engines are also required to have a standardized mean difference (SMD) with the final score below 0.15, and the discrepancy between the IRR accuracy and the engine's accuracy must be within some limit [32].

Our first step is to isolate a development set to be used for an early stopping mechanism and hyperparameter tuning. This set was chosen at random without any stratification. The properties of the development set are specified in Table 1. Because the original test set has been withheld by the organizers of the competition, we use the public test set to validate the models. Unfortunately, this means that we do not have second reads for the validation set; hence, we cannot assess whether the results provided satisfy the operational criteria above [32].

The training of a single model, whether alone or in an ensemble, was performed using the AdamW optimizer [15] with a linear learning rate scheduler. The loss function used was the usual binary cross-entropy function. We train each model for 20 epochs and select the model over that range with the best QWK on the development set. To select the learning rate and the batch size, we used the Tree-structured Parzen Estimator (TPE) algorithm [7] with 10 trials, with batch sizes between 6 and 12 and learning rates between 5e-6 and 1e-4. We used the Optuna implementation of the TPE algorithm [28]. The source code we used to train and score for this project will be made available in a future version of this paper.
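To make the metric concrete, the following is a minimal NumPy sketch of quadratic weighted kappa computed from two integer score vectors. It is our own illustration (equivalent to sklearn.metrics.cohen_kappa_score with weights="quadratic"), not the evaluation code used for the results below.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, k):
    """Quadratic weighted kappa between two integer score vectors on a k-point scale."""
    rater_a, rater_b = np.asarray(rater_a), np.asarray(rater_b)
    # Observed joint distribution over score pairs.
    observed = np.zeros((k, k))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1
    observed /= observed.sum()
    # Expected joint distribution under independence (outer product of marginals).
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights.
    i, j = np.indices((k, k))
    d = (i - j) ** 2 / (k - 1) ** 2
    return 1.0 - (d * observed).sum() / (d * expected).sum()

# Example: two raters scoring five responses on a 0-3 scale.
print(quadratic_weighted_kappa([0, 1, 2, 3, 2], [0, 1, 1, 3, 2], k=4))
```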
To keep the code accessible, we chose a range of models that are popular, accessible through a single API (huggingface.co), and achieve high GLUE scores [31]. The selection of models, their references, their approximate GLUE scores, and their respective sizes in millions of parameters are given in Table 2.
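As an illustration of the single-API point, the sketch below loads a few of the models in Table 2 through the transformers library. The checkpoint names are the public Hugging Face identifiers, which may differ from the exact checkpoints used in this study.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Public checkpoint names; the exact checkpoints used here may differ.
checkpoints = ["bert-base-uncased", "roberta-large", "google/electra-large-discriminator"]

for name in checkpoints:
    tokenizer = AutoTokenizer.from_pretrained(name)
    # num_labels is set to the number of score points for the prompt.
    model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=3)
```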
While each of these models is a transformer-based pretrained language model, they differ in some key aspects. The RoBERTa models were trained for longer and remove the next-sentence-prediction task of the original BERT [14]. A novel aspect of the ALBERT models is the weight-sharing mechanism [13]. This mechanism was used in the base version to drastically reduce model size, and in the large version to increase the size of the hidden units and feed-forward layers while keeping the parameter count manageable. The Electra models differ greatly in the way they were trained: they are trained as a generator and discriminator pair, in which one model generates tokens in masked positions while the other attempts to distinguish between generated and true tokens [11]. The DeBERTa model uses a disentangled attention mechanism in which the relative positions of words are considered, but not the absolute positions. Furthermore, the DeBERTa model is trained similarly to Electra, except that the goal of the discriminator in the case of DeBERTa is to detect replaced tokens rather than determine whether known tokens are generated or true [9].
The difference between XLNet and BERT is that in XLNet the tokens are essentially predicted simultaneously, by considering all permutations of the token prediction order [34]. The Convolutional BERT model is novel in that it replaces fully connected layers in the feed-forward mechanisms of BERT with convolutional layers that can be computed more efficiently [12]. The MobileBERT architecture uses linear layers called bottlenecks to reduce the dimension of the attention matrix computations, which is efficient and effective due to the rank of the attention matrix [26]. Lastly, the Distilled BERT model has 6 layers instead of 12 and is trained using knowledge distillation; the authors claim it is a much smaller and faster version of BERT with 97% of the performance [22].
Inspired by previous work on both essays and short answer questions, we wanted to ensemble a range of models with a feature-based model that captures the important components of the feature-based models of [10]. It has been our experience that ensembles of models of different natures yield better gains in performance than ensembles of similar models. It is in this vein that we consider, for a feature model, several key analogues of the features that were considered important. These features are summarized in Table 3.
Firstly, we consider the embedding of documents delivered by the sentence embedding given by Sentence-BERT [20], a term-frequency inverse-document-frequency (TF-IDF) model, a set of text overlaps, and the frequencies of key n-grams for n between 1 and 3. To evaluate text overlaps, we use minutia: any spaces, numbers, or punctuation are removed from the text and the prompt, at which point we count all overlapping strings between 5 and 20 characters, making a total of 15 dimensions. To evaluate "near matches" we use the text difference library difflib (https://docs.python.org/3/library/difflib.html). There is a threshold for the near-matching cutoff that ranges between 0.5, where half the characters are correct, and 1, in which case all the characters are correct. This threshold is varied in accordance with hyperparameter tuning choices. The reason for this approach is that short answer scoring should disregard spelling; hence, the near matches are a way to make the keyword features more robust to spelling errors. Lastly, we use a number of typical statistics concerned with word and sentence lengths. We normalize any features so that they adhere to a normal distribution with mean 0 and a standard deviation of 1.
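A minimal sketch of two of these features follows. The helper names, the exact substring-length range, and the token-level treatment of key terms are our own choices for illustration, not the production feature code.

```python
import difflib
import re

def minutia_overlap(response: str, prompt: str) -> list:
    """Count overlapping substrings after stripping spaces, digits, and
    punctuation; one count per substring length (15 lengths, matching the
    15 dimensions reported above)."""
    clean = lambda s: re.sub(r"[^a-z]", "", s.lower())
    r, p = clean(response), clean(prompt)
    counts = []
    for n in range(5, 20):  # substring lengths; the exact range is our assumption
        substrings = {r[i:i + n] for i in range(max(0, len(r) - n + 1))}
        counts.append(sum(1 for s in substrings if s in p))
    return counts

def near_match_counts(response_tokens, key_terms, cutoff=0.8):
    """Count near matches of key terms with difflib so that misspelled
    answers still trigger the keyword features; cutoff in [0.5, 1] is a
    tuned hyperparameter."""
    n = max(1, len(response_tokens))
    return [len(difflib.get_close_matches(term, response_tokens, n=n, cutoff=cutoff))
            for term in key_terms]
```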
Once all the features are compiled, we summarize the document into a single vector of features of dimension between 579 and 779. A multilayer perceptron model is fit to the training set and the performance is then evaluated on the development set with some fixed learning rate and batch size. The learning rate, batch size, TF-IDF dimension, and cutoff are subjected to a hyperparameter optimization with 20 trials. The goal of the feature model is to produce something with sufficient performance, and distinct enough in nature, to ensemble with our pretrained language models. Given a selection of models, the structure of our ensemble is simple; we take the log-probabilities of each model on the development set as the training set of a logistic regression. The output of the logistic regression is considered to be the ensemble output. In this regime, the test set is only considered at this stage. The structure that we use for ensembling is depicted in Figure 1.
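A sketch of this ensembling step with scikit-learn is given below. The variable names are illustrative, and we assume the per-model log-probabilities have already been computed on the development and test sets.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ensemble_predict(log_probs_dev, y_dev, log_probs_test):
    """log_probs_dev[m] and log_probs_test[m] are arrays of shape
    (n_examples, n_classes) holding the log-probabilities of model m.
    A logistic regression is fit on the concatenated development-set
    log-probabilities and applied to the test set."""
    X_dev = np.concatenate(log_probs_dev, axis=1)
    X_test = np.concatenate(log_probs_test, axis=1)
    clf = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
    return clf.predict(X_test)
```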
Results
We optimized each model using a Xeon E5-2620 v4 @ 2.10GHz with an Nvidia RTX 8000 with 48Gb of on-board memory, which allowed for the batch sizes used for the larger models. Since the original test set for the competition has not been made available, we use the results on the publicly available test set, as was done in comparable studies. The individual models and the ensemble results are shown in Tables 4 and 5.

From a production standpoint, it is not just important that the QWK is high relative to the agreement between two human raters; it is also important that the SMD is within appropriate bounds [32]. This is where the feature-based model does very well. In this situation, it is more important that the SMD is low across models, and we find that the ensembles do remarkably well. If we consider a typical violation to be a model with an SMD of over 0.15, most violations occur with small models. Generally speaking, as we found with QWK, the SMDs are far better for large models than for small models, but ensembles seem to do even better. Even ensembles of models with large SMDs have much better controlled SMDs than the individual models in the ensemble. If nothing else, this points to the fact that ensembling in the way we consider here gives us a way of controlling SMDs.
We find it interesting to note that there are a number of items on which pretrained models have succeeded where rule-based methods did not do as well. For example, several individual large pretrained language models exceed the state-of-the-art for prompts 2 and 8, yet neither pretrained models nor their ensembles come close to the results of [10] for item 10. It should be noted that even our naive feature model performed better than many of the pretrained language models tested.
The three best models on the development set were the large RoBERTa model and the large and base Electra models, in that order, even though that order is not reflected in the test set. When we ensemble the two and three models that perform best on average, we obtain results that are on par with and even slightly exceed the state-of-the-art results of [10]. That said, the combination of these models is a model with approximately 875 million parameters. We do not believe it is a coincidence that the best models on both the test set and development set were among the largest, highest-scoring models with respect to the GLUE benchmark.
Discussion
It seems to be the case that pretrained transformer-based language models on their own can generally be outperformed on automated short answer scoring by a mixture of regular expressions and other classical classifiers [10]. While we managed to exceed benchmarks with an ensemble of three very large networks, doing so with such huge computational power is a little dissatisfying. It is clear that such a solution is not feasible from a production standpoint.
We firmly believe that the ideal solution, from a production and accuracy standpoint, would be the ensemble of an efficient network like [16] and a rule-based method like [10]. Firstly, this would require more careful consideration of the features used, and secondly, a careful consideration of how to incorporate these features into the score prediction. For example, concatenating the features to the set of features used by the classification head might yield better results [30]. We have generally found that ensembles of models of different natures yield better gains than ensembles of similar models.

In terms of architectures, there is a range of models we did not consider that would be worth mentioning, such as Reformer, Longformer, FNet, Linformer, Performer, and MPNet. These are all variations on the transformer-based architecture that approximate attention using architectural differences that may prove to be an advantage in short answer scoring.
Lastly, we mention that there is still some work to be done in linking the output of the language model to the rubric. Most work on explainable AI has focused on token-level importance; however, more semantically complex elements of a rubric are not simply stated in terms of the presence or absence of particular tokens. Knowing which features work well might be useful in determining interpretations of certain vectors in the feature space used to assign scores. This would be an important step in establishing a validity argument for these methods beyond their pure statistical performance.
Figure 1: The structure of our general ensemble of models. The linear transformations lin_i are the outputs of the classification heads that give log-probabilities.
Table 3 (feature rows):
Sentence-BERT — output of the encoder component of Sentence-BERT.
TF-IDF, 100-300 — the transformation induced by the largest 300 eigenvectors of the TF-IDF training matrix.
text-overlap, 15 — the number of minutia that intersect with the prompt.
key words, bigrams and trigrams, 90 — the key terms are extracted and near matches in the target text are counted.
text-stats, 10 — a number of key statistics like length, average word length, etc.
Table 1: The properties of the training and development set used in the training procedure.
Table 2: A list of the various models used in this study, in addition to the number of parameters and GLUE score. We have designated three different sizes as base
Table 3: A summary of the set of features we considered in the feature model.
Table 4: The results of various small, base, and large language models. The three best models on the development set are indicated with 1, 2, and 3 alongside their names. Ensemble 2 included the DeBERTa and the large RoBERTa, while ensemble 3 included the large Electra models.
Table 5: The SMD results of the various small, base, and large language models used.

Model                    1     2     3     4     5     6     7     8     9     10
Features               0.000 0.036 0.109 0.095 0.079 0.030 0.082 0.021 0.030 0.073
ALBERT                 0.002 0.016 0.064 0.069 0.000 0.032 0.018 0.026 0.002 0.027
BERT (L)               0.044 0.129 0.059 0.038 0.003 0.063 0.064 0.077 0.052 0.074
Electra (L)            0.041 0.142 0.044 0.043 0.021 0.036 0.068 0.069 0.065 0.100
RoBERTa (L)            0.025 0.077 0.097 0.208 0.029 0.089 0.043 0.014 0.015 0.086
BERT (base)            0.034 0.009 0.011 0.068 0.030 0.062 0.022 0.124 0.060 0.092
DeBERTa V3 (base)      0.034 0.090 0.101 0.221 0.031 0.071 0.033 0.137 0.039 0.027
Electra (base)         0.056 0.101 0.125 0.090 0.028 0.079 0.043 0.068 0.021 0.117
RoBERTa (base)         0.141 0.132 0.079 0.156 0.039 0.007 0.078 0.047 0.050 0.073
XLNet (base)           0.080 0.082 0.145 0.021 0.036 0.107 0.064 0.122 0.019 0.211
ConvBERT               0.281 0.246 0.454 0.291 0.008 0.134 0.063 0.221 0.026 0.192
DistilledBERT          0.023 0.048 0.102 0.016 0.028 0.050 0.104 0.016 0.037 0.211
Electra (small)        0.057 0.158 0.081 0.057 0.070 0.055 0.103 0.117 0.019 0.099
MobileBERT             0.023 0.091 0.045 0.072 0.060 0.044 0.060 0.106 0.060 0.069
Ensemble (best of 2)   0.004 0.062 0.011 0.093 0.037 0.010 0.024 0.002 0.024 0.047
Ensemble (best of 3)   0.016 0.065 0.004 0.082 0.045 0.022 0.012 0.008 0.017 0.081
Alikaniotis, Dimitrios, Helen Yannakoudakis, and Marek Rei. "Automatic text scoring using neural networks." arXiv preprint arXiv:1606.04289 (2016).
Attali, Yigal, and Jill Burstein. "Automated essay scoring with e-rater® V. 2." The Journal of Technology, Learning and Assessment 4, no. 3 (2006).
Basu, Sumit, Chuck Jacobs, and Lucy Vanderwende. "Powergrading: a clustering approach to amplify human effort for short answer grading." Transactions of the Association for Computational Linguistics 1 (2013): 391-402.
Burstein, Jill, Claudia Leacock, and Richard Swartz. "Automated evaluation of essays and short answers." (2001).
Dong, Fei, Yue Zhang, and Jie Yang. "Attention-based recurrent convolutional neural network for automatic essay scoring." In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pp. 153-162. 2017.
Devlin, Jacob, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. "BERT: Pre-training of deep bidirectional transformers for language understanding." arXiv preprint arXiv:1810.04805 (2018).
Franceschi, Luca, Michele Donini, Paolo Frasconi, and Massimiliano Pontil. "Forward and reverse gradient-based hyperparameter optimization." In International Conference on Machine Learning, pp. 1165-1173. PMLR, 2017.
Foltz, Peter W., Darrell Laham, and Thomas K. Landauer. "The intelligent essay assessor: Applications to educational technology." Interactive Multimedia Electronic Journal of Computer-Enhanced Learning 1, no. 2 (1999): 939-944.
He, Pengcheng, Jianfeng Gao, and Weizhu Chen. "DeBERTaV3: Improving DeBERTa using Electra-style pre-training with gradient-disentangled embedding sharing." arXiv preprint arXiv:2111.09543 (2021).
Kumar, Yaman, Swati Aggarwal, Debanjan Mahata, Rajiv Ratn Shah, Ponnurangam Kumaraguru, and Roger Zimmermann. "Get it scored using AutoSAS—an automated system for scoring short answers." In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, pp. 9662-9669. 2019.
Clark, Kevin, Minh-Thang Luong, Quoc V. Le, and Christopher D. Manning. "Electra: Pre-training text encoders as discriminators rather than generators." arXiv preprint arXiv:2003.10555 (2020).
Jiang, Zihang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. "ConvBERT: Improving BERT with span-based dynamic convolution." arXiv preprint arXiv:2008.02496 (2020).
Lan, Zhenzhong, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. "ALBERT: A lite BERT for self-supervised learning of language representations." arXiv preprint arXiv:1909.11942 (2019).
Liu, Yinhan, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. "RoBERTa: A robustly optimized BERT pretraining approach." arXiv preprint arXiv:1907.11692 (2019).
Loshchilov, Ilya, and Frank Hutter. "Decoupled weight decay regularization." arXiv preprint arXiv:1711.05101 (2017).
Ormerod, Christopher M., Akanksha Malhotra, and Amir Jafari. "Automated essay scoring using efficient transformer-based language models." arXiv preprint arXiv:2102.13136 (2021).
Page, Ellis Batten. "Project Essay Grade: PEG." (2003).
Rodriguez, Pedro Uria, Amir Jafari, and Christopher M. Ormerod. "Language models and Automated Essay Scoring." arXiv preprint arXiv:1909.09482 (2019).
Ramachandran, L., J. Cheng, and P. Foltz. "Identifying patterns for short answer scoring using graph-based lexico-semantic text matching." In Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 97-106. 2015.
Reimers, Nils, and Iryna Gurevych. "Sentence-BERT: Sentence embeddings using siamese BERT-networks." arXiv preprint arXiv:1908.10084 (2019).
Riordan, B., A. Horbach, A. Cahill, T. Zesch, and C. M. Lee. "Investigating neural architectures for short answer scoring." In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pp. 159-168. 2017.
Sanh, Victor, L. Debut, J. Chaumond, and T. Wolf. "DistilBERT, a distilled version of BERT: Smaller, faster, cheaper and lighter." arXiv preprint arXiv:1910.01108 (2019).
Shermis, Mark D. "State-of-the-art automated essay scoring: Competition, results, and future directions from a United States demonstration." Assessing Writing 20 (2014): 53-76.
Shermis, Mark D. "Contrasting state-of-the-art in the machine scoring of short-form constructed responses." Educational Assessment 20, no. 1 (2015): 46-65.
Sun, Lichao, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, and Caiming Xiong. "Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT." arXiv preprint arXiv:2003.04985 (2020).
Sun, Zhiqing, Hongkun Yu, Xiaodan Song, Renjie Liu, Yiming Yang, and Denny Zhou. "MobileBERT: a compact task-agnostic BERT for resource-limited devices." arXiv preprint arXiv:2004.02984 (2020).
Sung, Chul, Tejas Indulal Dhamecha, and Nirmal Mukhi. "Improving short answer grading using transformer-based pre-training." In International Conference on Artificial Intelligence in Education, pp. 469-481. Springer, Cham, 2019.
Akiba, Takuya, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. "Optuna: A next-generation hyperparameter optimization framework." In KDD, 2019.
Taghipour, Kaveh, and Hwee Tou Ng. "A neural approach to automated essay scoring." In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 1882-1891. 2016.
Uto, Masaki, Yikuan Xie, and Maomi Ueno. "Neural automated essay scoring incorporating handcrafted features." In Proceedings of the 28th International Conference on Computational Linguistics, pp. 6077-6088. 2020.
Wang, Alex, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. "GLUE: A multi-task benchmark and analysis platform for natural language understanding." arXiv preprint arXiv:1804.07461 (2018).
Williamson, David M., Xiaoming Xi, and F. Jay Breyer. "A framework for evaluation and use of automated scoring." Educational Measurement: Issues and Practice 31, no. 1 (2012): 2-13.
Wolf, Thomas, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, et al. "Huggingface's transformers: State-of-the-art natural language processing." arXiv preprint arXiv:1910.03771 (2019).
Yang, Zhilin, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. "XLNet: Generalized autoregressive pretraining for language understanding." Advances in Neural Information Processing Systems 32 (2019).
| [] |
[
"One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation",
"One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation"
] | [
"Chenze Shao shaochenze18z@ict.ac.cn \nKey Laboratory of Intelligent Information Processing Institute of Computing Technology\nChinese Academy of Sciences (ICT/CAS\n\n\nUniversity of Chinese Academy of Sciences\n\n",
"Xuanfu Wu wuxuanfu20s@ict.ac.cn \nKey Laboratory of Intelligent Information Processing Institute of Computing Technology\nChinese Academy of Sciences (ICT/CAS\n\n\nUniversity of Chinese Academy of Sciences\n\n",
"Yang Feng fengyang@ict.ac.cn \nKey Laboratory of Intelligent Information Processing Institute of Computing Technology\nChinese Academy of Sciences (ICT/CAS\n\n\nUniversity of Chinese Academy of Sciences\n\n"
] | [
"Key Laboratory of Intelligent Information Processing Institute of Computing Technology\nChinese Academy of Sciences (ICT/CAS\n",
"University of Chinese Academy of Sciences\n",
"Key Laboratory of Intelligent Information Processing Institute of Computing Technology\nChinese Academy of Sciences (ICT/CAS\n",
"University of Chinese Academy of Sciences\n",
"Key Laboratory of Intelligent Information Processing Institute of Computing Technology\nChinese Academy of Sciences (ICT/CAS\n",
"University of Chinese Academy of Sciences\n"
] | [
"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"
] | Non-autoregressive neural machine translation (NAT) suffers from the multi-modality problem: the source sentence may have multiple correct translations, but the loss function is calculated only according to the reference sentence. Sequence-level knowledge distillation makes the target more deterministic by replacing the target with the output from an autoregressive model. However, the multi-modality problem in the distilled dataset is still nonnegligible. Furthermore, learning from a specific teacher limits the upper bound of the model capability, restricting the potential of NAT models. In this paper, we argue that one reference is not enough and propose diverse distillation with reference selection (DDRS) for NAT. Specifically, we first propose a method called SeedDiv for diverse machine translation, which enables us to generate a dataset containing multiple high-quality reference translations for each source sentence. During the training, we compare the NAT output with all references and select the one that best fits the NAT output to train the model. Experiments on widely-used machine translation benchmarks demonstrate the effectiveness of DDRS, which achieves 29.82 BLEU with only one decoding pass on WMT14 En-De, improving the state-of-the-art performance for NAT by over 1 BLEU. (* Corresponding author: Yang Feng. Source code: https://github.com/ictnlp/DDRS-NAT.) | 10.18653/v1/2022.naacl-main.277 | [
"https://www.aclanthology.org/2022.naacl-main.277.pdf"
] | 249,192,028 | 2205.14333 | 7ae957290bff11afd782bf5cecf907a2634034f0 |
One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation
July 10-15, 2022
Chenze Shao shaochenze18z@ict.ac.cn
Key Laboratory of Intelligent Information Processing Institute of Computing Technology
Chinese Academy of Sciences (ICT/CAS
University of Chinese Academy of Sciences
Xuanfu Wu wuxuanfu20s@ict.ac.cn
Key Laboratory of Intelligent Information Processing Institute of Computing Technology
Chinese Academy of Sciences (ICT/CAS
University of Chinese Academy of Sciences
Yang Feng fengyang@ict.ac.cn
Key Laboratory of Intelligent Information Processing Institute of Computing Technology
Chinese Academy of Sciences (ICT/CAS
University of Chinese Academy of Sciences
One Reference Is Not Enough: Diverse Distillation with Reference Selection for Non-Autoregressive Translation
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJuly 10-15, 2022
Non-autoregressive neural machine translation (NAT) suffers from the multi-modality problem: the source sentence may have multiple correct translations, but the loss function is calculated only according to the reference sentence. Sequence-level knowledge distillation makes the target more deterministic by replacing the target with the output from an autoregressive model. However, the multi-modality problem in the distilled dataset is still nonnegligible. Furthermore, learning from a specific teacher limits the upper bound of the model capability, restricting the potential of NAT models. In this paper, we argue that one reference is not enough and propose diverse distillation with reference selection (DDRS) for NAT. Specifically, we first propose a method called SeedDiv for diverse machine translation, which enables us to generate a dataset containing multiple high-quality reference translations for each source sentence. During the training, we compare the NAT output with all references and select the one that best fits the NAT output to train the model. Experiments on widely-used machine translation benchmarks demonstrate the effectiveness of DDRS, which achieves 29.82 BLEU with only one decoding pass on WMT14 En-De, improving the state-of-the-art performance for NAT by over 1 BLEU. (* Corresponding author: Yang Feng. Source code: https://github.com/ictnlp/DDRS-NAT.)
Introduction
Non-autoregressive machine translation (Gu et al., 2018) has received increasing attention in the field of neural machine translation for the property of parallel decoding. Despite the significant speedup, NAT suffers from performance degradation compared to autoregressive models (Bahdanau et al., 2015; Vaswani et al., 2017) due to the multi-modality problem: the source sentence may have multiple correct translations, but the loss is calculated only according to the reference sentence. The multi-modality problem causes the loss function to be inaccurate, since NAT has no prior knowledge about the reference sentence during generation, whereas the teacher forcing algorithm (Williams and Zipser, 1989) makes autoregressive models less affected by feeding in the golden context.
How to overcome the multi-modality problem has been a central focus of recent efforts for improving NAT models (Shao et al., 2019; Sun and Yang, 2020; Du et al., 2021). A standard approach is sequence-level knowledge distillation (Kim and Rush, 2016), which attacks the multi-modality problem by replacing the target side of the training set with the output from an autoregressive model. The distilled dataset is less complex and more deterministic, which has become a default configuration of NAT. However, the multi-modality problem in the distilled dataset is still nonnegligible. Furthermore, the distillation requires NAT models to imitate the behavior of a specific autoregressive teacher, which limits the upper bound of the model capability and restricts the potential of developing stronger NAT models.
In this paper, we argue that one reference is not enough and propose diverse distillation with reference selection (DDRS) for NAT. Diverse distillation generates a dataset containing multiple reference translations for each source sentence, and reference selection finds the reference translation that best fits the model output for the training. As illustrated in Figure 1, diverse distillation provides the candidate references "I must leave tomorrow" and "Tomorrow I must leave", and reference selection selects the former, which fits better with the model output. More importantly, NAT with DDRS does not imitate the behavior of a specific teacher but learns selectively from multiple references, which improves the upper bound of the model capability and allows for developing stronger NAT models.
The objective of diverse distillation is similar to the task of diverse machine translation, which aims to generate diverse translations with high translation quality (Li et al., 2016; Vijayakumar et al., 2018; Shen et al., 2019; Li et al., 2021). We propose a simple yet effective method called SeedDiv, which directly uses the randomness in model training controlled by random seeds to produce diverse reference translations without losing translation quality. For reference selection, we compare the model output with all references and select the one that best fits the model output, which can be conducted efficiently without extra neural computations. The model learns from all references indiscriminately in the beginning, and gradually focuses more on the selected reference that provides accurate training signals for the model. We also extend the reference selection approach to reinforcement learning, where we encourage the model to move towards the selected reference that gives the maximum reward to the model output.
We conduct experiments on widely-used machine translation benchmarks to demonstrate the effectiveness of our method. On the competitive task WMT14 En-De, DDRS achieves 27.60 BLEU with 14.7× speedup and 28.33 BLEU with 5.0× speedup, outperforming the autoregressive Transformer while maintaining considerable speedup. When using the larger version of Transformer, DDRS even achieves 29.82 BLEU with only one decoding pass, improving the state-of-the-art performance level for NAT by over 1 BLEU.
Background

Non-Autoregressive Translation

Gu et al. (2018) proposes non-autoregressive machine translation to reduce the translation latency through parallel decoding. The vanilla-NAT models the translation probability from the source sentence $x$ to the target sentence $y = \{y_1, \ldots, y_T\}$ as:
$$p(y|x, \theta) = \prod_{t=1}^{T} p_t(y_t|x, \theta), \tag{1}$$
where $\theta$ is a set of model parameters and $p_t(y_t|x, \theta)$ is the translation probability of word $y_t$ in position $t$. The vanilla-NAT is trained to minimize the cross-entropy loss:
$$\mathcal{L}_{CE}(\theta) = -\sum_{t=1}^{T} \log\big(p_t(y_t|x, \theta)\big). \tag{2}$$
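For concreteness, a minimal PyTorch sketch of the position-wise loss in Equation (2) might look as follows; this is our illustration, not the authors' implementation.

```python
import torch.nn.functional as F

def nat_cross_entropy(logits, targets, pad_id=0):
    """logits: (B, T, V) parallel decoder outputs; targets: (B, T) reference ids.
    Positions are predicted independently, so the loss is a sum of
    position-wise cross-entropies (Equation 2), ignoring padding."""
    return F.cross_entropy(
        logits.transpose(1, 2),  # (B, V, T), the layout cross_entropy expects
        targets,
        ignore_index=pad_id,
        reduction="sum",
    )
```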
The vanilla-NAT has to know the target length before constructing the decoder inputs. The target length T is set as the reference length during the training and obtained from a length predictor during the inference. The target length cannot be changed dynamically during the inference, so it often requires generating multiple candidates with different lengths and re-scoring them to produce the final translation (Gu et al., 2018).
The length issue can be overcome by connectionist temporal classification (CTC, Graves et al., 2006). CTC-based models usually generate a long alignment containing repetitions and blank tokens. The alignment is post-processed by a collapsing function $\Gamma^{-1}$ to recover a normal sentence, which first collapses consecutive repeated tokens and then removes all blank tokens. CTC is capable of efficiently finding all alignments $a$ from which the reference sentence $y$ can be recovered, and marginalizing the log-likelihood with dynamic programming:
$$\log p(y|x, \theta) = \log \sum_{a \in \Gamma(y)} p(a|x, \theta). \tag{3}$$
Due to the superior performance and the flexibility of generating predictions with variable length, CTC is receiving increasing attention in nonautoregressive translation (Libovický and Helcl, 2018;Kasner et al., 2020;Saharia et al., 2020;Gu and Kong, 2020;Zheng et al., 2021).
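The marginalization in Equation (3) is exactly what the standard CTC loss computes with dynamic programming. A hedged PyTorch sketch of the loss and of the collapsing function $\Gamma^{-1}$ (our illustration, not the authors' code):

```python
import torch
import torch.nn.functional as F

def ctc_nat_loss(log_probs, targets, target_lengths, blank=0):
    """log_probs: (S, B, V) log-softmax scores over an alignment of length S;
    targets: (B, L) padded reference ids. F.ctc_loss sums over all alignments
    a in Gamma(y) via dynamic programming, as in Equation (3)."""
    S, B, _ = log_probs.shape
    input_lengths = torch.full((B,), S, dtype=torch.long)
    return F.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=blank)

def collapse(alignment, blank=0):
    """Gamma^{-1}: merge consecutive repeated tokens, then drop blanks."""
    out, prev = [], None
    for tok in alignment:
        if tok != prev and tok != blank:
            out.append(tok)
        prev = tok
    return out
```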
Sequence-Level Knowledge Distillation
Sequence-level Knowledge Distillation (SeqKD, Kim and Rush, 2016) is a widely used knowledge distillation method in NMT, which trains the student model to mimic the teacher's actions at the sequence level. Given the student prediction p and the teacher prediction q, the distillation loss is:
$$\mathcal{L}_{SeqKD}(\theta) = -\sum_{y} q(y|x) \log p(y|x, \theta) \approx -\log p(\hat{y}|x, \theta), \tag{4}$$
where $\theta$ are the parameters of the student model and $\hat{y}$ is the output from running beam search with the teacher model. The teacher output $\hat{y}$ is used to approximate the teacher distribution; otherwise the distillation loss would be intractable. The procedure of sequence-level knowledge distillation is: (1) train a teacher model, (2) run beam search over the training set with this model, and (3) train the student model with cross-entropy on the resulting pairs of source sentences and teacher translations. The distilled dataset is less complex and more deterministic, which helps to alleviate the multi-modality problem, and it has become a default configuration in NAT models.
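A schematic sketch of the three-step procedure; the `teacher.beam_search` call is a placeholder for whatever decoding API the teacher model actually exposes.

```python
def sequence_level_kd(teacher, train_pairs, beam_size=5):
    """Sketch of SeqKD (Kim and Rush, 2016): step (1) is training `teacher`
    beforehand; this function performs step (2); step (3) trains the student
    with cross-entropy on the returned pairs."""
    distilled = []
    for src, _tgt in train_pairs:
        y_hat = teacher.beam_search(src, beam_size=beam_size)  # step (2)
        distilled.append((src, y_hat))
    return distilled
```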
Diverse Machine Translation
The task of diverse machine translation requires generating diverse translations while maintaining high translation quality. Assume the reference sentence is $y$ and we have multiple translations $\{y_1, \ldots, y_k\}$; the translation quality is measured by the average reference BLEU (rfb):
$$\text{rfb} = \frac{1}{k} \sum_{i=1}^{k} \text{BLEU}(y, y_i), \tag{5}$$
and the translation diversity is measured by the average pairwise BLEU (pwb):
$$\text{pwb} = \frac{1}{(k-1)k} \sum_{i=1}^{k} \sum_{j \neq i} \text{BLEU}(y_i, y_j). \tag{6}$$
Higher reference BLEU indicates better translation quality and lower pairwise BLEU indicates better translation diversity. Generally speaking, there is a trade-off between quality and diversity. In existing methods, translation diversity has to be achieved at the cost of losing translation quality.
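Equations (5) and (6) are straightforward to compute; below is a sketch using sacrebleu (assumed available; any sentence-level BLEU implementation would do).

```python
from itertools import permutations
from sacrebleu import sentence_bleu

def reference_bleu(reference, hypotheses):
    """Average reference BLEU (Equation 5): higher means better quality."""
    return sum(sentence_bleu(h, [reference]).score for h in hypotheses) / len(hypotheses)

def pairwise_bleu(hypotheses):
    """Average pairwise BLEU (Equation 6): lower means more diverse.
    permutations() yields the k(k-1) ordered pairs in the double sum."""
    pairs = list(permutations(hypotheses, 2))
    return sum(sentence_bleu(a, [b]).score for a, b in pairs) / len(pairs)
```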
Approach
In this section, we first introduce the diverse distillation technique we use to generate multiple reference translations for each source sentence, and then apply reference selection to select the reference that best fits the model output for the training.
Diverse Distillation
The objective of diverse distillation is to obtain a dataset containing multiple high-quality references for each source sentence, which is similar to the task of diverse machine translation that aims to generate diverse translations with high translation quality. However, the translation diversity is achieved at a certain cost of translation quality in previous work, which is not desired in diverse distillation.
Using the randomness in model training, we propose a simple yet effective method called SeedDiv to achieve translation diversity without losing translation quality. Specifically, given the desired number of translations k, we directly set k different random seeds to train k translation models, where random seeds control the random factors during the model training such as parameter initialization, batch order, and dropout. During the decoding, each model translates the source sentence with beam search, which gives k different translations in total. Notably, SeedDiv does not sacrifice the translation quality to achieve diversity since random seeds do not affect the expected model performance.
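A minimal sketch of SeedDiv follows; `build_model` and `train_fn` stand in for an ordinary NMT training pipeline, and only the seed differs between runs.

```python
import random
import numpy as np
import torch

def train_with_seed(seed, build_model, train_fn):
    """SeedDiv: the random seed is the only source of diversity; it controls
    parameter initialization, batch order, and dropout."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    return train_fn(build_model())

# models = [train_with_seed(s, build_model, train_fn) for s in (1, 2, 3)]
# Decoding each model with beam search then yields k = 3 references per source.
```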
We conduct the experiment on WMT14 En-De to evaluate the performance of SeedDiv. We use the base setting of Transformer and train the model for 150K steps. The detailed configuration is described in section 4.1. We also re-implement several existing methods with the same setting for comparison, including Beam Search, Diverse Beam Search (Vijayakumar et al., 2018), HardMoE (Shen et al., 2019), Head Sampling, and Concrete Dropout. We set the number of translations k = 3, and set the number of heads to be sampled as 3 for head sampling. We also implement a weaker version of our method, SeedDiv-ES, which early stops the training process with only a fraction of the total training steps. We compare these methods in Figure 2.
It is surprising to see that SeedDiv achieves outstanding translation diversity in addition to superior translation quality, outperforming most methods on both translation quality and diversity. Only HardMoE has a better pairwise BLEU than SeedDiv, but its reference BLEU is much lower. The only concern is that SeedDiv requires a larger training cost to train multiple models, so we also use the weaker version SeedDiv-ES for comparison. Though its performance is degraded due to the early stop, SeedDiv-ES still achieves a good trade-off between translation quality and diversity, demonstrating the advantage of using the training randomness controlled by random seeds to generate diverse translations. Therefore, we use SeedDiv as the technique for diverse distillation.
Reference Selection
Losses under Diverse Distillation
After diverse distillation, we obtain a dataset containing k reference sentences y_{1:k} for each source sentence x. Traditional data augmentation algorithms for NMT (Sennrich et al., 2016a; Zhang and Zong, 2016; Zhou and Keung, 2020; Nguyen et al., 2020) generally calculate cross-entropy losses on all data and use their summation to train the model:
L_sum(θ) = −(1/k) Σ_{i=1}^{k} log p(y_i|x, θ). (7)
However, this loss function is inaccurate for NAT due to the increase of data complexity. Sequence-level knowledge distillation works well on NAT by reducing the complexity of the target data. In comparison, the target data generated by diverse distillation is relatively more complex. If NAT learns from the k references indiscriminately, it will not eventually converge to any one reference but will generate a mixture of all references.
Using the multi-reference dataset, we propose to train NAT with reference selection to evaluate the model output with better accuracy. We compare the model output with all reference sentences and select the one with the maximum probability assigned by the model. We train the model with only the selected reference:
L_max(θ) = − log max_{1≤i≤k} p(y_i|x, θ). (8)
In this way, we do not fit the model to all references but only encourage it to generate the nearest one, which is an easier and more suitable objective for the model. Besides, when the ability of the autoregressive teacher is limited, the NAT model can learn to ignore bad references in the data and select the clean reference for training, so the capability of NAT is not limited by a specific autoregressive teacher.
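A minimal PyTorch sketch of the selected loss in Equation 8 follows; `log_prob_fn` is an assumed callable returning the scalar log-probability log p(y|x, θ) (for CTC it would marginalize over alignments, as discussed later):

```python
import torch

def loss_max(log_prob_fn, x, references):
    # Score every reference under the current model ...
    log_probs = torch.stack([log_prob_fn(x, y) for y in references])
    # ... and train only towards the reference the model already prefers:
    # L_max = -log max_i p(y_i|x, theta).
    return -log_probs.max()
```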
In addition to minimizing the summed loss L_sum(θ) or the selected loss L_max(θ), there is also an intermediate choice that assigns different weights to the reference sentences. We can optimize the log-likelihood of generating any reference sentence as follows:
L_mid(θ) = − log Σ_{i=1}^{k} p(y_i|x, θ). (9)
The gradient of Equation 9 is equivalent to assigning weight p(y_i|x, θ) / Σ_{j=1}^{k} p(y_j|x, θ) to the cross-entropy loss of each reference sentence y_i. In this way, the model focuses more on suitable references but also assigns non-zero weights to the other references.
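Equation 9 can be implemented as a numerically stable logsumexp over the per-reference log-probabilities, as in this sketch (again with an assumed `log_prob_fn`):

```python
import torch

def loss_mid(log_prob_fn, x, references):
    # Stack the per-reference log-probabilities log p(y_i|x, theta) ...
    log_probs = torch.stack([log_prob_fn(x, y) for y in references])
    # ... and compute -log sum_i p(y_i|x, theta) stably via logsumexp;
    # its gradient weights each reference by its normalized probability.
    return -torch.logsumexp(log_probs, dim=0)
```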
We use a linear annealing schedule with two stages to train the NAT model. In the first stage, we begin with the summation loss L_sum(θ) and linearly anneal the loss to L_mid(θ). Similarly, we linearly switch to the selected loss L_max(θ) in the second stage. We use t and T to denote the current time step and the total number of training steps respectively, and use a constant λ to denote the length of the first stage. The loss function is:
L(θ) = T_1 · L_mid(θ) + (1 − T_1) · L_sum(θ), if t ≤ λT;
L(θ) = T_2 · L_max(θ) + (1 − T_2) · L_mid(θ), if t > λT, (10)
where T_1 and T_2 are defined as:

T_1 = t / (λT),  T_2 = (t − λT) / (T − λT). (11)
In this way, the model learns from all references indiscriminately at the beginning, which serves as a pretraining stage that provides comprehensive knowledge to the model. As the training progresses, the model focuses more on the selected reference, which provides accurate training signals and gradually finetunes the model to the optimal state.
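The schedule in Equations 10-11 reduces to a few lines; this sketch takes the three pre-computed losses and interpolates them according to the current step:

```python
def annealed_loss(l_sum, l_mid, l_max, t, T, lam=2/3):
    # First stage (t <= lam*T): anneal from L_sum to L_mid (T_1 in Eq. 11).
    if t <= lam * T:
        t1 = t / (lam * T)
        return t1 * l_mid + (1 - t1) * l_sum
    # Second stage: anneal from L_mid to L_max (T_2 in Eq. 11).
    t2 = (t - lam * T) / (T - lam * T)
    return t2 * l_max + (1 - t2) * l_mid
```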
Efficient Calculation with CTC
To calculate the probability p(y|x, θ), the vanilla-NAT must set the decoder length to the length of y. Therefore, calculating the probability of k reference sentences requires running the decoder up to k times, which would greatly increase the training cost. Fortunately, for CTC-based NAT, the training cost is nearly the same since its decoder length is only determined by the source sentence. We only need to run the model once and calculate the probabilities of the k reference sentences with dynamic programming, which has a minor cost compared with the forward and backward propagations. In Table 1, we show the calculation cost of L_max(θ) and L_sum(θ) for different models. We use CTC as the baseline model due to its superior performance and training efficiency.
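The following hedged sketch shows how the k reference losses can be obtained from a single CTC forward pass with PyTorch's built-in CTC loss; the tensor shapes follow PyTorch's (T, N, V) convention, and the references are assumed to be 1-D label tensors:

```python
import torch
import torch.nn.functional as F

def ctc_reference_losses(log_probs, references, blank=0):
    # log_probs: (T, 1, V) log-probabilities from ONE decoder forward pass.
    T, _, V = log_probs.shape
    k = len(references)
    # Broadcasting along the batch dimension avoids copying activations.
    expanded = log_probs.expand(T, k, V)
    targets = torch.cat(references)          # 1-D concatenated label ids
    input_lengths = torch.full((k,), T, dtype=torch.long)
    target_lengths = torch.tensor([len(r) for r in references])
    # Per-reference negative log-likelihoods -log p(y_i|x, theta),
    # each computed by CTC dynamic programming.
    return F.ctc_loss(expanded, targets, input_lengths, target_lengths,
                      blank=blank, reduction="none")
```

L_max(θ) is then simply the minimum of the returned per-reference negative log-likelihoods, so only the selected reference contributes to the backward pass.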
Max-Reward Reinforcement Learning
Following Shao et al. (2019, 2021), we finetune the NAT model with the reinforcement learning objective (Williams, 1992; Ranzato et al., 2015):
L_rl(θ) = E_y[log p(y|x, θ) · r(y)], (12)
where r(y) is the reward function, which will be discussed later. The usual practice is to sample a sentence y from the distribution p(y|x, θ) to estimate the above equation. For CTC-based NAT, p(y|x, θ) cannot be directly sampled, so we sample from the equivalent distribution p(a|x, θ) instead. We recover the target sentence by the collapsing function Γ⁻¹ and calculate its probability with dynamic programming to estimate the following equation:
L_rl(θ) = E_a[log p(Γ⁻¹(a)|x, θ) · r(Γ⁻¹(a))]. (13)
The reward function is usually an evaluation metric for machine translation (e.g., BLEU, GLEU), which evaluates the prediction by comparing it with the reference sentence. We use r(y_1, y_2) to denote the reward of prediction y_1 when y_2 is the reference. As we have k references y_{1:k}, we define our reward function to be the maximum reward:
r(y) = max_{1≤i≤k} r(y, y_i). (14)
By optimizing the maximum reward, we encourage the model to move towards the selected reference, which is the closest to the model. Otherwise, rewards provided by other references may mislead the model to generate a mixture of all references.
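Putting Equations 12-14 together, one REINFORCE-style update can be sketched as follows; `sample_alignment`, `collapse`, and `log_prob_fn` are assumed helpers for sampling a ~ p(a|x, θ), applying Γ⁻¹, and scoring the collapsed sentence with dynamic programming, and the loss is negated because the objective is maximized:

```python
def rl_loss(sample_alignment, collapse, log_prob_fn, bleu, x, references):
    a = sample_alignment(x)                             # a ~ p(a|x, theta)
    y = collapse(a)                                     # y = Gamma^{-1}(a)
    reward = max(bleu(y, y_i) for y_i in references)    # Eq. 14
    # Policy-gradient surrogate: minimizing this maximizes Eq. 13.
    return -log_prob_fn(x, y) * reward
```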
Experiments
Experimental Settings
Datasets We conduct experiments on major benchmark datasets for NAT: WMT14 English↔German (En↔De, 4.5M sentence pairs) and WMT16 English↔Romanian (En↔Ro, 0.6M sentence pairs). We also evaluate our approach on a large-scale dataset, WMT14 English→French (En→Fr, 23.7M sentence pairs), and a small-scale dataset, IWSLT14 German→English (De→En, 160K sentence pairs). The datasets are tokenized into subword units using a joint BPE model (Sennrich et al., 2016b). We use BLEU (Papineni et al., 2002) to evaluate the translation quality.
Hyperparameters We use 3 teachers for diverse distillation and set the seed to i when training the i-th teacher. We set the first stage length λ to 2/3. We use sentence-level BLEU as the reward. We adopt Transformer-base (Vaswani et al., 2017) as the model configuration. On the WMT14 datasets, we train AT for 150K steps and train NAT for 300K steps with dropout 0.2. On WMT16 En↔Ro and IWSLT14 De→En, we train AT for 18K steps and train NAT for 150K steps with dropout 0.3. We finetune NAT for 3K steps. The learning rate warms up to 5·10⁻⁴ within 10K steps in pretraining and warms up to 2·10⁻⁵ within 500 steps in RL finetuning, and then decays with the inverse square-root schedule. We average the last 5 checkpoints to obtain the final model. We use GeForce RTX 3090 GPUs for training and inference. We implement our models based on the open-source framework fairseq (Ott et al., 2019).
Knowledge Distillation For baseline NAT models, we follow previous works on NAT to apply sequence-level knowledge distillation (Kim and Rush, 2016) to make the target more deterministic. Our method applies diverse distillation with k = 3 by default, that is, we use SeedDiv to generate 3 reference sentences for each source sentence.
Beam Search Decoding For autoregressive models, we use beam search with beam width 5 for inference. For NAT, the most straightforward way is to generate the most probable token at each position. Furthermore, CTC-based models also support beam search decoding, optionally combined with n-gram language models (Kasner et al., 2020). Following Gu and Kong (2020), we use beam width 20 combined with a 4-gram language model to search the target sentence, which can be implemented efficiently in C++².
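A hedged usage sketch of the ctcdecode package from the footnote is shown below; the KenLM path and the LM weights alpha/beta are placeholders, not the values used in our experiments:

```python
from ctcdecode import CTCBeamDecoder

def ctc_beam_decode(log_probs, vocab, lm_path="lm.4gram.arpa"):
    # log_probs: (batch, time, vocab) log-probabilities from the NAT decoder;
    # vocab is the list of output tokens, with the blank at index 0.
    decoder = CTCBeamDecoder(
        vocab, model_path=lm_path,
        alpha=0.5, beta=0.5,          # illustrative LM weight / word bonus
        beam_width=20, blank_id=0, log_probs_input=True)
    beam_results, beam_scores, timesteps, out_lens = decoder.decode(log_probs)
    # Top hypothesis for the first sentence, truncated to its length.
    return beam_results[0][0][: out_lens[0][0]]
```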
Main Results
We compare the performance of DDRS and existing methods in Table 2. Compared with the competitive CTC baseline, DDRS achieves a strong improvement of more than 1.5 BLEU on average, demonstrating the effectiveness of diverse distillation and reference selection. Compared with existing methods, DDRS beats the state-of-the-art for one-pass NAT on all benchmarks and beats the autoregressive Transformer on most benchmarks with a 14.7× speedup over it. The performance of DDRS is further boosted by beam search and a 4-gram language model, which even outperforms all iterative NAT models with only one-pass decoding. Notably, on WMT16 En↔Ro, our method improves the state-of-the-art performance for NAT by over 1 BLEU. Compared with autoregressive models, our method outperforms the Transformer with knowledge distillation, and meanwhile maintains a 5.0× speedup over it. We further explore the capability of DDRS with a larger model size and stronger teacher models. We use the big version of Transformer for distillation, and also add 3 right-to-left (R2L) teachers to enrich the references. We respectively use Transformer-base and Transformer-big as the NAT architecture and report the performance of DDRS in Table 3. Surprisingly, the performance of DDRS can be further greatly boosted by using a larger model size and stronger teachers. DDRS-big with beam search achieves 29.82 BLEU on WMT14 En-De, which is close to the state-of-the-art performance of autoregressive models on this competitive dataset and improves the state-of-the-art performance for NAT by over 1 BLEU with only one-pass decoding.
We also evaluate our approach on the large-scale dataset WMT14 En-Fr and the small-scale dataset IWSLT14 De-En. Table 4 shows that DDRS still achieves considerable improvements over the CTC baseline, and DDRS with beam search can outperform the autoregressive Transformer.
Ablation Study
In Table 5, we conduct an ablation study to analyze the effect of the techniques used in DDRS. First, we separately use the loss functions defined in Equation 7, Equation 8, and Equation 9 to train the model. The summation loss L_sum(θ) has a similar performance to the CTC baseline, showing that simply using multiple references is not helpful for NAT due to the increase of data complexity. The other two losses, L_mid(θ) and L_max(θ), achieve considerable improvements over the CTC baseline, demonstrating the effectiveness of reference selection. Then we use different λ to verify the effect of the annealing schedule. With the annealing schedule, the loss is a combination of the three losses but performs better than each of them. Though the summation loss L_sum(θ) does not perform well when used separately, it can play the role of pretraining and improve the final performance. When λ is 2/3, the annealing schedule performs the best and improves L_max(θ) by about 0.3 BLEU.
Finally, we verify the effect of the reward function during finetuning. When choosing a random reference to calculate the reward, finetuning barely brings improvement to the model. The average reward is better than the random reward, and the maximum reward provided by the selected reference performs the best.
DDRS on Autoregressive Transformer
Though DDRS is proposed to alleviate the multi-modality problem for NAT, it can also be applied to autoregressive models. In Table 6, we report the performance of the autoregressive Transformer when trained with the proposed DDRS losses. In contrast to NAT, AT prefers the summation loss L_sum, and the other two losses based on reference selection even degrade AT performance. It is within our expectation that AT models do not benefit much from reference selection. NAT generates the whole sentence simultaneously without any prior knowledge about the reference sentence, so the reference may not fit the NAT output well, in which case DDRS helps by selecting an appropriate reference for training. In comparison, AT models generally apply the teacher forcing algorithm (Williams and Zipser, 1989) for training, which feeds the golden context to guide the generation of the reference sentence. With teacher forcing, AT models do not suffer much from the multi-modality problem and therefore do not need reference selection. Besides, as shown in Table 1, another disadvantage is that the training cost of DDRS is nearly k times as large, so we do not recommend applying DDRS to AT.
Effect of Diverse Distillation
In the diverse distillation part of DDRS, we apply SeedDiv to generate multiple references. Other diverse translation techniques can also be used for diverse distillation. In this section, we evaluate the effect of the diverse distillation technique on the performance of DDRS. Besides SeedDiv, we also use HardMoE (Shen et al., 2019) and Concrete Dropout (Wu et al., 2020) to generate multiple references, and report their performance in Table 7. When applying other techniques for diverse distillation, the performance of DDRS significantly decreases. The performance degradation indicates the importance of a high reference BLEU in diverse distillation, as the NAT student directly learns from the generated references.
Effect of Reward
There are many automatic metrics to evaluate translation quality. To measure the effect of the reward, we respectively use different automatic metrics as the reward for RL, including traditional metrics (BLEU (Papineni et al., 2002), METEOR (Banerjee and Lavie, 2005), GLEU (Wu et al., 2016)) and pretraining-based metrics (BERTScore (Zhang et al., 2020), BLEURT (Sellam et al., 2020)). We report the results in Table 8. Comparing the three traditional metrics, we can see that there is no significant difference in their performance. The two pretraining-based metrics perform only slightly better than the traditional metrics. Considering the performance and computational cost, we use the traditional metric BLEU as the reward.
Number of References
In this section, we evaluate how the number of references affects the DDRS performance. We set the number of references k to different values and train the CTC model with reference selection. We report the performance of DDRS with different k in Table 9. The improvement brought by increasing k is considerable when k is small, but it soon becomes marginal. Therefore, it is reasonable to use a moderate number of references such as k = 3 to balance the distillation cost and performance.
Time Cost
The cost of preparing the training data is larger for DDRS since it requires training k teacher models and using each model to decode the training set. We argue that the cost is acceptable since the distillation cost is minor compared to the training cost of NAT, and we can reduce the training cost to make up for it. In Table 10, we report the performance and time cost of models with different batch sizes on the test set of WMT14 En-De. DDRS makes up for the larger distillation cost by using a smaller training batch, which has a similar total cost to the CTC model with a 64K batch and achieves superior performance compared to models with a 128K batch.

Related Work

Gu et al. (2018) propose non-autoregressive translation to reduce translation latency, which suffers from the multi-modality problem. A line of work introduces latent variables to model the non-determinism in the translation process, where the latent variables are based on fertilities (Gu et al., 2018), vector quantization (Kaiser et al., 2018; Roy et al., 2018; Bao et al., 2021), and variational inference (Ma et al., 2019; Shu et al., 2020). Another branch of work proposes training objectives that are less influenced by the multi-modality problem (Shao et al., 2019, 2021; Shan et al., 2021; Du et al., 2021). Some researchers consider transferring knowledge from autoregressive models to NAT (Li et al., 2019; Wei et al., 2019; Guo et al., 2020a; Sun and Yang, 2020). Besides, some work proposes iterative NAT models that refine the model outputs with multi-pass iterative decoding (Lee et al., 2018; Gu et al., 2019; Ghazvininejad et al., 2019; Kasai et al., 2020). Our work is most related to CTC-based NAT models (Graves et al., 2006; Libovický and Helcl, 2018; Kasner et al., 2020; Saharia et al., 2020; Zheng et al., 2021; Gu and Kong, 2020), which apply the CTC loss to model latent alignments for NAT. In autoregressive models, translations different from the reference can be evaluated with reinforcement learning (Ranzato et al., 2015), probabilistic n-gram matching (Shao et al., 2018), or an evaluation module (Feng et al., 2020). Our work is also related to the task of diverse machine translation. Li et al. (2016) and Vijayakumar et al. (2018) adjust the beam search algorithm by introducing regularization terms to encourage generating diverse outputs. He et al. (2018) and Shen et al. (2019) introduce latent variables with the mixture-of-experts method and use different latent variables to generate diverse translations. Sun et al. (2020) generate diverse translations by sampling different attention heads. Wu et al. (2020) train the translation model with concrete dropout and sample different models from a posterior distribution. Li et al. (2021) generate different translations for the input sentence by mixing it with different sentence pairs sampled from the training corpus. Nguyen et al. (2020) augment the training set by translating the source-side and target-side data with multiple translation models, but they do not evaluate the diversity of the augmented data.
Conclusion
In this paper, we propose diverse distillation with reference selection (DDRS) for NAT. Diverse distillation generates a dataset containing multiple references for each source sentence, and reference selection finds the best reference for the training. DDRS demonstrates its effectiveness on various benchmarks, setting new state-of-the-art performance levels for non-autoregressive translation.
Figure 1: Illustration of diverse distillation and reference selection. Diverse distillation provides multiple references, and reference selection selects the one that best fits the model output for the training.
Figure 2: Reference BLEU and pairwise BLEU scores of SeedDiv and other diverse translation methods on the test set of WMT14 En-De. We do not use compound split to keep consistency with previous work.
Models      | L_max(θ) For / Back | L_sum(θ) For / Back
AT          | k× / 1×             | k× / k×
vanilla-NAT | k× / 1×             | k× / k×
CTC         | 1× / 1×             | 1× / 1×

Table 1: The calculation cost of L_max(θ) and L_sum(θ) for different models. 'For' and 'Back' indicate forward and backward propagations respectively.
Table 2: Performance comparison between our models and existing methods. The speedup is measured on the WMT14 En-De test set. N denotes the length of the translation. k means ensemble distillation (Freitag et al., 2017) from an ensemble of k AT models. '-' means not reported.
Table 3: Performance of DDRS on the test set of WMT14 En-De with Transformer-big for distillation.

Models          | En-Fr | De-En
Transformer     | 40.15 | 34.17
CTC             | 38.40 | 31.37
DDRS            | 39.91 | 33.12
DDRS +beam20&lm | 40.59 | 34.74

Table 4: Performance of DDRS on the test sets of WMT14 En-Fr and IWSLT14 De-En.
Losses / λ / Reward    | BLEU1 | BLEU2
L_sum only             | 24.61 | 25.97
L_mid only             | 25.23 | 26.90
L_max only             | 25.31 | 26.88
annealing, λ = 0       | 25.41 | 26.99
annealing, λ = 1/3     | 25.48 | 27.13
annealing, λ = 2/3     | 25.59 | 27.18
annealing, λ = 1       | 25.45 | 27.09
λ = 2/3, random reward | 25.63 | 27.26
λ = 2/3, average reward| 25.79 | 27.51
λ = 2/3, maximum reward| 25.92 | 27.60

Table 5: Ablation study on WMT14 En-De with different combinations of techniques. BLEU1 is the BLEU score on the validation set. BLEU2 is the BLEU score on the test set. The validation performance of the CTC baseline is 24.57 BLEU. λ is the length of the first training stage. 'random' means the reward of a random reference, 'average' means the average reward, and 'maximum' means the maximum reward among all references.
Table 6: The performance of AT and CTC-based NAT on the same diverse distillation dataset of WMT14 En-De with different loss functions. L_CE is the cross-entropy loss with sequence-level distillation. L_sum, L_mid, and L_max described in Section 3.2.1 are losses for the diverse distillation dataset.
Methods | pwb ⇓ | rfb ⇑ | BLEU
HardMoE | 53.57 | 24.77 | 24.51
Dropout | 69.71 | 26.23 | 25.35
SeedDiv | 59.87 | 26.99 | 25.92

Table 7: Pairwise BLEU (pwb) and reference BLEU (rfb) scores of diverse translation techniques and their DDRS performance on the WMT14 En-De validation set. pwb and rfb scores are measured on the WMT14 En-De test set without compound split.
Table 8: BLEU scores on WMT test sets when using different automatic metrics as the reward to finetune CTC.
Table 9: Performance of DDRS with different numbers of references k on the WMT14 En-De validation set.

Models     | Distill | Train | Total | BLEU
CTC (64K)  | 5.5h    | 26.4h | 31.9h | 26.34
CTC (128K) | 5.5h    | 52.5h | 58.0h | 26.59
DDRS (32K) | 16.5h   | 19.3h | 35.8h | 27.60

Table 10: Performance and time cost of models with different batch sizes on WMT14 En-De. The time cost is measured on 8 GeForce RTX 3090 GPUs. The cost of Distill includes training the teacher models and decoding the source sentences.
² https://github.com/parlance/ctcdecode
Acknowledgement

We thank the anonymous reviewers for their insightful comments. This work was supported by the National Key R&D Program of China (NO.2017YFE9132900).
References

Dzmitry Bahdanau, KyungHyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR 2015.
Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 65-72, Ann Arbor, Michigan. Association for Computational Linguistics.
Yu Bao, Shujian Huang, Tong Xiao, Dongqi Wang, Xinyu Dai, and Jiajun Chen. 2021. Non-autoregressive translation by learning target categorical codes. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5749-5759, Online. Association for Computational Linguistics.
Cunxiao Du, Zhaopeng Tu, and Jing Jiang. 2021. Order-agnostic cross entropy for non-autoregressive machine translation. In ICML.
Yang Feng, Wanying Xie, Shuhao Gu, Chenze Shao, Wen Zhang, Zhengxin Yang, and Dong Yu. 2020. Modeling fluency and faithfulness for diverse neural machine translation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 59-66. AAAI Press.
Markus Freitag, Yaser Al-Onaizan, and Baskaran Sankaran. 2017. Ensemble distillation for neural machine translation. arXiv preprint arXiv:1702.01802.
Xinwei Geng, Xiaocheng Feng, and Bing Qin. 2021. Learning to rewrite for non-autoregressive neural machine translation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 3297-3308, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Marjan Ghazvininejad, Vladimir Karpukhin, Luke Zettlemoyer, and Omer Levy. 2020. Aligned cross entropy for non-autoregressive machine translation. In ICML.
Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Mask-predict: Parallel decoding of conditional masked language models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6112-6121.
Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber. 2006. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proceedings of the 23rd International Conference on Machine Learning, ICML '06, pages 369-376, New York, NY, USA. Association for Computing Machinery.
Jiatao Gu, James Bradbury, Caiming Xiong, Victor O.K. Li, and Richard Socher. 2018. Non-autoregressive neural machine translation. In International Conference on Learning Representations.
Jiatao Gu and Xiang Kong. 2020. Fully non-autoregressive neural machine translation: Tricks of the trade.
Jiatao Gu, Changhan Wang, and Junbo Zhao. 2019. Levenshtein transformer. In Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
Junliang Guo, Xu Tan, Linli Xu, Tao Qin, Enhong Chen, and Tie-Yan Liu. 2020a. Fine-tuning by curriculum learning for non-autoregressive neural machine translation. Proceedings of the AAAI Conference on Artificial Intelligence, 34:7839-7846.
Junliang Guo, Linli Xu, and Enhong Chen. 2020b. Jointly masked sequence-to-sequence model for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 376-385, Online. Association for Computational Linguistics.
Xuanli He, Gholamreza Haffari, and Mohammad Norouzi. 2018. Sequence to sequence mixture model for diverse machine translation. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 583-592, Brussels, Belgium. Association for Computational Linguistics.
Chenyang Huang, Hao Zhou, Osmar R. Zaiane, Lili Mou, and Lei Li. 2021. Non-autoregressive translation with layer-wise prediction and deep supervision. ArXiv, abs/2110.07515.
Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence models using discrete latent variables. In Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, pages 2390-2399. PMLR.
Jungo Kasai, James Cross, Marjan Ghazvininejad, and Jiatao Gu. 2020. Non-autoregressive machine translation with disentangled context transformer. In ICML.
Zdeněk Kasner, Jindřich Libovický, and Jindřich Helcl. 2020. Improving fluency of non-autoregressive machine translation.
Yoon Kim and Alexander M. Rush. 2016. Sequence-level knowledge distillation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1317-1327, Austin, Texas. Association for Computational Linguistics.
Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.
Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural sequence modeling by iterative refinement. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1173-1182, Brussels, Belgium. Association for Computational Linguistics.
Jicheng Li, Pengzhi Gao, Xuanfu Wu, Yang Feng, Zhongjun He, Hua Wu, and Haifeng Wang. 2021. Mixup decoding for diverse machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 312-320, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Jiwei Li, Will Monroe, and Dan Jurafsky. 2016. A simple, fast diverse decoding algorithm for neural generation.
Zhuohan Li, Zi Lin, Di He, Fei Tian, Tao Qin, Liwei Wang, and Tie-Yan Liu. 2019. Hint-based training for non-autoregressive machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5708-5713, Hong Kong, China. Association for Computational Linguistics.
Jindřich Libovický and Jindřich Helcl. 2018. End-to-end non-autoregressive neural machine translation with connectionist temporal classification. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3016-3021, Brussels, Belgium. Association for Computational Linguistics.
Ye Liu, Yao Wan, Jianguo Zhang, Wenting Zhao, and Philip Yu. 2021. Enriching non-autoregressive transformer with syntactic and semantic structures for neural machine translation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1235-1244, Online. Association for Computational Linguistics.
Xuezhe Ma, Chunting Zhou, Xian Li, Graham Neubig, and Eduard Hovy. 2019. FlowSeq: Non-autoregressive conditional sequence generation with generative flow. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4282-4292, Hong Kong, China. Association for Computational Linguistics.
Xuan-Phi Nguyen, Shafiq R. Joty, Kui Wu, and Ai Ti Aw. 2020. Data diversification: A simple strategy for neural machine translation. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, virtual.
Mohammad Norouzi, Samy Bengio, Zhifeng Chen, Navdeep Jaitly, Mike Schuster, Yonghui Wu, and Dale Schuurmans. 2016. Reward augmented maximum likelihood for neural structured prediction. In Advances in Neural Information Processing Systems 29, pages 1723-1731.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Lihua Qian, Hao Zhou, Yu Bao, Mingxuan Wang, Lin Qiu, Weinan Zhang, Yong Yu, and Lei Li. 2021. Glancing transformer for non-autoregressive neural machine translation. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1993-2003, Online. Association for Computational Linguistics.
Qiu Ran, Yankai Lin, Peng Li, and Jie Zhou. 2020. Learning to recover from multi-modality errors for non-autoregressive neural machine translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3059-3069, Online. Association for Computational Linguistics.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
Aurko Roy, Ashish Vaswani, Arvind Neelakantan, and Niki Parmar. 2018. Theory and experiments on vector quantized autoencoders. arXiv preprint arXiv:1805.11063.
Chitwan Saharia, William Chan, Saurabh Saxena, and Mohammad Norouzi. 2020. Non-autoregressive machine translation with latent alignments. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1098-1108, Online. Association for Computational Linguistics.
Thibault Sellam, Dipanjan Das, and Ankur Parikh. 2020. BLEURT: Learning robust metrics for text generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7881-7892, Online. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Yong Shan, Yang Feng, and Chenze Shao. 2021. Modeling coverage for non-autoregressive neural machine translation. In 2021 International Joint Conference on Neural Networks (IJCNN), pages 1-8. IEEE.
Chenze Shao, Xilin Chen, and Yang Feng. 2018. Greedy search with probabilistic n-gram matching for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4778-4784, Brussels, Belgium. Association for Computational Linguistics.
Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, Xilin Chen, and Jie Zhou. 2019. Retrieving sequential information for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3013-3024, Florence, Italy. Association for Computational Linguistics.
Chenze Shao, Yang Feng, Jinchao Zhang, Fandong Meng, and Jie Zhou. 2021. Sequence-level training for non-autoregressive neural machine translation. Computational Linguistics, 47(4):891-925.
Chenze Shao, Jinchao Zhang, Yang Feng, Fandong Meng, and Jie Zhou. 2020. Minimizing the bag-of-ngrams difference for non-autoregressive neural machine translation. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 198-205. AAAI Press.
Tianxiao Shen, Myle Ott, Michael Auli, and Marc'Aurelio Ranzato. 2019. Mixture models for diverse machine translation: Tricks of the trade. In Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, pages 5719-5728. PMLR.
Raphael Shu, Jason Lee, Hideki Nakayama, and Kyunghyun Cho. 2020. Latent-variable non-autoregressive neural machine translation with deterministic inference using a delta posterior. In AAAI.
Jongyoon Song, Sungwon Kim, and Sungroh Yoon. 2021. AligNART: Non-autoregressive neural machine translation by jointly learning to estimate alignment and translate. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1-14, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Zewei Sun, Shujian Huang, Hao-Ran Wei, Xinyu Dai, and Jiajun Chen. 2020. Generating diverse translation by manipulating multi-head attention. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, pages 8976-8983. AAAI Press.
Zhiqing Sun, Zhuohan Li, Haoqing Wang, Di He, Zi Lin, and Zhihong Deng. 2019. Fast structured decoding for sequence models. In Advances in Neural Information Processing Systems 32, pages 3016-3026.
Zhiqing Sun and Yiming Yang. 2020. An EM approach to non-autoregressive conditional sequence generation. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 9249-9258. PMLR.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000-6010, Red Hook, NY, USA. Curran Associates Inc.
Ashwin K. Vijayakumar, Michael Cogswell, Ramprasaath R. Selvaraju, Qing Sun, Stefan Lee, David J. Crandall, and Dhruv Batra. 2018. Diverse beam search for improved description of complex scenes. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), pages 7371-7379. AAAI Press.
Yiren Wang, Fei Tian, Di He, Tao Qin, ChengXiang Zhai, and Tie-Yan Liu. 2019. Non-autoregressive machine translation with auxiliary regularization. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, pages 5377-5384. AAAI Press.
Bingzhen Wei, Mingxuan Wang, Hao Zhou, Junyang Lin, and Xu Sun. 2019. Imitation learning for non-autoregressive neural machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1304-1312, Florence, Italy. Association for Computational Linguistics.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256.
Ronald J. Williams and David Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2).
Xuanfu Wu, Yang Feng, and Chenze Shao. 2020. Generating diverse translation from model distribution with dropout. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1088-1097, Online. Association for Computational Linguistics.
Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144.
Jiajun Zhang and Chengqing Zong. 2016. Exploiting source-side monolingual data in neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1535-1545.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
Zaixiang Zheng, Hao Zhou, Shujian Huang, Jiajun Chen, Jingjing Xu, and Lei Li. 2021. Duplex sequence-to-sequence learning for reversible machine translation.
Chunting Zhou, Jiatao Gu, and Graham Neubig. 2020. Understanding knowledge distillation in non-autoregressive machine translation. In International Conference on Learning Representations.
Jiawei Zhou and Phillip Keung. 2020. Improving non-autoregressive neural machine translation with monolingual data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1893-1898, Online. Association for Computational Linguistics.
| [
"https://github.com/ictnlp/DDRS-NAT.",
"https://github.com/parlance/ctcdecode"
] |
[
"Signal & Image Processing",
"Signal & Image Processing"
] | [
"Nisreen Abdallah \nSudan University of Science and Technology\nSudan\n",
"Serestina Viriri \nUniversity of KwaZulu-Natal\nSouth Africa\n"
] | [
"Sudan University of Science and Technology\nSudan",
"University of KwaZulu-Natal\nSouth Africa"
] | [
"An International Journal (SIPIJ)"
] | The main aim of this study is the assessment and discussion of a model for handwritten Arabic through segmentation. The framework is proposed based on three steps: pre-processing, segmentation, and evaluation. In the pre-processing step, morphological operators are applied for Connecting Gaps (CGs) in written words. Gaps happen when the pen lifts off during writing, when scanning documents, or while converting images to binary form. In the segmentation step, small diacritics are first removed and then connected components are bounded to segment offline words. A large dataset was utilized in the proposed model to cover a variety of handwriting styles, so as to be more compatible with real-life applications. Consequently, in the automatic evaluation stage, 1,131 images were randomly selected from the IESK-ArDB database and then segmented into sub-words. After small gaps were connected, the model performance evaluation reached 88% against the standard ground truth of the database. The proposed model achieved the highest accuracy when compared with related works. AUTHORS: NISREEN ABDALLAH received a B.Sc. degree in computer science from Omdurman Islamic University and an M.Sc. degree from the University of Khartoum, Sudan. She is currently pursuing a Ph.D. degree in computer science at Sudan University of Science and Technology, Sudan. She also works as a lecturer in the Department of Information Technology at the University of Kordofan. Her research interests include machine learning, computer vision, image processing, and pattern recognition. She has attended and presented at several national and international conferences on handwriting recognition. SERESTINA VIRIRI (Member, IEEE) is currently a Professor of Computer Science at the School of Mathematics, Statistics, and Computer Science, University of KwaZulu-Natal, South Africa. He is an NRF-rated researcher. He has published extensively in accredited computer vision and image processing journals and in international and national conference proceedings. His research areas include computer vision, image processing, pattern recognition, and related areas such as biometrics, medical imaging, and nuclear medicine. Prof. Viriri serves as a reviewer for several accredited journals and has served on program committees for numerous international and national conferences. He has graduated several M.Sc. and Ph.D. students. | 10.5121/sipij.2020.11602 | [
"https://arxiv.org/pdf/2101.02797v1.pdf"
] | 231,419,119 | 2101.02797 | 8b8b5096531fd5655fe67ba5313eaeaa34bd0483 |
Signal & Image Processing
December 2020
Nisreen Abdallah
Sudan University of Science and Technology
Sudan
Serestina Viriri
University of KwaZulu-Natal
South Africa
Signal & Image Processing
An International Journal (SIPIJ)
Vol. 11, No. 6, December 2020. DOI: 10.5121/sipij.2020.11602. Keywords: Handwriting, Words Segmentation, Morphological Operators
INTRODUCTION
Handwriting recognition has attracted researchers' attention in the Optical Character Recognition (OCR) area, and further integration of efforts across different research directions is needed to reach satisfactory outcomes [1]. Handwriting recognition simulates human writing in symbolic representation [2], [3] and has been an intensely studied topic over the past three decades. Handwriting recognition systems are of two types, online and offline: online recognition is performed at the time of writing, while offline recognition is performed after the writing is completed [4].
Cursive handwriting recognition is very challenging [5]; moreover, the Arabic language is a special case, as the overlap between characters and the presence of diacritics such as dots and Hamza complicate the task [6]. The task seems simple, but it is not, even for human beings in the absence of full context [7]. Offline handwriting recognition offers day-to-day services that can be summarized as forms processing, book archiving, signature and writer identification, bank check transactions, and document editing.
Arabic is the main language in North Africa and the Middle East. Arabic, Farsi, Sindhi, Pashto, and Urdu share most of the same alphabet in writing [8]. Arabic words are written from right to left, character after character without spaces in between; however, six characters link from the right and disjoin from the left [1]. Samples of characters disjoining words, written by hand and in print with their names in Arabic and English, are shown in Figure 1. Arabic words are composed from 28 characters. Each character has 2 to 5 shapes, each for a particular position, and characters are linked in a cursive form in a single stroke, which is why segmentation is seen as a difficult task [9].
The Arabic alphabet contains 15 characters that hold dots, positioned above or below the primary character body. Only 6 of the 28 characters connect from the right side when located in the middle of a word; the other characters connect on both sides, left and right [10].
The number of sub-words in a word depends on the number of disjoining characters. This form causes the sub-word phenomenon, also known as Pieces of Arabic Words (PAW), in Arabic and Arabic-like languages [1], [11].
The primary body of the word is completely connected if it has no disjoining character; otherwise, the word is partially connected if it contains more than one disjoining character. Two samples of handwritten words, completely and partially connected, with dot positions and sub-word counts, are shown in Figure 2 and Figure 3. A cursive word is broken at the moment of pen lift-off, or slight parts of the word are lost while documents are scanned. Both cases lead to holes or gaps in the cursive word, and the holes disconnect the word body. Most Arabic handwriting segmentation errors are caused by gaps or discontinuity [12], [13].
OCR has four stages: pre-processing, segmentation, feature extraction, and classification. A reliable and efficient OCR system depends on the essential first pre-processing stage [14]; incorrectly processed parts may lead to invalid classifications or rejected characters [15].
In conclusion, Arabic is written in a cursive form, and six characters disjoin from the left side [16]. Gaps in writing are produced by pen lift-off, ink fading, and the application of pre-processing operations [17].
Handwriting recognition faces many challenges that differ from printed text. The challenges of handwriting, as shown in Figure 4, include variation in styles, changes in stroke thickness, different writing materials, ligatures, touching, and overlapping between characters. These challenges are not found in printed styles, because printed words have the same style and thickness and do not have ligatures in most font types. The many challenges researchers face in Arabic handwriting recognition, together with suggested solutions, are discussed in more detail in [1], [18], [19]. This paper presents three main ideas: first, to design a pre-processing method that connects gaps (CGs) using a combination of three morphological operators; second, to implement a model that segments words into sub-words; and third, to evaluate the segmentation results against the ground truth.
The remaining parts of this paper are arranged as follows: Section 2 describes the related works. Section 3 presents the methods and techniques implemented in the proposed model stages. Section 4 discusses the experimental results. Finally, the conclusion and future work are discussed in Section 5.
RELATED WORKS
Much research has been carried out in the field of segmentation of offline Arabic handwritten scripts; however, segmenting words into characters still requires intensive study. Two approaches are used in recognition systems: segmentation-based and segmentation-free. The segmentation-based approach segments words into characters or smaller units and is also known as the analytical approach. The segmentation-free approach takes the word as a whole unit and is known as the global approach. Some studies using the first approach assumed there were no gaps in the written words, even when gaps had already occurred [20]. Most studies prefer the second approach, dealing with a word as a whole, complete unit due to the variability and unconstrained nature of human writing [21]. This paper segments words into sub-words using the first, segmentation-based approach; our motivation is that few efforts exist to segment words into sub-words. The two subsections below discuss related works on morphological operations and the connected component technique, summarized in Table 1.
Morphological Operators
Morphological operators are often applied as a filter to reduce or remove noise from images in the pre-processing stage. Such filters enhance and improve OCR system results. Few studies have investigated Arabic character segmentation based on morphological operators [15].
A review of simple methods designed to remove noise that may appear in scanned document images is discussed intensively in [22]. Research in this area therefore needs to be covered more widely.
Most research on Arabic handwriting pre-processing has focused on baseline detection and skew correction, as in [23], [24], [25]. A deep discussion of mathematical morphology algorithms that enhance images before recognition is given in [26].
Due to the importance of PDF documents in day-to-day services, much research has been done in this area. One recent study, conducted by N. H. Barna et al. [27], analyzed various components of documents using opening, dilation, and other morphological operations. The method segmented printed documents into text, image, table, and cell regions using the bounding box technique. The data originated from different sources, both digitally born and manually scanned PDFs. The calculated accuracy shows that table and cell segmentation performed better than image and text segmentation.
On the other side, recent research has used two morphological operations, the top hat and the bottom hat, as a filter to remove unwanted noise and enhance the contrast of medical, remote sensing, and natural images [28].
An investigation was carried out using six morphological operators, proposed in [29], to enhance binary images extracted from twenty-five shapes. The six operators, dilation, erosion, opening, closing, fill, and majority, were applied to remove noise without modifying the shape. Experimental results, discussed subjectively, show that when two or more operators are combined they significantly enhance the images.
Motawa et al. [30] merged morphological operators and connected components to segment words into characters after detecting and correcting slanted strokes. The algorithm went through three steps: a closing filter followed by opening, singularities, and regularities. A morphological filter can also temporarily fix small gaps in words while applying character segmentation methods, as in [14].
Connected Components (CCs)
Measuring distances between connected components in Arabic words, AlKhateeb et al. introduced a method using a vertical histogram and connected components to segment sub-words. The technique analyzed the line to decide whether space areas correspond to one word or two. It was applied to 200 images from the IFN/ENIT database and achieved an accuracy of 85% [31].
To resolve the sub-word overlapping issue in Arabic handwritten script, Ghaleb et al. proposed a pushing technique, which includes connected component labeling and thresholding to obtain a clear vertical segmentation. This method was evaluated on IESK-ArDB, KHATT, and IFN/ENIT as Arabic examples and on a Persian database, obtaining an accuracy of 72% [13]. A further technique analyzed sub-word distances: distances between bounding boxes, divided into main and auxiliary connected components, were used to determine cutting points between boxes, and features were extracted to separate the connected components. The algorithm was tested on 450 images from IESK-ArDB without declaring the overall segmentation accuracy [32].
METHODS AND TECHNIQUES
The framework of the model is composed of three stages, pre-processing, segmentation, and evaluation, shown in Figure 5. The objectives of the model are to enhance and filter binary images containing disconnected gaps and to segment words into sub-words. The methodology is based on the assumption that gaps occur within words without separating them into new groups.
Select a Database
The most effective factors in producing real-life handwriting recognition systems for the marketplace are the availability and unconstrained nature of a large database, together with benchmark evaluation against a standard ground truth.
Many studies have appeared on Arabic handwritten word segmentation; however, datasets were collected locally and evaluated privately, so it was not possible to compare results across databases. Although the IFN/ENIT database is well known and available, it does not have a ground truth that clarifies character or sub-word positions.
Thus, the IFN/ENIT database does not meet the requirements of a segmentation-based process. Despite this, [12] used this database by adding a manual file to evaluate segmentation results.
Therefore, to evaluate the proposed CGs model automatically, the IESK-ArDB database [33] was selected.
Model Algorithm
This section introduces the proposed CGs model. The model was implemented using two stages, pre-processing and segmentation, to resolve gaps in Arabic handwritten words, and was then evaluated against the ground truth. The steps of the Connect Gaps model for Arabic handwriting are shown in Figure 6.
Pre-processing stage
Pre-processing is the first stage applied in OCR systems. Its main objective is to prepare and enhance the images for the subsequent stages. Before handwriting images are segmented or recognized by the machine, many processes are required to enhance and prepare them, such as filtering and noise removal. Noise may be due to low-quality paper, ink fade, or irregular hand movement [34]. In the model, the pre-processing stage contains binarization, morphological operation, and skeletonization steps, discussed in more detail in the following subsections.
Binarization
Binarization is used to generate a binary image of zeros and ones depending on a threshold value. Since the images in the database are stored in grayscale, Otsu's method [35] was applied; it selects the threshold value automatically so as to minimize intra-class variance.
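A minimal sketch of this step (the paper used MATLAB; this scikit-image version is an illustrative assumption, not the authors' code):

```python
import numpy as np
from skimage import io
from skimage.filters import threshold_otsu

def binarize(path):
    """Binarize a grayscale word image with Otsu's automatic threshold."""
    gray = io.imread(path, as_gray=True)   # grayscale values in [0, 1]
    t = threshold_otsu(gray)               # threshold minimizing intra-class variance
    # Ink is darker than the page, so foreground pixels fall below t.
    return (gray < t).astype(np.uint8)     # 1 = ink, 0 = background
```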
Morphological operators
Morphological operators produce a new image by modifying a binary image pixel by pixel using mathematical operators. In the algorithm shown in Figure 7, the model requires a binary image before applying the morphological operators to connect gaps. Three operators are merged to bridge the gaps in Arabic words; they are described as follows, with a Python sketch after the list:
1. Add pixels to expand the outer edge of the word. Repeat the process on the outer edge four times.
2. Set a 0-valued pixel to 1 if it has two 1-valued neighbours. Repeat the process on the outer edge twice.
3. Test the 8-connectivity of the pixel: if five or more of its neighbours are 1-valued, set the pixel to 1; otherwise, set it to 0. Repeat the process on the outer edge twice.
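A sketch of the three operators in Python (an assumed SciPy re-implementation of the MATLAB-style morphology described above, not the authors' code):

```python
import numpy as np
from scipy.ndimage import binary_dilation, convolve

# Kernel that counts a pixel's 8-neighbourhood (centre excluded).
KERNEL8 = np.array([[1, 1, 1],
                    [1, 0, 1],
                    [1, 1, 1]])

def connect_gaps(bw):
    bw = bw.astype(bool)
    # Step 1: expand the outer edge of the word (dilation), 4 iterations.
    bw = binary_dilation(bw, iterations=4)
    # Step 2: set a 0 pixel to 1 if it has at least two 1-valued neighbours.
    for _ in range(2):
        n = convolve(bw.astype(np.uint8), KERNEL8, mode='constant')
        bw = bw | (~bw & (n >= 2))
    # Step 3: majority test on the 8-neighbourhood: a pixel becomes 1
    # iff five or more of its neighbours are 1, otherwise 0.
    for _ in range(2):
        n = convolve(bw.astype(np.uint8), KERNEL8, mode='constant')
        bw = n >= 5
    return bw.astype(np.uint8)
```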
Skeletonization
Commonly known as thinning, skeletonization is an important and crucial step in OCR applications. The skeleton step reduces a binary image to a 1-pixel width using 4-connectivity while preserving the basic structure of the image. The CGs model used the fast skeletonization implemented in [36], which effectively extracts the word skeleton.
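A one-line equivalent using scikit-image, whose 2-D skeletonization implements the Zhang-Suen thinning of [36], could look like this sketch:

```python
from skimage.morphology import skeletonize

def skeleton(bw):
    # Reduce the gap-connected binary word to a 1-pixel-wide skeleton
    # while preserving its basic structure and connectivity.
    return skeletonize(bw.astype(bool))
```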
Segmentation Stage
Segmentation is a process that attempts to decompose an image into subunits intelligently after analyzing its content [37]. It is not an easy process, and it depends on the various writing styles. Connected components (CC) analysis is one strategy for segmenting an image by using bounding box analysis [38]. The CC technique is used to bound each connected component in a box. The CC technique was improved in [39] and [40], and a simple and efficient design was introduced in [41], which was used for labeling in the CGs algorithm.
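A sketch of this stage, assuming SciPy's 8-connected labelling as an illustrative substitute for the labelling algorithm of [41], and using the 30-pixel diacritic filter mentioned later in Section 4:

```python
import numpy as np
from scipy.ndimage import label, find_objects

def segment_subwords(bw, min_size=30):
    # 8-connected component labelling of the binary word image.
    labels, _ = label(bw, structure=np.ones((3, 3)))
    boxes = []
    for i, sl in enumerate(find_objects(labels), start=1):
        if sl is None:
            continue
        if (labels[sl] == i).sum() < min_size:   # drop dots / small diacritics
            continue
        y0, y1 = sl[0].start, sl[0].stop
        x0, x1 = sl[1].start, sl[1].stop
        boxes.append((x0, y0, x1, y1))           # upper-left, bottom-right
    # Sort right-to-left to follow Arabic reading order.
    return sorted(boxes, key=lambda b: -b[0])
```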
Figure 7: CGs algorithm.
EXPERIMENTAL RESULTS AND DISCUSSION
Several experiments were conducted in this section to evaluate the proposed model. The freely available database was used to validate the model, and the results of the proposed model were compared with existing algorithms. The results are summarized in Table 2.
The overall segmentation accuracy of the model is shown in Table 2: the proposed model obtained an accuracy of 88% and a segmentation error rate of 12%.
Segmentation errors occurred due to variation in writing style, especially excessively long pen lift-off spaces, as shown in Figure 8 and Figure 9, where separated components led to errors and influenced the segmentation result. On the other hand, touching components with closed spaces also led to segmentation errors. As an obvious limitation, the results of the current model show that the handwriting of Arabic words, especially dots and Hamza, creates more segmentation errors due to variation in handwriting styles (Figure 8 and Figure 9).
The proposed model expands Arabic words, which can cause close parts of letters to touch. In addition, small parts such as dots and Hamza can become bigger, which leads to misclassification. For this reason, the CGs model may not be suitable for segmenting words in documents, where words might touch each other.
Database
The freely available database was used in the segmentation stage, and the proposed model was automatically evaluated using the ground truth files attached to the IESK-ArDB database. The section below describes the database and its contents.
The IESK-ArDB dataset contains 4,000 word images at 350 dpi resolution in various sizes. The forms were designed using 8 pages, and each page was filled with eight handwritten words. The dataset was collected from 22 writers from several Arabic countries. Some greyscale samples of the dataset are shown in Figure 10. The database covers the distribution of basic Arabic letters in various positions (begin, end, isolated, middle). More details on this database can be found in [23]. The database is divided into 12 parts. Each part keeps both the inputs, as grayscale images in BMP format, and the ground truth information in XML format. Only one image, identified by the ID Q01-006, lacks complete information in the ground truth. The ground truth contains the ID, LetterLabel, baseline, sub-words, and further information such as age and gender. Sub-word tags contain the pixels (ax, ay) and (bx, by), establishing the upper-left and bottom-right bounding coordinates. The LetterLabel tag contains the name, shape, and Unicode for each character in the word.
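A hypothetical reader for the sub-word boxes in this ground truth; the exact XML tag and attribute names are assumptions based on the description above:

```python
import xml.etree.ElementTree as ET

def read_subword_boxes(xml_path):
    root = ET.parse(xml_path).getroot()
    boxes = []
    for sw in root.iter("SubWord"):                    # assumed tag name
        ax, ay = int(sw.get("ax")), int(sw.get("ay"))  # upper-left corner
        bx, by = int(sw.get("bx")), int(sw.get("by"))  # bottom-right corner
        boxes.append((ax, ay, bx, by))
    return boxes
```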
We randomly selected 1,131 images from IESK-ArDB to test and evaluate the proposed model. A word in the database may contain one or more sub-words; the chosen images included 2,652 sub-words. Most Arabic words in the database used in the experiments consisted of more than one piece. The words selected for the experiments were composed of different categories, as the histogram in Figure 11 shows.
Model Implementation
The proposed model was implemented using MATLAB R2018a on a Windows 10 Pro 64-bit operating system with 6.00 GB of RAM and a 1.80 GHz Core i5 CPU.
Calculation Metrics
The most common image segmentation evaluation metrics used to compare results are accuracy, precision, recall, F-score, and specificity. The metrics used for model evaluation are discussed below.
1. Accuracy: measures the ratio of true positive and true negative classes over all classes examined, i.e., how often the classifier is correct.

Accuracy = (TP + TN) / (TP + TN + FP + FN)

2. Precision: measures the ratio of true positive classes over all predicted positive classes.

Precision = TP / (TP + FP)

3. Recall: measures the ratio of true positives over all actual positive classes. Recall is also known as sensitivity.

Recall = TP / (TP + FN)

4. Specificity: measures the ratio of true negatives over all actual negative classes.

Specificity = TN / (TN + FP)

where TP denotes true positives, TN true negatives, FP false positives, and FN false negatives.
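The four metrics, plus the standard F-score reported in Table 2, can be computed directly from the counts, as in this sketch:

```python
def metrics(tp, tn, fp, fn):
    return {
        "accuracy":    (tp + tn) / (tp + tn + fp + fn),
        "precision":   tp / (tp + fp),
        "recall":      tp / (tp + fn),            # a.k.a. sensitivity
        "specificity": tn / (tn + fp),
        # Harmonic mean of precision and recall.
        "f_score":     2 * tp / (2 * tp + fp + fn),
    }
```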
Results and Discussion
The proposed model was evaluated on 1,131 images from IESK-ArDB; sub-words from parts 1 to 5 were chosen to test the model. Three steps were applied to bridge the gaps. In this experiment, some sources of error were observed; the main one was touching between contiguous sub-words in the same word after applying the expansion operators.
The proposed model utilized a large dataset covering a variety of handwriting styles, so as to be more compatible with real-life applications and to obtain realistic results.
The proposed model was compared with the findings of related methods in the literature and achieved the best accuracy, 88%. In addition, the proposed model was evaluated on more data than the related works, and it was evaluated automatically using the standard ground truth of the database. This result shows that the proposed model can connect small gaps and segment the words into sub-words properly.
The bounding box detection technique is used to evaluate and test the proposed model. Cases of evaluation are shown in Figure 12, comparing the proposed model against ground truth boxes: the red box represents the ground truth and the green box the proposed model. The three cases of evaluation are excellent, good, and poor. For example, when the connected components were evaluated, sub-words were bounded using a minimal bounding box surrounding the word exactly after removing the dots. The ground truth, however, bounds the character and the dots as one word using maximal boxes, and some handwriting styles place dots far from the characters (Figure 13), which minimizes the overlap ratio. The model was unable to detect sub-words amounting to 13% of the sub-word total. These undetected sub-words, shown in Figure 14 and reflected in the specificity score, appeared because components of size less than 30 pixels were lost or dropped; the proposed model removed such small components, which may contain dots, Hamza, or other small characters. Touching between sub-words leads to under-segmentation, as shown in Figure 14 and Figure 15; under-segmentation occurred when the number of boxes was less than in the ground truth. Figures 18, 20, and 22 show samples of gaps in words caused by pen lift-off or document scanning. After applying CGs, the gaps were connected, as presented in Figures 19, 21, and 23. The connected gaps led to true segmentation, shown in Figure 16 and Figure 17, where the number of boxes produced by the proposed model equals the number in the ground truth.
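The overlap between a predicted box and a ground-truth box can be scored with the usual intersection-over-union; the sketch below uses a 0.5 acceptance threshold, which is an assumption, since the paper does not state the exact threshold used:

```python
def iou(a, b):
    """a, b: boxes as (x0, y0, x1, y1), upper-left and bottom-right corners."""
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def matches(pred_box, truth_box, thresh=0.5):
    return iou(pred_box, truth_box) >= thresh
```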
However, some words had such long spaces that CGs was unable to connect them, as shown in Figure 24 and Figure 25, which led to over-segmentation, as shown in Figure 26 and Figure 27. Over-segmentation occurred when the number of boxes produced by the proposed model was greater than the number in the ground truth.
CONCLUSIONS
The Arabic handwriting model based on segmentation was presented. Pre-processing steps were applied to convert images into binary mode and to connect the small gaps, making the images ready for segmentation. Subsequently, the connected component technique was used to segment words into sub-words using bounding boxes. The bounding boxes were compared against the ground truth files of the IESK-ArDB database to evaluate the model. The pre-processing transformation enhanced the segmentation results.
In future work, the model should be developed by investigating and resolving handwriting issues such as overlapping, touching, and ligatures in sub-words. Resolving these issues requires intensive research, for example by developing further pre-processing filters to connect long gaps. As a suggestion, the proposed model could be integrated with the seam carving algorithm in [42] to resolve the touching problem. For resolving ligatures, the techniques in [43] and [44] can be applied. Arabic handwriting segmentation requires extensive research to produce suitable solutions for segmenting connected components into characters correctly, enabling handwriting recognition in real-life services. Future trends in Arabic character recognition are discussed in more detail in the survey [45].
CONFLICTS OF INTEREST
The authors declare no conflict of interest.
Figure 1: Characters disjoining words.
Figure 2: "Barleen", an Arabic word composed of two sub-words.
Figure 3: "khalyia", an Arabic word composed of one sub-word.
Figure 4: (a) Printed word; (b), (c), and (d) the same word handwritten in different styles.
Figure 5: The proposed model framework.
Figure 6: Steps followed in the Connect Gaps (CGs) model.
Figure 8: Over-segmentation due to pen lift-off over long spaces.
Figure 9: Over-segmentation due to the large size of dots and Hamza.
Figure 10: IESK-ArDB database samples.
Figure 11: Histogram of sub-word categories.
Figure 12: Three cases of bounding box detection evaluation.
Figure 13: A dot far from the sub-word body; minimal overlap ratio.
Figure 14: Size less than 30; losing parts.
Figure 15: Under-segmentation; two touching sub-words.
Figure 16: One word containing 4 sub-words; true segmentation.
Figure 17: One word containing 1 sub-word; true segmentation.
Figure 18: A circle shows the gap position before CGs.
Figure 19: A circle shows the gap connected after CGs.
Figure 20: Circles show gap positions before CGs.
Figure 21: Circles show gaps connected after CGs.
Figure 22: A circle shows the gap position before CGs.
Figure 23: The circle shows the gap connected after CGs.
Figure 24: CGs unable to connect the gap; too long a horizontal space.
Figure 25: CGs unable to connect the gap; too long a vertical space.
Figure 26: Over-segmentation; long space and large dots.
Figure 27: Over-segmentation; the large size of Hamza and dots.
Table 1: Summary of related works.

# Ref | Year | Database    | Images Num. | Techniques             | Seg. Type
[29]  | 2008 | Private     | 25          | 6 operators            | Pattern
[30]  | 1997 | Private     | Few hundred | Closing-opening        | One word
[12]  | 2012 | IFN/ENIT    | 1250        | Morphological filter   | Sub-words
[31]  | 2009 | IFN/ENIT    | 200         | CCs                    | Sub-words
[13]  | 2015 | 4 databases | 400         | CCs & threshold        | Sub-words
[32]  | 2016 | IESK-ArDB   | 450         | BBox distance analysis | Sub-words
Table 2: Segmentation evaluation (%) of the model compared with related works.

Ref No#        | Accuracy        | Precision   | Recall      | Specificity | F-score
Proposed Model | 88              | 89          | 99          | 13          | 93
[29]           | Made as remarks | Not counted | Not counted | Not counted | Not counted
[30]           | 81.88           | Not counted | Not counted | Not counted | Not counted
[12]           | 70              | Not counted | Not counted | Not counted | Not counted
[31]           | 85              | Not counted | Not counted | Not counted | Not counted
[13]           | 72              | Not counted | Not counted | Not counted | Not counted
[32]           | Not declared    | Not counted | Not counted | Not counted | Not counted
ACKNOWLEDGMENTS

The authors would like to thank Laslo Dings and his colleagues for their efforts and assistance in providing access to the full database through the internet.
REFERENCES

[1] A. A. A. Ali and M. Suresha, "Survey on Segmentation and Recognition of Handwritten Arabic Script," 2019.
[2] R. Plamondon and S. N. Srihari, "Online and off-line handwriting recognition: a comprehensive survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 1, pp. 63-84, 2000.
[3] M. T. Parvez and S. A. Mahmoud, "Offline Arabic handwritten text recognition: a survey," ACM Computing Surveys (CSUR), vol. 45, no. 2, pp. 1-35, 2013.
[4] M. Shatnawi, "Off-line handwritten Arabic character recognition: a survey," in Proceedings of the International Conference on Image Processing, Computer Vision, and Pattern Recognition (IPCV), 2015, p. 52.
[5] A. M. Zeki, "The segmentation problem in Arabic character recognition: the state of the art," in 2005 International Conference on Information and Communication Technologies, IEEE, 2005, pp. 11-26.
[6] I. Yousif and A. Shaout, "Off-line handwriting Arabic text recognition: a survey," International Journal of Advanced Research in Computer Science and Software Engineering, vol. 4, no. 9, 2014.
[7] S. Impedovo, L. Ottaviano, and S. Occhinegro, "Optical character recognition: a survey," International Journal of Pattern Recognition and Artificial Intelligence, vol. 5, no. 01n02, pp. 1-24, 1991.
[8] L. M. Lorigo and V. Govindaraju, "Offline Arabic handwriting recognition: a survey," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 28, no. 5, pp. 712-724, 2006.
[9] C. C. Tappert, C. Y. Suen, and T. Wakahara, "The state of the art in online handwriting recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12, no. 8, pp. 787-808, 1990.
[10] M. S. Khorsheed, "Off-line Arabic character recognition: a review," Pattern Analysis & Applications, vol. 5, no. 1, pp. 31-45, 2002.
[11] L. Dinges, A. Al-Hamadi, M. Elzobi, Z. Al Aghbari, and H. Mustafa, "Offline automatic segmentation based recognition of handwritten Arabic words," International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 4, no. 4, pp. 131-143, 2011.
[12] F. B. Samoud, S. S. Maddouri, and H. Amiri, "Three evaluation criteria towards a comparison of two character segmentation methods for handwritten Arabic script," in 2012 International Conference on Frontiers in Handwriting Recognition, IEEE, 2012, pp. 774-779.
[13] H. Ghaleb, P. Nagabhushan, and U. Pal, "Segmentation of overlapped handwritten Arabic sub-words," International Journal of Computer Applications, vol. 975, p. 8887, 2015.
[14] A.-S. Atallah and K. Omar, "A comparative study between methods of Arabic baseline detection," in 2009 International Conference on Electrical Engineering and Informatics, vol. 1, IEEE, 2009, pp. 73-77.
[15] Y. M. Alginahi, "A survey on Arabic character segmentation," International Journal on Document Analysis and Recognition (IJDAR), vol. 16, no. 2, pp. 105-126, 2013.
[16] H. Almuallim and S. Yamaguchi, "A method of recognition of Arabic cursive handwriting," IEEE Transactions on Pattern Analysis and Machine Intelligence, no. 5, pp. 715-722, 1987.
[17] M. Cheriet, "Visual recognition of Arabic handwriting: challenges and new directions," in Summit on Arabic and Chinese Handwriting Recognition, Springer, 2006, pp. 1-21.
[18] A. A. Aburas and M. E. Gumah, "Arabic handwriting recognition: challenges and solutions," in 2008 International Symposium on Information Technology, vol. 2, IEEE, 2008, pp. 1-6.
[19] P. Ahmed and Y. Al-Ohali, "Arabic character recognition: progress and challenges," Journal of King Saud University - Computer and Information Sciences, vol. 12, pp. 85-116, 2000.
[20] A. Lawgali, M. Angelova, and A. Bouridane, "A Framework for Arabic Handwritten Recognition Based on Segmentation," 2014.
[21] D. Xiang, H. Yan, X. Chen, and Y. Cheng, "Offline Arabic handwriting recognition system based on HMM," in 2010 3rd International Conference on Computer Science and Information Technology, vol. 1, IEEE, 2010, pp. 526-529.
[22] A. Farahmand, H. Sarrafzadeh, and J. Shanbehzadeh, "Document image noises and removal methods," 2013.
[23] T. Abu-Ain, S. N. H. S. Abdullah, B. Bataineh, K. Omar, and A. Abu-Ein, "A novel baseline detection method of handwritten Arabic-script documents based on sub-words," in International Multi-Conference on Artificial Intelligence Technology, Springer, 2013, pp. 67-77.
[24] F. Farooq, V. Govindaraju, and M. Perrone, "Pre-processing methods for handwritten Arabic documents," in Eighth International Conference on Document Analysis and Recognition (ICDAR'05), IEEE, 2005, pp. 267-271.
[25] A. Baz and M. Baz, "A New Approach and Algorithm for Baseline Detection of Arabic Handwriting," 2015.
[26] J. Song, R. L. Stevenson, and E. J. Delp, "The use of mathematical morphology in image enhancement," in Proceedings of the 32nd Midwest Symposium on Circuits and Systems, IEEE, 1989, pp. 67-70.
[27] N. H. Barna, T. I. Erana, S. Ahmed, and H. Heickal, "Segmentation of heterogeneous documents into homogeneous components using morphological operations," in 2018 IEEE/ACIS 17th International Conference on Computer and Information Science (ICIS), 2018, pp. 513-518.
[28] B. Goyal, A. Dogra, S. Agrawal, and B. Sohi, "Two-dimensional gray scale image denoising via morphological operations in NSST domain & bitonic filtering," Future Generation Computer Systems, vol. 82, pp. 158-175, 2018.
[29] N. Jamil, T. M. T. Sembok, and Z. A. Bakar, "Noise removal and enhancement of binary images using morphological operations," in 2008 International Symposium on Information Technology, vol. 4, IEEE, 2008, pp. 1-6.
[30] D. Motawa, A. Amin, and R. Sabourin, "Segmentation of Arabic cursive script," in Proceedings of the Fourth International Conference on Document Analysis and Recognition, vol. 2, IEEE, 1997, pp. 625-628.
[31] J. H. AlKhateeb, J. Jiang, J. Ren, and S. Ipson, "Component-based segmentation of words from handwritten Arabic text," International Journal of Computer Systems Science and Engineering, vol. 5, no. 1, 2009.
[32] I. A. Humied, "Segmentation accuracy for offline Arabic handwritten recognition based on bounding box algorithm," International Journal of Computer Science and Network Security (IJCSNS), vol. 16, no. 9, p. 98, 2016.
[33] M. Elzobi, A. Al-Hamadi, Z. Al Aghbari, and L. Dings, "IESK-ArDB: a database for handwritten Arabic and an optimized topological segmentation approach," International Journal on Document Analysis and Recognition (IJDAR), vol. 16, no. 3, pp. 295-308, 2013.
[34] N. Arica and F. T. Yarman-Vural, "An overview of character recognition focused on off-line handwriting," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 31, no. 2, pp. 216-233, 2001.
[35] N. Otsu, "A threshold selection method from gray-level histograms," IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
[36] T. Zhang and C. Y. Suen, "A fast parallel algorithm for thinning digital patterns," Communications of the ACM, vol. 27, no. 3, pp. 236-239, 1984.
[37] Y. Osman, "Segmentation algorithm for Arabic handwritten text based on contour analysis," in 2013 International Conference on Computing, Electrical and Electronic Engineering (ICCEEE), IEEE, 2013, pp. 447-452.
[38] R. G. Casey and E. Lecolinet, "A survey of methods and strategies in character segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 7, pp. 690-706, 1996.
[39] A. Rosenfeld and J. Pfaltz, "Sequential operations in digital picture processing," Journal of the ACM, vol. 13, pp. 471-494, 1966.
[40] H. Samet and M. Tamminen, "An improved approach to connected component labeling of images," in International Conference on Computer Vision and Pattern Recognition, vol. 318, 1986, p. 312.
[41] L. Di Stefano and A. Bulgarelli, "A simple and efficient connected components labeling algorithm," in Proceedings of the 10th International Conference on Image Analysis and Processing, IEEE, 1999, pp. 322-327.
[42] L. Berriche and A. Al-Mutairy, "Seam carving-based Arabic handwritten sub-word segmentation," Cogent Engineering, vol. 7, no. 1, p. 1769315, 2020.
[43] N. Essa, E. El-Daydamony, and A. A. Mohamed, "Enhanced technique for Arabic handwriting recognition using deep belief network and a morphological algorithm for solving ligature segmentation," ETRI Journal, vol. 40, no. 6, pp. 774-787, 2018.
[44] A. Q. M. S. Zerdoumi Saber, A. Kamsin, and S. Hakak, "Efficient Approach to Segment Ligatures and Open Characters in Offline Arabic Text," 2017.
[45] L. S. Alhomed and K. M. Jambi, "A survey on the existing Arabic optical character recognition and future trends," International Journal of Advanced Research in Computer and Communication Engineering (IJARCCE), vol. 7, no. 3, pp. 78-88, 2018.
| [] |
[
"Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion",
"Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion"
] | [
"Disong Wang \nHuman-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina\n",
"Songxiang Liu \nHuman-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina\n",
"Lifa Sun lfsun@speechx.cn \nSpeechX Limited\nShenzhenChina\n",
"Xixin Wu \nDepartment of Engineering\nUniversity of Cambridge\nUK\n",
"Xunying Liu \nHuman-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina\n",
"Helen Meng hmmeng@se.cuhk.edu.hk \nHuman-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina\n"
] | [
"Human-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina",
"Human-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina",
"SpeechX Limited\nShenzhenChina",
"Department of Engineering\nUniversity of Cambridge\nUK",
"Human-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina",
"Human-Computer Communications Laboratory The\nChinese University of Hong Kong\nHong Kong SARChina"
] | [] | Though significant progress has been made for the voice conversion (VC) of typical speech, VC for atypical speech, e.g., dysarthric and second-language (L2) speech, remains a challenge, since it involves correcting for atypical prosody while maintaining speaker identity. To address this issue, we propose a VC system with explicit prosodic modelling and deep speaker embedding (DSE) learning. First, a speech-encoder strives to extract robust phoneme embeddings from atypical speech. Second, a prosody corrector takes in phoneme embeddings to infer typical phoneme duration and pitch values. Third, a conversion model takes phoneme embeddings and typical prosody features as inputs to generate the converted speech, conditioned on the target DSE that is learned via speaker encoder or speaker adaptation. Extensive experiments demonstrate that speaker adaptation can achieve higher speaker similarity, and the speaker encoder based conversion model can greatly reduce dysarthric and non-native pronunciation patterns with improved speech intelligibility. A comparison of speech recognition results between the original dysarthric speech and converted speech show that absolute reduction of 47.6% character error rate (CER) and 29.3% word error rate (WER) can be achieved. | 10.21437/interspeech.2021-285 | [
"https://arxiv.org/pdf/2011.01678v2.pdf"
] | 235,458,466 | 2011.01678 | e85e6f093b6bec9d73bb537bed04830b3c0f8a46 |
Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion
Disong Wang
Human-Computer Communications Laboratory The
Chinese University of Hong Kong
Hong Kong SAR, China
Songxiang Liu
Human-Computer Communications Laboratory The
Chinese University of Hong Kong
Hong Kong SAR, China
Lifa Sun lfsun@speechx.cn
SpeechX Limited
Shenzhen, China
Xixin Wu
Department of Engineering
University of Cambridge
UK
Xunying Liu
Human-Computer Communications Laboratory The
Chinese University of Hong Kong
Hong Kong SAR, China
Helen Meng hmmeng@se.cuhk.edu.hk
Human-Computer Communications Laboratory The
Chinese University of Hong Kong
Hong Kong SAR, China
Learning Explicit Prosody Models and Deep Speaker Embeddings for Atypical Voice Conversion
Index Terms: dysarthric speech reconstruction, accent conversion, prosodic modelling, speaker encoder, speaker adaptation
Introduction
Voice conversion (VC) is a technique for converting non-linguistic and para-linguistic information, such as speaker identity [1], prosody [2] and accent [3], with potential applications in assistive speech technologies and language acquisition technologies [4, 5]. This work aims to apply VC techniques to convert atypical speech to a typical form. Specifically, we consider two types of atypical speech [6]: dysarthric speech and second-language (L2) speech. Dysarthric speech results from neuro-motor disorders [7] that cause disturbances in muscular control during articulation. L2 speech is spoken by L2 learners with non-native accents [8]. Both dysarthric and L2 speech exhibit atypical prosody, imprecise articulation and reduced intelligibility. These may engender substantial communication difficulties for dysarthric patients and hinder the pronunciation clarity of L2 learners.
To enhance the quality of the atypical speech, our previous work [9,10] presented an end-to-end VC (E2E-VC) method, where a speech-encoder is used to extract linguistic representations, e.g., phoneme embeddings, from the atypical speech, and a text-to-speech (TTS) decoder with attention maps phoneme embeddings to typical speech features. The speaker identity of the converted speech is controlled by the target speaker embedding produced by a speaker encoder [10]. Though high-fidelity speech can be generated, the prosody, speaker similarity and speech intelligibility require further improvement.
In this paper, we propose an improved VC system, where the previous TTS-decoder with attention is broken into a prosody corrector and a conversion model. The prosody corrector contains phoneme duration and pitch predictors that are introduced to explicitly model the prosody for predicting typical phoneme duration and pitch features. The conversion model maps pitch and phoneme embeddings expanded by the duration to mel-spectrograms, conditioned on the target deep speaker embedding (DSE). To obtain effective DSE that captures speaker characteristics, two different approaches are investigated in our work: (1) Speaker encoder, where a speaker classifier trained independently is adopted to extract DSE from the reference target speech; (2) Speaker adaptation, where the DSE is jointly learned and fine-tuned with a pre-trained multispeaker conversion model by using the target speech. We assume that the DSE obtained using the two approaches contains no prosody cues, so prosody and speaker identity are controlled by individual conditions, i.e., the prosody is controlled by phoneme duration and pitch, and speaker identity is controlled by the DSE. As a result, with the predicted typical prosody features and the effective DSE as conditions, the converted speech has typical pronunciation patterns with high speaker similarity and improved speech intelligibility.
The main advantages of the proposed approach include: (1) Explicit prosody correction to reduce dysarthric or non-native pronunciation patterns; (2) Improvements over previous methods [9,10] in generating speech with enhanced speaker similarity, naturalness and intelligibility; (3) Potential extensibility to other atypical voice conversion and enhancement tasks.
Related work
Dysarthric speech reconstruction (DSR) aims to convert dysarthric speech into near-normal speech with higher intelligibility and naturalness. Various VC techniques have been applied to DSR. Rule-based VC modifies the temporal or frequency characteristics of speech according to specific rules [11]. Statistical VC builds a mapping function between the acoustic features of dysarthric and normal speech [9, 12-14]. Significant progress has been achieved, but the converted speech has low speaker similarity.
Accent conversion (AC) aims to convert the non-native L2 accented speech to become near-native speech. [15] proposed a GMM based VC by using vocal tract length normalization and linguistic content similarity matching. [16,17] utilized phonetic posteriorgrams (PPG) of the native speaker to generate target acoustic features. Although the non-native accent can be reduced, these methods require native reference utterances that may not be readily available. E2E-VC [10] can effectively solve this issue, but speaker similarity needs to be improved as well.
We intend to convert dysarthric and L2 speech respectively into near-normal and near-native speech with typical prosody, high speaker similarity and improved intelligibility. We draw on multi-speaker TTS [18, 19], which uses prosody features for speech synthesis and DSE, obtained via a speaker encoder or speaker adaptation, to control the speaker identity. Inspired by Deep Voice 2 [18], we introduce predictors of phoneme duration and pitch to attain typical values, in order to generate speech with typical (i.e., normal or native) prosody characteristics.
Baseline method
In this paper, we adopt the previously proposed E2E-VC for DSR [9] and AC [10] as the baseline method. E2E-VC is composed of three components: (1) A sequence-to-sequence (seq2seq) based TTS model, e.g., Tacotron [20], is first trained with transcribed typical speech. The TTS-decoder with attention implicitly models the prosody, e.g., phoneme duration and pitch, which are inflexible to control during inference. (2) Given the transcribed atypical speech, a speech-encoder is trained to produce similar linguistic representations with those produced by the TTS-encoder. (3) By concatenating the speech-encoder and TTS-decoder with attention, an E2E-VC is formed to convert atypical speech to its typical version. Note that speaker similarity issue was not considered in the DSR work [9], so we extend the E2E-VC based DSR with the speaker encoder introduced in AC [10] to preserve speaker identity.
Proposed method
This section elaborates on the proposed VC approach with explicit prosodic modelling and DSE learning. The main differences from the baseline E2E-VC approach lie in two aspects: (1) Prosody is modelled in an explicit manner, so the prosody of the converted speech can be effectively controlled and corrected; (2) Speaker adaptation is proposed to obtain a more effective DSE that is strongly related with speaker characteristics, leading to higher speaker similarity. As shown in Figure 1, the whole VC system consists of three key modules, i.e., speech-encoder, prosody corrector and conversion model.
Speech-encoder for phoneme embeddings extraction
To preserve the linguistic content of the original atypical speech, a speech-encoder is used to extract robust linguistic representations. Following [9, 10], the speech-encoder adopts a seq2seq network to predict the phoneme sequence. The speech-encoder is first pre-trained on large-scale typical speech data, then fine-tuned on the atypical speech of the dysarthric or L2 speaker s_k to improve phoneme prediction accuracy. The pre-trained and fine-tuned speech-encoders are denoted as Φ_p and Φ_{s_k}, respectively. We adopt the speech-encoder outputs, which represent the phoneme probability distribution, as the phoneme embeddings.
Prosody corrector for explicit prosodic modelling
As atypical speech has atypical prosody, e.g., atypical phoneme duration and pitch values, we propose explicit prosodic modelling by designing a prosody corrector to amend the atypical prosody to its typical version.

Figure 1: Diagram of the proposed VC system: (1) DSE e is extracted from the speaker encoder, which corresponds to Enc-CM; (2) DSE ē is obtained by joint learning with the conversion model, which corresponds to Ada-CM.

As shown in Figure 1, the prosody corrector contains the phoneme duration and pitch predictors, where the pitch can be described by the fundamental frequency (F0). Both the duration and F0 predictors are trained with an L1 loss using typical speech of a single speaker: (1) For duration prediction, the inputs are phoneme embeddings extracted by the speech-encoder Φ_p in teacher-forcing mode. The targets are ground-truth phoneme durations, which are obtained from pairs of text and audio by forced alignment with the Montreal Forced Aligner [21].
(2) For F0 prediction, the inputs are the phoneme embeddings p expanded using the ground-truth phoneme durations, and the targets are the ground-truth F0 values, denoted by v, which has the same length as p. When the duration and F0 predictors are well trained, the prosody corrector is expected to infer typical phoneme duration and F0 values that are used to replace their abnormal counterparts for typical speech generation.
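As a concrete illustration of the duration-based expansion and L1-trained predictors described above, here is a minimal PyTorch sketch; the module design and all names (expand_by_duration, Predictor) are illustrative assumptions rather than the paper's actual implementation (which uses BGRU plus convolutional layers, as detailed later):

import torch
import torch.nn as nn

def expand_by_duration(phoneme_emb, durations):
    # Repeat each phoneme embedding by its integer frame duration,
    # mapping a phoneme-level sequence to a frame-level sequence.
    # phoneme_emb: (num_phonemes, dim); durations: (num_phonemes,) ints.
    return torch.repeat_interleave(phoneme_emb, durations, dim=0)

class Predictor(nn.Module):
    # Illustrative stand-in for the duration/F0 predictors;
    # a single bidirectional GRU suffices for this sketch.
    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                  # x: (batch, seq, dim)
        h, _ = self.rnn(x)
        return self.out(h).squeeze(-1)     # one scalar per step

# Training both predictors with L1 loss, as described above (dummy data):
dur_predictor, f0_predictor = Predictor(), Predictor()
l1 = nn.L1Loss()
emb = torch.randn(1, 20, 512)              # phoneme embeddings (teacher-forced)
gt_dur = torch.randint(1, 10, (20,))       # ground-truth durations in frames
dur_loss = l1(dur_predictor(emb), gt_dur.float().unsqueeze(0))
p = expand_by_duration(emb[0], gt_dur).unsqueeze(0)   # frame-level embeddings
gt_f0 = torch.randn(1, p.size(1))          # ground-truth F0, same length as p
f0_loss = l1(f0_predictor(p), gt_f0)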
Conversion model for speech generation
As shown in Figure 1, we adopt a conversion model with function f and parameters W to generate mel-spectrograms f(p, v; W, e), where the spoken content and duration are both controlled by the expanded phoneme embeddings p, and the pitch and speaker identity are separately controlled by the F0 v and the DSE e; e is repeated and concatenated with p and v for generation. Given the typical speech of a set of speakers S and the atypical speech of a dysarthric or L2 speaker s_k, let T_{s_i} and T_{s_k} denote the sets of mel-spectrogram features for speaker s_i (s_i ∼ S) and speaker s_k, respectively. Two DSE learning approaches are investigated and incorporated into the conversion model, i.e., the speaker encoder based conversion model (Enc-CM) and the speaker adaptation based conversion model (Ada-CM). For clarity, we denote the DSE used in Enc-CM and Ada-CM as e and ē, respectively.
Speaker encoder based conversion model
The speaker encoder is a neural network for speaker verification that produces a fixed-dimensional DSE from the acoustic feature frames of a speech utterance of variable length. It is trained to optimize a generalized end-to-end (GE2E) loss for DSE learning [22], so that the DSEs extracted from utterances of the same speaker and of different speakers have high and low similarity, respectively. The DSE e_{s_i} derived from the speaker encoder is expected to capture the speaker characteristics of s_i. Using typical speech data, Enc-CM is trained to minimize a loss L (e.g., L1 loss) measuring the distance between the predicted and ground-truth mel-spectrograms:
\hat{W}_{SE} = \arg\min_{W} \mathbb{E}_{s_i \sim S,\, a_{i,j} \sim T_{s_i}} \left\{ \mathcal{L}\left(f(p_{i,j}, v_{i,j}; W, e_{s_i}),\, a_{i,j}\right) \right\} \quad (1)
where p_{i,j} are the phoneme embeddings extracted by the speech-encoder Φ_p and expanded by the ground-truth duration, and v_{i,j} and a_{i,j} are the ground-truth F0 and mel-spectrograms for speaker s_i (s_i ∼ S), respectively.
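The following minimal PyTorch sketch shows how one training step of Eq. (1) could look; the linear conversion_model is only a placeholder for the actual frame-to-frame network (the paper uses FC plus BLSTM layers), and all tensor shapes and names are illustrative assumptions:

import torch
import torch.nn as nn

# Placeholder for f(p, v; W, e): any network mapping the concatenated
# per-frame conditions to 80-band mel frames would do for this sketch.
conversion_model = nn.Linear(512 + 1 + 256, 80)
l1 = nn.L1Loss()
opt = torch.optim.Adam(conversion_model.parameters(), lr=1e-3)

def enc_cm_step(p, v, dse, mel_target):
    # One Eq.-(1) step. p: (batch, T, 512) expanded phoneme embeddings,
    # v: (batch, T) ground-truth F0, dse: (batch, 256) fixed DSE from the
    # (frozen) speaker encoder, mel_target: (batch, T, 80).
    e = dse.unsqueeze(1).expand(-1, p.size(1), -1)   # repeat DSE per frame
    x = torch.cat([p, v.unsqueeze(-1), e], dim=-1)   # concatenate conditions
    loss = l1(conversion_model(x), mel_target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()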
At the conversion phase, the atypical speech of the speaker s_k is used as the input of the speaker encoder and of the fine-tuned speech-encoder Φ_{s_k} to extract the DSE e_{s_k} and the phoneme embeddings, respectively. The phoneme embeddings are used as the inputs of the prosody corrector to obtain the expanded phoneme embeddings p̂ with typical duration, together with the typical F0 v̂. Finally, the system generates the converted mel-spectrograms as f(p̂, v̂; Ŵ_SE, e_{s_k}).
Speaker adaptation based conversion model
Instead of obtaining the speaker representation from an external network, the DSE can be jointly learned with the conversion model. The joint learning enables the DSE to directly capture the speaking characteristics related to speech generation, leading to higher speaker similarity. Specifically, Ada-CM involves two-stage training: first, the conversion model is pre-trained with typical speech data, where the DSE e_{s_i} for each speaker s_i is randomly initialized and jointly trained with W:
\hat{W}_{SA}, \{\hat{e}_{s_i}\} = \arg\min_{W, \{e_{s_i}\}} \mathbb{E}_{s_i \sim S,\, a_{i,j} \sim T_{s_i}} \left\{ \mathcal{L}\left(f(p_{i,j}, v_{i,j}; W, e_{s_i}),\, a_{i,j}\right) \right\} \quad (2)
Second, having been trained with the speech data of multiple speakers, the conversion model Ŵ_SA has good generalization capacity and can be fine-tuned well to unseen speakers for DSE learning. Therefore, for the dysarthric or L2 speaker s_k with the expanded phoneme embeddings p_{k,j} and F0 v_{k,j}, speaker adaptation is performed as:
\bar{W}_{SA}, \bar{e}_{s_k} = \arg\min_{W, e_{s_k}} \mathbb{E}_{a_{k,j} \sim T_{s_k}} \left\{ \mathcal{L}\left(f(p_{k,j}, v_{k,j}; W, e_{s_k}),\, a_{k,j}\right) \right\} \quad (3)
where W is initialized by Ŵ_SA and the DSE e_{s_k} is also randomly initialized. After adaptation, target speaker characteristics that are beneficial for speech generation are encoded into ē_{s_k}. Similarly to Enc-CM, at the inference phase we can use the adapted conversion model W̄_SA to generate the converted mel-spectrograms as f(p̂, v̂; W̄_SA, ē_{s_k}), with high speaker similarity achieved by ē_{s_k} and typical prosody controlled by the predicted typical phoneme duration and F0.
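One plausible realization of the two-stage optimization in Eqs. (2)-(3) is sketched below, assuming (as an illustrative choice, not necessarily the paper's) that the per-speaker DSEs are rows of a learnable embedding table:

import torch
import torch.nn as nn

num_speakers, dse_dim = 105, 256                      # e.g., the VCTK speakers
conversion_model = nn.Linear(512 + 1 + dse_dim, 80)   # placeholder for f
dse_table = nn.Embedding(num_speakers, dse_dim)       # randomly initialized e_{s_i}
l1 = nn.L1Loss()

# Stage 1 (Eq. 2): jointly optimize W and all e_{s_i} on multi-speaker data.
pretrain_opt = torch.optim.Adam(
    list(conversion_model.parameters()) + list(dse_table.parameters()), lr=1e-3)

# Stage 2 (Eq. 3): W starts from the stage-1 solution and is updated together
# with a freshly initialized embedding for the target speaker s_k.
e_sk = nn.Parameter(torch.randn(dse_dim))
adapt_opt = torch.optim.Adam(
    list(conversion_model.parameters()) + [e_sk], lr=1e-3)

def ada_cm_adapt_step(p, v, mel_target):
    # p: (batch, T, 512), v: (batch, T), mel_target: (batch, T, 80).
    e = e_sk.view(1, 1, -1).expand(p.size(0), p.size(1), -1)
    x = torch.cat([p, v.unsqueeze(-1), e], dim=-1)
    loss = l1(conversion_model(x), mel_target)
    adapt_opt.zero_grad()
    loss.backward()
    adapt_opt.step()
    return loss.item()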
Experiments
Experimental settings
Experiments are conducted on the LibriSpeech [23], VCTK [24], LJSpeech [25], UASpeech [26] and L2-ARCTIC [27] datasets. Parallel WaveGAN (PWG) [28] is adopted as the vocoder to synthesize the waveform from the converted mel-spectrograms. We use the 960h training data of LibriSpeech for pre-training the speech-encoder Φ_p, and the 105 native speakers of VCTK for the training of PWG and for the training of Enc-CM and Ada-CM to obtain Ŵ_SE and Ŵ_SA, respectively. The typical speech of a single female speaker from LJSpeech is used for training the duration and F0 predictors. For atypical speech, we select speaker M05 of UASpeech and speaker LXC of L2-ARCTIC for the experiments. M05 has moderate-severe dysarthria, with speech of middle intelligibility. Following [9], we use the speech of blocks 1 and 3 for speech-encoder and Ada-CM fine-tuning, and the speech of block 2 for testing. As the audio of M05 has strong background noise, which degrades the speaker adaptation performance, we adopt the log-MMSE speech enhancement algorithm [29] to pre-process the audio. Speaker LXC is a non-native English speaker with a Mandarin accent and has 1131 recorded utterances, which are randomly divided into 1000/66/65 for training/validation/testing, where the training and validation data are used for fine-tuning the speech-encoder and Ada-CM. The speech is sampled or resampled to 16 kHz, and all speech features are calculated with a 25 ms Hanning window, 10 ms frame shift and 400-point fast Fourier transform.
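For concreteness, the stated front-end settings correspond to feature extraction along the following lines, sketched here with librosa; the paper does not name a toolkit, and the F0 extractor (pyin) and its frequency range below are illustrative assumptions:

import librosa
import numpy as np

# 16 kHz audio; 25 ms Hann window = 400 samples; 10 ms shift = 160 samples;
# 400-point FFT, matching the settings stated above.
y, sr = librosa.load("utterance.wav", sr=16000)
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=400, win_length=400, hop_length=160,
    window="hann", n_mels=80)
log_mel = np.log(np.maximum(mel, 1e-10))   # log compression
f0, voiced, _ = librosa.pyin(
    y, fmin=80.0, fmax=500.0, sr=sr, frame_length=400, hop_length=160)
f0 = np.nan_to_num(f0)                     # unvoiced frames -> 0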
The speech-encoder has a similar architecture to that in [9], including a 6-layer VGG extractor and a 5-layer BLSTM with 512 units per direction in the encoder, and 512-dimensional location-aware attention and a 2-layer LSTM with 1024 units in the decoder. The inputs to the speech-encoder are 40-band mel-spectrograms appended with delta and delta-delta features. The Adadelta optimizer [30] with a learning rate of 1 and batch size 8 is applied for the pre-training and fine-tuning of the speech-encoder with 1M and 2k steps, respectively. The duration and F0 predictors adopt the same structure, which consists of a 3-layer BGRU with 256 units per direction, 3 convolution layers with kernel sizes of 5, 9 and 19 respectively, and a 1-dimensional fully-connected (FC) layer to predict the duration or F0 value. Both predictors are trained with the Adam optimizer [31] with a learning rate of 0.001, batch size of 16 and 30k steps. The settings and training of the speaker encoder are the same as in [10], and the DSE is set to 256 dimensions for both Enc-CM and Ada-CM. The conversion model is a frame-to-frame network composed of two 512-dimensional FC layers, a 4-layer BLSTM with 512 units per direction and one 80-dimensional FC layer to predict mel-spectrograms. Both Enc-CM training and Ada-CM pre-training use a learning rate of 0.001 and batch size of 16 with 50k steps, and Ada-CM fine-tuning takes 3k steps. Readers are encouraged to listen to our audio samples¹.
We compare Enc-CM and Ada-CM with our previously proposed E2E-VC for DSR [9] and AC [10], where the original settings are adopted. To evaluate the performance of all methods, 20 listeners are invited to give subjective evaluations, including mean opinion score (MOS) tests (1: bad, 2: poor, 3: fair, 4: good, 5: excellent) to evaluate speech naturalness and speaker similarity, and AB preference tests to evaluate the impact of phoneme duration and F0; speech intelligibility is evaluated objectively with a speech recognition model.

Experimental results

Speech naturalness and speaker similarity comparison

Figure 2 shows the MOS results for speech naturalness and speaker similarity, where 'Original' denotes the original dysarthric or L2 speech. We randomly select 15 testing utterances of M05 or LXC for evaluation. For the DSR experiments shown in Figure 2(a), we observe that, compared with the original dysarthric speech, the converted speech of all methods achieves improvements in naturalness. The proposed Enc-CM achieves higher naturalness than E2E-VC, indicating the effectiveness of the proposed prosody corrector in generating speech with stable and accurate prosody. Ada-CM achieves lower naturalness than E2E-VC, partially because the speech enhancement algorithm degrades the quality of the M05 audio used for speaker adaptation. Besides, speaker adaptation also inevitably incorporates the abnormal speaking characteristics of the dysarthric speaker into the converted speech, such as atypical prosody and articulation, which partially contribute to the speaker characteristics. As a result, Ada-CM achieves the highest speaker similarity, followed by Enc-CM and E2E-VC.

Figure 2: Comparison results of MOS with 95% confidence intervals for speech naturalness and speaker similarity: (a) DSR experiments; (b) AC experiments.

For the AC experiments shown in Figure 2(b), all methods achieve high and similar speech naturalness, while Ada-CM again achieves the highest speaker similarity. This verifies the effectiveness of explicit prosodic modelling and of the DSE learned by speaker adaptation for achieving high speaker similarity.
Impact of phoneme duration and F0
We investigate the impact of the phoneme duration and F0 on the normality and accentedness of the converted speech. We explore three combinations of phoneme duration and F0 used by Enc-CM to generate the speech: (1) Ground-truth Duration and Ground-truth F0 (GD+GF); (2) Ground-truth Duration and Predicted F0 (GD+PF); (3) Predicted Duration and Predicted F0 (PD+PF). AB preference tests are conducted, and the results are shown in Figure 3. From the comparison 'GD+GF vs GD+PF', we can see that the predicted typical F0 helps the conversion model to generate near-normal or near-native speech. From the comparison 'GD+PF vs PD+PF', we observe that using the predicted typical duration can further improve the quality of the converted speech. This shows that both phoneme duration and F0 affect speech normality or the degree of accentedness, and that the proposed prosody corrector is helpful for attaining typical phoneme duration and F0 values, which are beneficial for generating speech with normal or native prosody characteristics.
Speech intelligibility comparison
To show the effectiveness of the proposed methods in improving the intelligibility of atypical speech, a publicly released automatic speech recognition model, i.e., Jasper [32], is used to measure the character error rate (CER) and word error rate (WER) with greedy decoding. The results are illustrated in Table 1, where we also report results for 'Original (Mel+PWG)', which uses the original mel-spectrograms to synthesize the waveform with the PWG vocoder. We can see that 'Original (Mel+PWG)' is inferior to 'Original', which indicates that the PWG vocoder tends to degrade the speech quality. For the DSR experiments, we can observe that the CER and WER of the original dysarthric speech are significantly reduced by the proposed methods, where Enc-CM performs best and achieves 47.6% and 29.3% absolute reduction in CER and WER, respectively. As the original dysarthric speech used by Ada-CM for speaker adaptation contains strong background noise, even though log-MMSE is used for denoising, the pre-processed audio still contains artificial noise that hurts Ada-CM performance; thus a smaller CER and WER reduction is achieved for Ada-CM. For the AC experiments, the proposed Enc-CM still achieves 2.4% CER and 4.4% WER reduction compared with the Original speech; the baseline E2E-VC and the proposed Ada-CM show no improvements over the Original speech, achieving CER and WER similar to 'Original (Mel+PWG)'; adopting a more powerful vocoder is expected to enhance the speech intelligibility.
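For reference, CER and WER are both normalized edit distances (over characters and words, respectively); a self-contained sketch follows. Whether spaces count as characters in CER varies by convention; here they are dropped:

def edit_distance(ref, hyp):
    # Levenshtein distance between two token sequences (one-row DP).
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev_diag, d[0] = d[0], i
        for j, h in enumerate(hyp, start=1):
            cur = min(d[j] + 1,              # deletion
                      d[j - 1] + 1,          # insertion
                      prev_diag + (r != h))  # substitution or match
            prev_diag, d[j] = d[j], cur
    return d[-1]

def wer(ref_text, hyp_text):
    ref = ref_text.split()
    return edit_distance(ref, hyp_text.split()) / max(len(ref), 1)

def cer(ref_text, hyp_text):
    ref = list(ref_text.replace(" ", ""))
    return edit_distance(ref, list(hyp_text.replace(" ", ""))) / max(len(ref), 1)

print(wer("the red ball", "the bread ball"))   # 1 substitution / 3 words = 0.33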
Conclusions
This paper presents a VC system for converting atypical speech to typical speech, based on explicit prosodic modelling and DSE learning. Explicit prosodic modelling is proposed to leverage phoneme duration and F0 predictors to obtain typical values for prosody correction, while speaker encoder and speaker adaptation approaches are separately proposed to obtain an effective DSE that captures speaker characteristics. DSR and AC experiments show that the proposed methods can reduce dysarthric and non-native speaking characteristics, with significant intelligibility improvements achieved for dysarthric speech. Enc-CM outperforms the previously proposed E2E-VC, and Ada-CM achieves the highest speaker similarity. However, for Ada-CM, atypical pronunciation patterns are also incorporated into the converted speech after speaker adaptation; explicitly modelling more para-linguistic information may help to mitigate this problem. In addition, using better speech denoising algorithms or cleaner audio data is expected to further improve the performance of Ada-CM; this will be studied in future work.
Figure 3: AB preference test results for different combinations of phoneme duration and F0.
Table 1: Comparisons based on CER (%) and WER (%).

                     DSR experiments    AC experiments
Methods              CER     WER        CER     WER
Original             90.2    91.0       17.7    35.5
Original (Mel+PWG)   94.3    95.3       22.4    40.8
E2E-VC               50.6    69.8       22.7    41.3
Enc-CM               42.6    61.7       15.3    31.1
Ada-CM               56.5    80.5       21.5    40.2
¹ Audio samples: https://wendison.github.io/VC-DSR-AC-demo/
Acknowledgements

This research is partially supported by a grant from the HKSARG Research Grants Council General Research Fund (Project Reference No. 14208817).

References
[1] S. H. Mohammadi and A. Kain, "An overview of voice conversion systems," Speech Communication, vol. 88, pp. 65-82, 2017.
[2] D. Rentzos, S. Vaseghi, E. Turajlic, Q. Yan, and C.-H. Ho, "Transformation of speaker characteristics for voice conversion," in 2003 IEEE Workshop on Automatic Speech Recognition and Understanding (IEEE Cat. No. 03EX721). IEEE, 2003, pp. 706-711.
[3] K. Oyamada, H. Kameoka, T. Kaneko, H. Ando, K. Hiramatsu, and K. Kashino, "Non-native speech conversion with consistency-aware recursive network and generative adversarial network," in 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC). IEEE, 2017, pp. 182-188.
[4] K. Nakamura, T. Toda, H. Saruwatari, and K. Shikano, "Speaking-aid systems using GMM-based voice conversion for electrolaryngeal speech," Speech Communication, vol. 54, no. 1, pp. 134-146, 2012.
[5] D. Felps, H. Bortfeld, and R. Gutierrez-Osuna, "Foreign accent conversion in computer assisted pronunciation training," Speech Communication, vol. 51, no. 10, pp. 920-932, 2009.
[6] J. Shor, D. Emanuel, O. Lang, O. Tuval, M. Brenner, J. Cattiau, F. Vieira, M. McNally, T. Charbonneau, M. Nollstadt et al., "Personalizing ASR for dysarthric and accented speech with limited data," Interspeech, pp. 784-788, 2019.
[7] A. B. Kain, J.-P. Hosom, X. Niu, J. P. Van Santen, M. Fried-Oken, and J. Staehely, "Improving the intelligibility of dysarthric speech," Speech Communication, vol. 49, no. 9, pp. 743-759, 2007.
[8] J. Scales, A. Wennerstrom, D. Richard, and S. H. Wu, "Language learners' perceptions of accent," TESOL Quarterly, vol. 40, no. 4, pp. 715-738, 2006.
[9] D. Wang, J. Yu, X. Wu, S. Liu, L. Sun, X. Liu, and H. Meng, "End-to-end voice conversion via cross-modal knowledge distillation for dysarthric speech reconstruction," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 7744-7748.
[10] S. Liu, D. Wang, Y. Cao, L. Sun, X. Wu, S. Kang, Z. Wu, X. Liu, D. Su, D. Yu et al., "End-to-end accent conversion without using native utterances," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6289-6293.
[11] F. Rudzicz, "Acoustic transformations to improve the intelligibility of dysarthric speech," in Proceedings of the Second Workshop on Speech and Language Processing for Assistive Technologies, 2011, pp. 11-21.
[12] R. Aihara, R. Takashima, T. Takiguchi, and Y. Ariki, "Individuality-preserving voice conversion for articulation disorders based on non-negative matrix factorization," in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. IEEE, 2013, pp. 8037-8040.
[13] R. Aihara, T. Takiguchi, and Y. Ariki, "Phoneme-discriminative features for dysarthric speech conversion," in Interspeech, 2017, pp. 3374-3378.
[14] C.-Y. Chen, W.-Z. Zheng, S.-S. Wang, Y. Tsao, P.-C. Li, and Y.-H. Lai, "Enhancing intelligibility of dysarthric speech using gated convolutional-based voice conversion system," Interspeech, pp. 4686-4690, 2020.
[15] S. Aryal and R. Gutierrez-Osuna, "Can voice conversion be used to reduce non-native accents?" in 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014, pp. 7879-7883.
[16] G. Zhao, S. Sonsaat, J. Levis, E. Chukharev-Hudilainen, and R. Gutierrez-Osuna, "Accent conversion using phonetic posteriorgrams," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 5314-5318.
[17] G. Zhao and R. Gutierrez-Osuna, "Using phonetic posteriorgram based frame pairing for segmental accent conversion," IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 27, no. 10, pp. 1649-1660, 2019.
[18] A. Gibiansky, S. Arik, G. Diamos, J. Miller, K. Peng, W. Ping, J. Raiman, and Y. Zhou, "Deep Voice 2: Multi-speaker neural text-to-speech," in Advances in Neural Information Processing Systems, 2017, pp. 2962-2970.
[19] S. Arik, J. Chen, K. Peng, W. Ping, and Y. Zhou, "Neural voice cloning with a few samples," in Advances in Neural Information Processing Systems, 2018, pp. 10019-10029.
[20] Y. Wang, R. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly, Z. Yang, Y. Xiao, Z. Chen, S. Bengio et al., "Tacotron: Towards end-to-end speech synthesis," Interspeech, pp. 4006-4010, 2017.
[21] M. McAuliffe, M. Socolof, S. Mihuc, M. Wagner, and M. Sonderegger, "Montreal Forced Aligner: Trainable text-speech alignment using Kaldi," in Interspeech, 2017, pp. 498-502.
[22] L. Wan, Q. Wang, A. Papir, and I. L. Moreno, "Generalized end-to-end loss for speaker verification," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2018, pp. 4879-4883.
[23] V. Panayotov, G. Chen, D. Povey, and S. Khudanpur, "LibriSpeech: An ASR corpus based on public domain audio books," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2015, pp. 5206-5210.
[24] C. Veaux, J. Yamagishi, K. MacDonald et al., "Superseded-CSTR VCTK corpus: English multi-speaker corpus for CSTR voice cloning toolkit," 2016.
[25] K. Ito et al., "The LJ Speech dataset," 2017.
[26] H. Kim, M. Hasegawa-Johnson, A. Perlman, J. Gunderson, T. S. Huang, K. Watkin, and S. Frame, "Dysarthric speech database for universal access research," in Ninth Annual Conference of the International Speech Communication Association, 2008.
[27] G. Zhao, S. Sonsaat, A. O. Silpachai, I. Lucic, E. Chukharev-Khudilaynen, J. Levis, and R. Gutierrez-Osuna, "L2-ARCTIC: A non-native English speech corpus," Perception Sensing Instrumentation Lab, 2018.
[28] R. Yamamoto, E. Song, and J.-M. Kim, "Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram," in ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 6199-6203.
[29] Y. Ephraim and D. Malah, "Speech enhancement using a minimum mean-square error log-spectral amplitude estimator," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 33, no. 2, pp. 443-445, 1985.
[30] M. D. Zeiler, "ADADELTA: An adaptive learning rate method," arXiv preprint arXiv:1212.5701, 2012.
[31] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980, 2014.
[32] J. Li, V. Lavrukhin, B. Ginsburg, R. Leary, O. Kuchaiev, J. M. Cohen, H. Nguyen, and R. T. Gadde, "Jasper: An end-to-end convolutional neural acoustic model," Interspeech, pp. 71-75, 2019.
A Review of Verbal and Non-Verbal Human-Robot Interactive Communication
20 Jan 2014
Nikolaos Mavridis nmav@alum.mit.edu
Interactive Robots and Media Lab, NCSR Demokritos, GR-15310 Agia Paraskevi, Athens, Greece
Abstract: In this paper, an overview of human-robot interactive communication is presented, covering verbal as well as non-verbal aspects. Following a historical introduction, and motivation towards fluid human-robot communication, ten desiderata are proposed, which provide an organizational axis both of recent as well as of future research on human-robot communication. Then, the ten desiderata are examined in detail, culminating in a unifying discussion and a forward-looking conclusion.
I. INTRODUCTION: HISTORICAL OVERVIEW
While the first modern-day industrial robot, Unimate, began work on the General Motors assembly line in 1961, and was conceived in 1954 by George Devol [1], [2], the concept of a robot has a very long history, starting in mythology and folklore, with the first mechanical predecessors (automata) having been constructed in Ancient Times. For example, in Greek mythology, the god Hephaestus is reputed to have made mechanical servants from gold ([3], p. 114, and [4], verse 18.419). Furthermore, a rich tradition of designing and building mechanical, pneumatic or hydraulic automata also exists: from the automata of Ancient Egyptian temples, to the mechanical pigeon of the Pythagorean Archytas of Tarentum circa 400 BC [5], to the accounts of earlier automata found in the Lie Zi text in China in 300 BC [6], to the devices of Heron of Alexandria [7] in the 1st century. The Islamic world also plays an important role in the development of automata; Al-Jazari, an Arab inventor, designed and constructed numerous automatic machines, and is even reputed to have devised the first programmable humanoid robot in 1206 AD [8]. The word "robot", a Slavic word meaning servitude, was first used in this context by the Czech author Karel Čapek in 1921 [9].
However, regarding robots with natural-language conversational abilities, it wasn't until the 1990s that the first pioneering systems started to appear. Despite the long history of mythology and automata, and the fact that even the mythological handmaidens of Hephaestus were reputed to have been given a voice [3], and despite the fact that the first general-purpose electronic speech synthesizer was developed by Noriko Umeda in Japan in 1968 [10], it wasn't until the early 1990s that conversational robots such as MAIA [11], RHINO [12], and AESOP [13] appeared. These robots cover a range of intended application domains; for example, MAIA was intended to carry objects and deliver them, while RHINO is a museum guide robot, and AESOP a surgical robot.
In more detail, the early systems include Polly, a robotic guide that could give tours in offices [14], [15]. Polly had very simple interaction capacities; it could perceive human feet waving a "tour wanted" signal, and then it would just use pre-determined phrases during the tour itself. A slightly more advanced system was TJ [16]. TJ could verbally respond to simple commands, such as "go left", albeit through a keyboard. RHINO, on the other hand [12], could respond to tour-start commands, but then, again, just offered a pre-programmed tour with fixed programmer-defined verbal descriptions. Regarding mobile assistant robots with conversational capabilities in the 1990s, a classic system is MAIA [11], [17], obeying simple commands and carrying objects around places; others include the mobile office assistant described in [18], which could not only deliver parcels but also guide visitors, and the similar-in-functionality Japanese-language robot Jijo-2 [19], [20], [21]. Finally, an important book from the period is [22], which is characteristic of the traditional natural-language semantics-inspired theoretical approaches to the problem of human-robot communication, and also of the great gap between the theoretical proposals and the actual implemented systems of this early decade.
What is common to all the above early systems is that they share a number of limitations. First, all of them accept only a fixed and small number of simple canned commands, and they respond with a set of canned answers. Second, the only speech acts (in the sense of Searle [23]) that they can handle are requests. Third, the dialogue they support is clearly not flexibly mixed-initiative; in most cases it is just human-initiative. Fourth, they don't really support situated language, i.e. language about their physical situations and events that are happening around them, except for a fixed number of canned location names in a few cases. Fifth, they are not able to handle affective speech; i.e. emotion-carrying prosody is neither recognized nor generated. Sixth, their non-verbal communication [24] capabilities are almost non-existent; for example, gestures, gait, facial expressions, and head nods are neither recognized nor produced. And seventh, their dialogue systems are usually effectively stimulus-response or stimulus-state-response systems; i.e. no real speech planning or purposeful dialogue generation is taking place, and certainly not in conjunction with the motor planning subsystems of the robot. Last but quite importantly, no real learning, off-line or on-the-fly, is taking place in these systems; verbal behaviors have to be prescribed.
All of these shortcomings of the early systems of the 1990s have effectively become desiderata for the next two decades of research: the 2000s and 2010s, which we are in at the moment. Thus, in this paper, we will start by providing a discussion motivating the need for interactive robots with natural human-robot communication capabilities, and then we will enlist a number of desiderata for such systems, which have also effectively become areas of active research in the last decade. Then, we will examine these desiderata one by one, and discuss the research that has taken place towards their fulfillment. Special consideration will be given to the so-called "symbol grounding problem" [25], which is central to most endeavors towards natural language communication with physically embodied agents, such as robots. Finally, after a discussion of the most important open problems for the future, we will provide a concise conclusion.
II. MOTIVATION: INTERACTIVE ROBOTS WITH NATURAL LANGUAGE CAPABILITIES - BUT WHY?
There are at least two avenues towards answering this fundamental question, and both will be attempted here. The first avenue will attempt to start from first principles and derive a rationale towards equipping robots with natural language. The second, more traditional and safe avenue, will start from a concrete, yet partially transient, base: application domains existing or potential. In more detail:
Traditionally, there used to be a clear separation between the design and deployment phases for robots. Application-specific robots (for example, manufacturing robots, such as [26]) were: (a) designed by expert designers, (b) possibly tailor-programmed and occasionally reprogrammed by specialist engineers at their installation site, and (c) interacting with their environment as well as with specialized operators during actual operation. However, the phenomenal simplicity but also the accompanying inflexibility and cost of this traditional setting is often changing nowadays. For example, one might want to have broader-domain and less application-specific robots, necessitating more generic designs, as well as less effort by the programmer-engineers on site, in order to cover the various contexts of operation. Even better, one might want to rely less on specialized operators, and to have robots interact and collaborate with non-expert humans with little if any prior training. Ideally, even the actual traditional programming and re-programming might also be transferred over to non-expert humans, and instead of programming in a technical language, be replaced by intuitive tuition through demonstration, imitation and explanation [27], [28], [29]. Learning by demonstration and imitation for robots already has quite some active research; but most examples only cover motor aspects of learning, and language and communication are not deeply involved.
And this is exactly where natural language and other forms of fluid and natural human-robot communication enter the picture: Unspecialized non-expert humans are used to (and quite good at) teaching and interacting with other humans through a mixture of natural language as well as nonverbal signs. Thus, it makes sense to capitalize on this existing ability of non-expert humans by building robots that do not require humans to adapt to them in a special way, and which can fluidly collaborate with other humans, interacting with them and being taught by them in a natural manner, almost as if they were other humans themselves.
Thus, based on the above observations, the following is one classic line of motivation towards justifying efforts for equipping robots with natural language capabilities: Why not build robots that can comprehend and generate human-like interactive behaviors, so that they can cooperate with and be taught by non-expert humans, so that they can be applied in a wide range of contexts with ease? And of course, as natural language plays a very important role within these behaviors, why not build robots that can fluidly converse with humans in natural language, also supporting crucial non-verbal communication aspects, in order to maximize communication effectiveness, and enable their quick and effective application?
Thus, having presented the classical line of reasoning arriving towards the utility of equipping robots with natural language capabilities, and having discussed a space of possibilities regarding role assignment between human and robot, let us now move to the second, more concrete, albeit less general avenue towards justifying conversational robots: namely, specific applications, existing or potential. Such applications, where natural human-robot interaction capabilities with verbal and non-verbal aspects would be desirable, include: flexible manufacturing robots; lab or household robotic assistants [30], [31], [32], [33]; assistive robotics and companions for special groups of people [34]; persuasive robotics (for example, [35]); robotic receptionists [36], robotic educational assistants, robotic wheelchairs [37], companion robots [38], all the way to more exotic domains, such as robotic theatre actors [39], musicians [40], dancers [41] etc.
In all of the above applications, although there is quite some variation regarding requirements, one aspect at least is shared: the desirability of natural fluid interaction with humans, supporting natural language and non-verbal communication, possibly augmented with other means. Of course, although this might be desired, it is not always justified as the optimum choice, given the technico-economic constraints of every specific application setting. A thorough analysis of such constraints, together with a set of guidelines for deciding when natural-language interaction is justified, can be found in [42]. Now, having examined justifications towards the need for natural language and other human-like communication capabilities in robots across two avenues, let us proceed and become more specific: natural language, indeed; but what capabilities do we actually need? The following list of ten desiderata serves as a good starting point for discussing the state of the art, as well as the potential of each of the items:

D1) Breaking the "simple commands only" barrier
D2) Multiple speech acts
D3) Mixed initiative dialogue
D4) Situated language and the symbol grounding problem
D5) Affective interaction
D6) Motor correlates and Non-Verbal Communication
D7) Purposeful speech and planning
D8) Multi-level learning
D9) Utilization of online resources and services
D10) Miscellaneous abilities

The particular order of the sequence of desiderata was chosen for the purpose of illustration, as it provides partially for a building-up of key points, while also allowing for some tangential deviations.
A. Breaking the "simple commands only" barrier
The traditional conception of conversational robots, as well as most early systems, is based on a clear human-master robot-servant role assignment, and restricts the robot's conversational competencies to simple "motor command requests" only in most cases. A classic example can be seen, for example, in systems such as [30], where a typical dialogue might be:
H: "Give me the red one" R: (Picks up the red ball, and gives to human) H: "Give me the green one" R: "Do you mean this one, or that one?" (robot points to two possible candidate objects) H: "The one on the left" R: (Picks up the green ball on the left, and hands over to human)
What are the main points worth noticing in this example? Well, first of all, (p1) this is primarily a single-initiative dialogue: the human drives the conversation, with the robot effectively just producing motor and verbal responses to the human verbal stimulus. Second, (p2) apart from some disambiguating questions accompanied by deixis, there is not much that the robot says; the robot primarily responds with motor actions to the human requests, and does not speak. And, (p3) regarding the human statements, we only have one type of speech act [23]: RequestForMotorAction. Furthermore, (p4) usually such systems are quite inflexible regarding multiple surface realizations of the acceptable commands; i.e. the human is allowed to say "Give me the red one", but if he instead used the elliptical "the red object, please", he might have been misinterpreted. And (p5), in most cases, the mapping of words to responses is arbitrarily chosen by the designer; i.e. motor verbs translate to what the designer thinks they should mean for the robot (normative meaning), instead of what an empirical investigation would show regarding what other humans would expect them to mean (empirical meaning).
Historically, advanced theorization for such systems exists as early as [22], and there is still quite a stream of active research which, although based on beautiful and systematic formalizations and eloquent grammars, basically produces systems which would still fall within the points mentioned above. Such an example is [43], in which a mobile robot in a multi-room environment can handle commands such as: "Go to the breakroom and report the location of the blue box".
Notice that here we are not claiming that there is no importance in this research that falls within this strand; we are just mentioning that, as we shall see, there are many other aspects of natural language and robots, which are left unaccounted by such systems. Furthermore, it remains to be seen, how many of these aspects can later be effectively integrated with systems belonging to this strand of research.
B. Multiple speech acts
The limitations (p1)-(p5) cited above for the classic "simple commands only" systems provide useful departure points for extensions. Speech act theory was introduced by J. L. Austin [44], and a speech act is usually defined as an utterance that has a performative function in language and communication. Thus, we are focusing on the function and purpose of the utterance, instead of its content and form. Several taxonomies of utterances can be derived according to such a viewpoint: for example, Searle [45] proposed a classification of illocutionary speech acts into assertives, directives, commissives, expressives, and declarations. Computational models of speech acts have been proposed for use in human-computer interaction [46].
In this light of speech acts, let us start by extending upon point (p3) made in the previous section. In the short human-robot dialogue presented in the previous section, the human utterances "Give me the red one" and "Give me the green one" could be classified as Request speech acts, and more specifically requests for motor action (one could also have requests for information, such as "What color is the object?" etc.). But what else might one desire in terms of speech act handling capabilities, apart from RequestForMotorAction (which we shall call SA1, a Directive according to [45])? Some possibilities follow below:
H: "How big is the green one?" (RequestForInformAct, SA2, Directive) H: "There is a red object at the left" (Inform, SA3, Assertive) H: "Let us call the small doll Daisy" (Declare, SA4, Declaration) And many more exist. Systems such as [47] are able to handle SA2 and SA3 apart from SA1-type acts; and one should also notice, that there are many classificatory systems for speech acts, across different axis of classification, and with multiple granularities. Also, it is worth starting at this stage to contemplate upon what might it mean to respond appropriately to different kinds of speech acts. For example, an appropriate response to a RequestForMotorAction (a Directive) is the motor action itself, if unambiguous and feasible; however, an appropriate response to an Assertive or a Declarative consists of a change to some form of a "mental model" [48] or "situation model" [49] [47] that the robot might be keeping;
i.e. creating an appropriate mental token for an object in the case of "There is a red object at the left", or changing the name label for a mental object token in the case of "Let us call this small doll Daisy"; i.e. both statements elicit primarily internal (mental) actions, instead of external (motor or verbal) actions.
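As a toy illustration of this dispatch between external (motor, verbal) and internal (mental) responses, consider the following sketch; the SA1-SA4 labels follow the classes above, while the dictionary-based situation model and all function names are hypothetical simplifications:

from enum import Enum, auto

class SpeechAct(Enum):
    REQUEST_MOTOR_ACTION = auto()   # SA1, Directive
    REQUEST_INFORM_ACT = auto()     # SA2, Directive
    INFORM = auto()                 # SA3, Assertive
    DECLARE = auto()                # SA4, Declaration

def respond(act, content, situation_model):
    # Different act types elicit different response modalities.
    if act is SpeechAct.REQUEST_MOTOR_ACTION:
        return ("motor", content)                        # e.g. pick up an object
    if act is SpeechAct.REQUEST_INFORM_ACT:
        return ("verbal", situation_model.get(content))  # answer a question
    # Assertives and Declarations primarily elicit internal (mental) actions:
    key, value = content
    situation_model[key] = value    # add a mental token, or change a name label
    return ("mental", None)

model = {}
respond(SpeechAct.INFORM, ("red_object", "at_left"), model)
respond(SpeechAct.DECLARE, ("small_doll_name", "Daisy"), model)
print(respond(SpeechAct.REQUEST_INFORM_ACT, "small_doll_name", model))
# -> ('verbal', 'Daisy')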
Another relevant aspect of speech act theory is the handling of indirect speech acts. For example, consider the following utterance:
H: "Ah, it is quite hot in this room" (phenomenally, an Assertive), Which might actually be a polite way of saying: H: "Open the window" (essentially, a Directive). This substitution of an Assertive for an implied Directive (to be inferred by the listener) is a classic example of an indirect speech act. Usually, the analysis of such acts is based on the Gricean maxims of conversation [50]; and numerous computational implementations for handling such indirect speech acts have been proposed, such as [51].
Finally, yet another problem related to speech acts is the issue of their classification by the robot after hearing them. Classic techniques such as those described in [52] rely on linguistic information only; however, paralinguistic information (such as prosodic features) can also prove useful towards speech act classification; the interested reader is referred for example to [53].
C. Mixed Initiative Dialogue
Now, starting again from the shortcomings of the traditional "simple commands-only" systems, let us extend across another axis, namely (p1): human-initiative dialogue is not the only option; one could also have robot-initiative, or ideally, full mixed-initiative. Consider FaceBots [54], [55], a conversational robot utilizing Facebook-derived information. A typical dialogue might include turns such as:

R: "Hello! Are you Noura Dhaheri?"
H: "Yes"
R: "Hello Noura! Are you doing well today?"
H: "Pretty well!"
R: "Great! Do you know that our friend Wajahat Kazmi has posted a new photo?"
H: "No!"
R: "Well, I am sending you an inbox message for this, check it out!"

This is a classic example of robot-initiative dialogue, and actually with very simple responses expected from the human, mainly Yes or No. On the other hand, consider a dialogue such as the following (from the robot BIRON at Bielefeld, dialogue in [56]), which is an example of a dialogue with limited mixed initiative:
R: "Hello Human!" H: "Hello! Who are you?" R: "My name is BIRON, if you show me something I can remember it." H: "OK, um, look, this is a mug." R: "I beg your pardon?" H: "This is a mug, umm, here." R: "Sorry, can you use some other words?" H: "This is a mug!" R: "Sorry, I cannot find it. Are you pointing to it with your finger?" H: "Yes!" R: "I know it is sometimes difficult with me, but dont feel discouraged!" H: (laugh) R: "OK, I have found it, it is really nice!" Here, it is neither the robot nor the human driving the dialogue all of the time; for example, the opening pair, R-H is robot-initiative (R: "Hello Human!" H: "Hello!"). However, directly afterwards, the second half of H together with R create a second pair of exchanges, which is human-initiative (H: "Who are you?", R: "My names is BIRON..."). And thus the initiative can be reversed in multiple points throughout the dialogue.
For an investigation of the state of the art towards mixed initiative, the interested reader is referred to examples such as the Karlsruhe Humanoid [57] and the BIRON and BARTHOC systems at Bielefeld [56], and also to workshops such as [58].
D. Situated Language and Symbol Grounding
Yet another observation regarding shortcomings of the traditional command-only systems that is worth extending from was point (p5) mentioned above: the meanings of the utterances were normatively decided by the designer, and not based on empirical observations. For example, a designer/coder could normatively pre-define the semantics of the color descriptor "red" as belonging to the range between two specific given values. Alternatively, one could empirically derive a model of the applicability of the descriptor "red" based on actual human usage: by observing the human usage of the word in conjunction with the actual apparent color wavelength and the context of the situation. Furthermore, the actual vocabularies ("red", "pink", etc.) or the classes of multiple surface realizations (p4) (quasi-synonyms or semantically equivalent parts of utterances, for example: "give me the red object", "hand me the red ball") are usually hand-crafted in such systems, and again not based on systematic human observation or experiment.
There are a number of notable exceptions to this rule, and there is a growing tendency to indeed overcome these two limitations recently. For example, consider [59], in which a wizard-of-oz experiment provided the collection of vocabulary from users desiring to verbally interact with a robotic arm, and examples such as [37], for which the actual context-dependent action models corresponding to simple verbal commands like "go left" or "go right" (which might have quite different expected actions, depending on the surrounding environment) were learnt empirically through human experiments.
Embarking upon this avenue of thought, it slowly becomes apparent that the connection between local environment (and more generally, situational context) and procedural semantics of an utterance is quite crucial. Thus, when dealing with robots and language, it is impossible to isolate the linguistic subsystems from perception and action, and just plug-and-play with a simple speech-in speech-out black box chatterbot of some sort (such as the celebrated ELIZA [60] or even the more recent victors of the Loebner Prize [61]). Simply put, in such systems, there is no connection of what is being heard or said to what the robot senses and what the robot does. This is quite a crucial point; there is a fundamental need for closer integration of language with sensing, action, and purpose in conversational robots [30] [47], as we shall also see in the next sections.
1) Situated Language: Upon discussing the connection of language to the physical context, another important concept becomes relevant: situated language, and especially the language that children primarily use during their early years; i.e. language that is not abstract or about past or imagined events, but rather concrete, and about the physical here-and-now. But what is the relevance of this observation to conversational robots? One possibility is the following: given that there seems to be a progression of increasing complexity regarding human linguistic development, often in parallel to a progression of cognitive abilities, it seems reasonable to first partially mimic the human developmental pathway, and thus start by building robots that can handle such situated language, before moving on to a wider spectrum of linguistic abilities. This is, for example, the approach taken in [47].
Choosing situated language as a starting point also creates a suitable entry point for discussing language grounding in the next section. Now, another question that naturally follows is: could one postulate a number of levels of extensions from language about the concrete here-and-now to wider domains? This is attempted in [47], and the levels of increasing detachment from the "here-and-now" postulated there are:

First level ("here-and-now, existing concrete things"): Words connect to things directly accessible to the senses at the present moment. If there is a chair behind me, although I might have seen it before, I cannot talk about it; "out of sight" means "non-existing" in this case. For example, such a robotic system is [62].
First level: limited only to the "here-and-now, existing concrete things". Words connect to things directly accessible to the senses at the present moment. If there is a chair behind me, although I might have seen it before, I cannot talk about it -"out of sight" means "non-existing" in this case. For example, such a robotic system is [62] Second level: ("now, existing concrete things"); we can talk about the "now", but we are not necessarily limited to the "here" -where here means currently accessible to the senses. We can talk about things that have come to our senses previously, that we conjecture still exist through some form of psychological "object permanence" [63] -i.e., we are keeping some primitive "mental map" of the environment. For example, this was the state of the robot Ripley during [64] Third level: ("past or present, existing concrete things"), we are also dropping the requirement of the "now" -in this case, we also posses some form of episodic memory [65] enabling us to talk about past states. An example robot implementation can be found in [66] Fourth level: ("imagined or predicted concrete things"); we are dropping the requirement of actual past or present existence, and we can talk about things with the possibility of actual existence -either predicted (connectible to the present) or imagined. [47] Fifth level: ("abstract things") we are not talking about potentially existing concrete things any more, but about entities that are abstract. But what is the criterion of "concreteness"? A rough possibility is the following: a concrete thing is a firstorder entity (one that is directly connected to the senses); an "abstract" thing is built upon first order entities, and does not connect directly to the senses, as it deals with relationships between them. Take, for example, the concept of the "number three": it can be found in an auditory example ("threeness" in the sound of three consecutive ticks); it can also be found in a visual example ("threeness" in the snapshot of three birds sitting on a wire). Thus, threeness seems to be an abstract thing (not directly connected to the senses).
Currently, there exist robots and methodologies [47] that can create systems handling basic language corresponding to the first four stages of detachment from situatedness; however, the fifth still seems to be out of reach. If what we are aiming towards is a robot with a deeper understanding of the meaning of words referring to abstract concepts, then although related work on computational analogy making (such as [67]) could provide some starting points for extensions towards such domains, this remains beyond the current state of the art.
Nevertheless, there are two interesting points that have arisen in the previous sections: first, that when discussing natural language and robots, there is a need to connect language not only to sensory data, but also to internalized "mental models" of the world, in order, for example, to deal with detachment from the immediate "here-and-now". And second, that one needs to consider not only the phonological and syntactical levels of language, but also questions of semantics and meaning, and pose the question: "what does it mean for a robot to understand a word that it hears or utters?" And also, more practically: what are viable computational models of the meaning of words, suitable for embodied conversational robots? We will try to tackle these questions right now, in the next subsection.
2) Symbol Grounding: One of the main philosophical problems that arises when trying to create embodied conversational robots is the so-called "symbol grounding problem" [25]. In simple terms, the problem is the following: imagine a robot having an apple in front of it, and hearing the word "apple": a verbal label which is a conventional sign (in semiotic terms [68], [69]), and which is represented by a symbol within the robot's cognitive system. Now, this sign is not irrelevant to the actual physical situation; the human that uttered the word "apple" was using it to refer to the physical apple that is in front of the robot. Now the problem that arises is the following: how can we connect the symbol standing for "apple" in the robot's cognitive system with the physical apple that it refers to? Or, in other words, how can we ground out the meaning of the symbol to the world? In simple terms, this is an example of the symbol grounding problem. Of course, it extends not only to objects signified by nouns, but to properties, relations, events etc., and there are many other extensions and variations of it.
So, what are solutions relevant to the problem? In the case of embodied robots, for the simple case described above, the connection between the internal cognitive system of the robot (where the sign is) and the external world (where the referent is) is mediated through the sensory system. Thus, in order to ground out the meaning, one needs to connect the symbol to the sensory data, say to vision. That is, at the very least, one needs to find a mechanism that achieves the following bidirectional connection: first, when an apple appears in the visual stream, an apple symbol is instantiated in the cognitive system (which can later, for example, trigger the production of the word "apple" by the robot); and second, when an apple symbol is instantiated in the cognitive system (for example, because the robot heard that "there is an apple"), an expectation is created regarding the contents of the sensory stream, given that an apple is reported to be present. This bidirectional connection can be succinctly summarized as:
external referent > sensory stream > internal symbol > produced utterance
external referent < sensory expectation < internal symbol < heard utterance
We will refer to this bidirectional connection as "full grounding", and to its first unidirectional part as "half grounding". Some notable papers presenting computational solutions of the symbol grounding problem for the case of robots are: half-grounding of colors and shapes for the Toco robot [62], and full-grounding of multiple properties for the Ripley robot [30]. Highly relevant work includes [70] and Steels [71], as well as [72], [73], and, from a child lexical perspective, [74].
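To make the above concrete, the following is a minimal sketch of half and full grounding for a single property (color), under heavy simplifying assumptions: hypothetical RGB feature vectors stand in for the sensory stream, and a nearest-prototype lexicon (not any specific published model) stands in for the robot's cognitive system.

import numpy as np

# Hypothetical grounded lexicon: symbol -> prototype in sensory (RGB) space.
PROTOTYPES = {
    "red":   np.array([0.85, 0.10, 0.10]),
    "green": np.array([0.10, 0.70, 0.30]),
    "blue":  np.array([0.10, 0.20, 0.85]),
}

def percept_to_symbol(feature_vec):
    # Half grounding: sensory stream -> internal symbol (nearest prototype).
    return min(PROTOTYPES, key=lambda s: np.linalg.norm(PROTOTYPES[s] - feature_vec))

def symbol_to_expectation(symbol):
    # The reverse direction: heard utterance -> internal symbol -> sensory expectation.
    return PROTOTYPES[symbol]

apple_pixels = np.array([0.80, 0.15, 0.12])   # simulated percept of a red apple
print(percept_to_symbol(apple_pixels))        # -> "red"; could trigger saying "red"
print(symbol_to_expectation("red"))           # expected appearance after hearing "red"

Both directions of the schematic above are present: the first function realizes the referent-to-utterance flow, while the second realizes the utterance-to-expectation flow that distinguishes full from half grounding.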
The case of grounding spatial relations (such as "to the left of", "inside" etc.) deserves special attention, as it is a significant field on its own. A classic paper is [75], presenting an empirical study modeling the effect of central and proximal distance on 2D spatial relations; regarding the generation and interpretation of referring expressions on the basis of landmarks for a simple rectangle world, there is [76], while the book by [77] extends well into illustrating the inadequacy of geometrical models and the need for functional models when grounding terms such as "inside", and covers a range of relevant interesting subjects. Furthermore, regarding the grounding of attachment and support relations in videos, there is the classic work of [78]. For an overview of recent spatial semantics research, the interested reader is referred to [79]; a sampler of important current work in robotics includes [80], [81], [82], the most recent work of Tellex on grounding with probabilistic graphical models [83], and on learning word meanings from unaligned parallel data [84].
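As a toy illustration of grounding a 2D spatial relation in geometry, in the spirit of the angular/proximal models discussed above, the following sketch scores "target is to the left of landmark" by angular deviation from the leftward axis; the linear drop-off and coordinate convention are illustrative assumptions, not the model of [75].

import numpy as np

def left_of_score(target, landmark):
    # Applicability score in [0, 1] for "target is to the left of landmark".
    v = np.asarray(target, dtype=float) - np.asarray(landmark, dtype=float)
    angle = np.arctan2(v[1], v[0])                  # 0 = right, pi = left
    deviation = np.abs(np.pi - np.abs(angle))       # angular distance from "left"
    return max(0.0, 1.0 - deviation / (np.pi / 2))  # linear drop-off (illustrative)

print(left_of_score((-2, 0), (0, 0)))   # 1.0: directly to the left
print(left_of_score((0, 2), (0, 0)))    # 0.0: directly above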
Finally, an interesting question arises when trying to ground out personal pronouns, such as "me, my, you, your". Regarding their use as modifiers of spatial terms ("my left"), relevant work on a real robot is [64], and regarding more general models of their meaning, the reader is referred to [85], where a system learns the semantics of the pronouns through examples.
A number of papers have also recently appeared claiming to provide a solution to the "symbol grounding problem", such as [86]. There is, though, a variety of different opinions regarding what an adequate solution should accomplish. A stream of work around an approach dealing with the evolution of language and semiotics is outlined in [87]. From a more applied and practical point of view, one would like to be able to have grounded ontologies [88] [89], or even robot-usable lexica augmented with computational models providing such grounding: this is the ultimate goal of the EU project POETICON [90] [91] and the follow-up project POETICON II.
Another important aspect regarding grounding is the set of qualitatively different possible target meaning spaces for a concept. For example, [47] proposes three different types of meaning spaces: sensory, sensory-motor, and teleological. A number of other proposals exist for meaning spaces in cognitive science, although not directly related to grounding; for example, the geometrical spaces of Gärdenfors [92]. Furthermore, any long-ranging agenda towards extending symbol grounding to an ever-increasing range of concepts needs to address yet another important point: semantic composition. For a very simple example, consider how a robot could combine a model of "red" with a model of "dark" in order to derive a model of "dark red" (see the sketch below). Although this is a fundamental issue, as discussed in [47], it has yet to be addressed properly.
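As a hint of what such composition could look like in a sensory target space, the toy sketch below treats the modifier "dark" as a function over grounded color models rather than as mere symbol concatenation; the Gaussian-style model and the darkening factor are purely illustrative assumptions.

import numpy as np

red_model = {"mean": np.array([0.85, 0.10, 0.10]), "std": np.array([0.08, 0.05, 0.05])}

def dark(color_model, factor=0.5):
    # "dark" as a modifier: shift the expected luminance of the model downwards.
    return {"mean": color_model["mean"] * factor, "std": color_model["std"] * factor}

dark_red_model = dark(red_model)   # a derived grounded model for "dark red"
print(dark_red_model["mean"])      # expected RGB center of "dark red"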
Last but not least, regarding the real-world acquisition of large-scale models of grounding in practice, special data-driven models are required, and the quantities of empirical data needed would make the collection of such data from non-experts (ideally online) highly desirable. In that direction, there exists the pioneering work of Gorniak [73], where a specially modified computer game allowed the collection of referential and functional models of meaning of the utterances used by the human players. This was followed up by [93] [94] [95], in which specially designed online games allowed the acquisition of scripts for situationally appropriate dialogue production. These experiments can be seen as a special form of crowdsourcing, building upon the ideas started by pioneering systems such as Luis von Ahn's Peekaboom game [96], but especially targeting the situated dialogic capabilities of embodied agents. Much more remains to be done in this promising direction in the future.
3) Meaning Negotiation: Having introduced the concept of non-logic-like grounded models of meaning, another interesting complication arises. Given that different conversational partners might have different models of meaning, say for the lexical semantics of a color term such as "pink", how is communication possible? A short, yet minimally informative, answer would be: given enough overlap of the particular models, there should be enough shared meaning for communication. But if one examines a number of typical cases of misalignment across models, one soon reaches the realization that models of meaning, or even second-level models (beliefs about the models that others hold), are very often negotiated and adjusted online, during a conversation. For example:
(Turquoise object on robot table, in front of human and robot)
H: "Give me the blue object!"
R: "No such object exists."
H: "Give me the blue one!"
R: "No such object exists."
But why is this surreal human-robot dialog taking place, and why would it not have taken place between two humans in a similar setting? Let us analyze the situation. The object on the table is turquoise, a color which some people might classify as "blue" and others as "green". The robot's color classifier has learnt to treat turquoise as green; the human classifies the object as "blue". Thus, we have a categorical misalignment error, as defined in [47]. For the case of two humans interacting instead of a human and a robot, given the non-existence of another unique referent satisfying the "blue object" description, the second human would readily have assumed that the first human is most probably classifying turquoise as "blue"; thus, he would have temporarily adjusted his model of meaning for "blue" in order to include turquoise, and so aligned his communication with his conversational partner. Ideally, we would therefore like to have conversational robots that can gracefully recover from such situations and fluidly negotiate their models of meaning online (see the sketch below). Once again, this is a yet unexplored, but crucial and highly promising, avenue for future research.
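A minimal sketch of such online negotiation for the turquoise/"blue" scenario follows: when reference resolution fails, the robot temporarily relaxes its acceptance region for the color term instead of repeating "No such object exists". The feature vectors, radius threshold, and relaxation factor are illustrative assumptions.

import numpy as np

objects = {"obj1": np.array([0.25, 0.75, 0.70])}   # turquoise object on the table

blue_model = {"mean": np.array([0.10, 0.20, 0.85]), "radius": 0.35}

def resolve(term_model, objs):
    # Return objects whose features fall inside the term's acceptance region.
    return [name for name, f in objs.items()
            if np.linalg.norm(f - term_model["mean"]) <= term_model["radius"]]

referents = resolve(blue_model, objects)
if not referents:
    # Negotiation step: widen "blue" for this dialogue only, accommodating the
    # partner's apparent classification of turquoise as "blue".
    relaxed = dict(blue_model, radius=blue_model["radius"] * 2.0)
    referents = resolve(relaxed, objects)
print(referents)   # -> ["obj1"] after relaxation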
E. Affective Interaction
An important dimension of cognition is the affective/emotional. In the German psychological tradition of the 18th century, the affective was part of the tripartite classification of mental activities into cognition, affection, and conation; and apart from the widespread use of the term, the influence of the tripartite division extended well into the 20th century [97].
The affective dimension is very important in human interaction [98], because it is strongly intertwined with learning [99], persuasion [100], and empathy, among many other functions. Its significance thus carries over to the case of human-robot interaction. For the case of speech, affect is marked both in the semantic/pragmatic content as well as in the prosody of speech: thus, both of these ideally need to be covered for effective human-robot interaction, from both the generation as well as the recognition perspectives. Furthermore, other affective markers include facial expressions, body posture and gait, as well as markers more directly linked to physiology, such as heart rate, breathing rate, and galvanic skin response.
Pioneering work towards affective human-robot interaction includes [101] where, extending upon analogous research on virtual avatars such as Rea [102], Steve [103], and Greta [104], Cynthia Breazeal presents an interactive emotion and drive system for the Kismet robot [105], which is capable of multiple facial expressions. An interesting cross-linguistic emotional speech corpus arising from children's interactions with the Sony AIBO robot is presented in [106]. Another example of preliminary work based on a Wizard-of-Oz approach, this time regarding children's interactions with the ATR Robovie robot in Japan, is presented in [107]; in this paper, automatic recognition of embarrassment or pleasure of the children is demonstrated. Regarding interactive affective storytelling with robots with generation and recognition of facial expressions, [108] presents a promising starting point. Recognition of human facial expressions is accomplished through SHORE [109], as well as the Seeing Machines product FaceAPI. Other available facial expression recognition systems include [110], which has also been used as an aid for autistic children, as well as [111] and [112], where the output of the system is at the level of facial action coding (FACS). Regarding the generation of facial expressions for robots, some examples of current research include [113], [114], [115]. Apart from static poses, the dynamics of facial expressions are also very important towards conveying believability; for empirical research on dynamics see, for example, [116]. Still, compared to the wealth of available research on the same subject with virtual avatars, there is still a lag both in empirical evaluations of human-robot affective interaction and in importing existing tools from avatar animation for use on robots.
Regarding some basic supporting technologies of affect-enabled text-to-speech and speech recognition, the interested reader can refer to the general reviews by Schroeder [117] on TTS, and by Ververidis and Kotropoulos [118] on recognition. A wealth of other papers on the subject exists, with some notable developments for affective speech-enabled real-world robotic systems including [119] [120]. Furthermore, if one moves beyond prosodic affect to semantic content, the wide literature on sentiment analysis and shallow identification of affect applies directly; for example [121] [122] [123]. Regarding physiological measurables, products such as Affectiva's Q sensor [124], or techniques for measuring heart rate, breathing rate, galvanic skin response and more, could well become applicable to the human-robot affective interaction domain, of course under the caveats of [125]. Finally, it is worth noting that significant cross-cultural variation exists regarding affect, both at the generation as well as at the understanding and situational-appropriateness levels [126]. In general, affective human-robot interaction is a growing field with promising results, which is expected to grow even more in the near future.
F. Motor correlates of speech and non-verbal communication
Verbal communication in humans does not come isolated from non-verbal signs; in order to achieve even the most basic degree of naturalness, any humanoid robot needs, for example, at least some lip-movement-like feature to accompany speech production. Apart from lip-syncing, many other human motor actions are intertwined with speech and natural language: for example, head nods, deictic gestures, gaze movements etc. Also, note that the term "correlates" is somewhat misleading; for example, the gesture channel can be more accurately described as a complementary channel, rather than a channel correlated with or just accompanying speech [127]. Furthermore, we are not interested only in the generation of such actions, but also in their combination, as well as in dialogic/interactional aspects. Let us start by examining the generation of lip syncing. The first question that arises is: should lip sync actions be generated from phoneme-level information, or is the speech soundtrack adequate? Simpler techniques rely on the speech soundtrack only, the simplest solution being to utilize only the loudness of the soundtrack and map directly from loudness to mouth opening (see the sketch below). There are many shortcomings in this approach; for example, a nasal "m" usually has large apparent loudness, although in humans it is produced with a closed mouth. Generally, the resulting lip movements of this method are perceivably unnatural. As an improvement to the above method, one can try to use spectrum matching of the soundtrack to a set of reference sounds, as in [128], [129], or, even better, a linear prediction speech model, such as [130]. Furthermore, apart from the generation of lip movements, their recognition can be quite useful for improving speech recognition performance under low signal-to-noise ratio conditions [131]. There is also ample evidence that humans utilize lip information during recognition; a celebrated example is the McGurk effect [132]. The McGurk effect is an instance of so-called multi-sensory perception phenomena [133], which also include other interesting cases such as the rubber hand illusion [134].
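For concreteness, the following is a sketch of the simplest lip-sync scheme criticized above: per-frame RMS loudness of the soundtrack mapped directly to a normalized mouth-opening command. The frame size and normalization constant are illustrative assumptions.

import numpy as np

def mouth_openings(samples, frame_len=512, full_open_rms=0.2):
    # Map each audio frame's RMS energy to a jaw/mouth opening in [0, 1].
    openings = []
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2))
        openings.append(min(rms / full_open_rms, 1.0))
    return openings

audio = np.random.uniform(-0.3, 0.3, 16000)   # one second of fake 16 kHz audio
print(mouth_openings(audio)[:5])
# Failure mode noted in the text: a nasal /m/ is loud but mouth-closed, so
# loudness alone yields perceivably unnatural motion.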
Now, let us move on to gestures. The simplest form of gestures that is also directly relevant to natural language is deictic gestures: pointing towards an object, usually accompanied by indexicals such as "this one!". Such gestures have long been utilized in human-robot interaction, starting from virtual avatar systems such as Kris Thorisson's Gandalf [135], and continuing all the way to robots such as ACE (Autonomous City Explorer) [136], a robot that was able to navigate through Munich by asking pedestrians for directions. There exist quite a number of other types of gestures, depending on the taxonomy one adopts, such as iconic gestures, symbolic gestures etc. Furthermore, gestures are highly important towards teaching and learning in humans [137]. Apart from McNeill's seminal psychological work [127], a definitive reference on gestures, communication, and their relation to language, albeit regarding virtual avatar Embodied Conversational Assistants (ECA), can be found in Justine Cassell's work, including [138], [139]. Many open questions exist in this area, for example regarding the synchronization between speech and the different non-verbal cues [140], and socio-pragmatic influences on the non-verbal repertoire.
Another important topic for human-robot interaction is eye gaze coordination and shared attention. Eye gaze cues are important for coordinating collaborative tasks [141], [142], and eye gazes are an important subset of non-verbal communication cues that can increase efficiency and robustness in human-robot teamwork [143]. Furthermore, eye gaze is very important in disambiguating referring expressions without the need for hand deixis [144], [145]. Shared attention mechanisms develop in humans during infancy [146], and Scassellati authored the pioneering work on shared attention in robots in 1996 [147], followed up by [148]. A developmental viewpoint is also taken in [149], as well as in [150]. A well-cited probabilistic model of gaze imitation and shared attention is given in [151]. In virtual avatars, considerable work has also taken place, such as [152], [153].
Eye-gaze observations are also very important towards mind reading and theory of mind [154] for robots; i.e., being able to create models of the mental content and mental functions of other agents' (human or robot) minds through observation. Children develop a progressively more complicated theory of mind during their childhood [155]. Elemental forms of theory of mind are also very important towards purposeful speech generation; for example, in creating referring expressions, one should ideally take into account the second-order beliefs of one's conversational partner-listener, i.e., one should use one's beliefs regarding what one thinks the other person believes, in order to create a referring expression that can be resolved uniquely by the listener. Furthermore, when a robot is purposefully issuing an inform statement ("there is a tomato behind you"), it should know that the human does not already know that; i.e., again, an estimated model of second-order beliefs is required (what the robot believes the human believes; see the sketch below). A pioneering work in theory of mind for robots is Scassellati's [156], [157]. An early implementation of perspective-shifting synthetic-camera-driven second-order belief estimation for the Ripley robot is given in [47]. Another example of perspective shifting with geometric reasoning for the HRP-2 humanoid is given in [158].
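A minimal sketch of such second-order belief filtering for inform acts is given below; the belief sets are illustrative assumptions, not the representation used in [47] or [156].

robot_beliefs = {"tomato_behind_human", "window_open"}
robot_model_of_human_beliefs = {"window_open"}   # second-order: what the robot
                                                 # believes the human believes

def informative_facts():
    # Facts worth asserting: believed by the robot, but estimated to be new
    # to the human according to the second-order belief model.
    return robot_beliefs - robot_model_of_human_beliefs

for fact in informative_facts():
    print("INFORM:", fact)   # -> INFORM: tomato_behind_human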
Finally, a quick note on a related field which has recently been growing. Children with Autistic Spectrum Disorders (ASD) face special communication challenges. A prominent theory regarding autism hypothesizes theory-of-mind deficiencies in autistic individuals [159], [160]. However, recent research [161], [162], [163], [164] has indicated that specially-designed robots that interact with autistic children could potentially help them improve their communication skills, and potentially transfer these skills over to communicating not only with robots, but also with other humans.
Last but not least, regarding a wider overview of existing work on non-verbal communication between humans, which could readily provide ideas for future human-robot experiments, the interested reader is referred to [24].
G. Purposeful speech and planning
Traditionally, simple command-only canned-response conversational robots had dialogue systems that could be construed as stimulus-response tables: a set of verbs or command utterances were the stimuli, the responses being motor actions, with a fixed mapping between stimuli and responses. Even much more advanced systems that can support situated language, multiple speech acts, and perspective-shifting theory-of-mind, such as Ripley [47], can be construed as effectively being (stimulus, state)-to-response maps, where the state of the system includes the contents of the robot's situation model. What is missing in all of these systems is an explicit modeling of purposeful behavior towards goals.
Since the early days of AI, automated planning algorithms, such as the classic STRIPS [165], and purposeful action selection techniques have been a core research topic. In traditional non-embodied dialogue systems practice, approaches such as Belief-Desire-Intention (BDI) have existed for a while [166], and theoretical models for the purposeful generation of speech acts [167] as well as computational models of speech planning [BookSpeechPlanning] have existed for more than two decades. Also, in robotics, specialized modified planning algorithms have mainly been applied towards motor action planning and path planning [165], such as RRT [168] and Fast-Marching Squares [169].
However, the important point to notice here is that, although considerable research exists on motor planning or dialogue planning alone, there are almost no systems or generic frameworks for effectively combining the two, for mixed speech- and motor-act planning, or, even better, for agent- and object-interaction-directed planners. Notice that motor planning and speech planning cannot be isolated from one another in real-world systems; both types of actions are often interchangeable with one another towards achieving goals, and thus should not be planned by separate subsystems that are independent of one another. For example, if a robot wants to lower its temperature, it could either say "can you kindly open the window?" to a human partner (speech action), or it could move its body, approach the window, and open it (motor action). An exception to this research void of mixed speech-motor planning is [170], where a basic purposeful action selection system for question generation or active sensing act generation is described, implemented on a real conversational robot. However, this is an early and quite task-specific system, and thus much more remains to be done towards real-world general mixed speech act and motor act action selection and planning for robots.
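To illustrate the idea, the following toy sketch places speech acts and motor acts in a single STRIPS-like action set that competes on cost, rather than in isolated subsystems; the domain, preconditions, and costs are illustrative assumptions.

ACTIONS = [
    {"name": "say('can you kindly open the window?')", "kind": "speech",
     "pre": {"human_present"}, "add": {"window_open"}, "cost": 1},
    {"name": "approach_window", "kind": "motor",
     "pre": set(), "add": {"at_window"}, "cost": 2},
    {"name": "open_window", "kind": "motor",
     "pre": {"at_window"}, "add": {"window_open"}, "cost": 1},
]

def plan(state, goal, depth=5):
    # Depth-limited forward search returning the cheapest mixed-act plan.
    if goal <= state:
        return [], 0
    if depth == 0:
        return None
    best = None
    for act in ACTIONS:
        if act["pre"] <= state:
            sub = plan(state | act["add"], goal, depth - 1)
            if sub is not None:
                steps, cost = sub
                total = cost + act["cost"]
                if best is None or total < best[1]:
                    best = ([act["name"]] + steps, total)
    return best

print(plan({"human_present"}, {"window_open"}))  # speech act wins when a human is there
print(plan(set(), {"window_open"}))              # otherwise the motor plan is chosen

Note how the cheapest plan switches between a speech act and a motor sequence purely as a function of the state, which is exactly the kind of trade-off that separate dialogue and motion planners cannot make.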
H. Multi-level learning
Yet another challenge towards fluid verbal and non-verbal human-robot communication is concerned with learning [171]. But when could learning take place, and what could and should be learnt? Let us start by examining the "when". Data-driven learning can happen at various stages of the lifetime of a system: it could take place a) initially and offline, at design time; or b) during special learning sessions, where specific aspects and parameters of the system are renewed; or c) during the normal operation of the system, in a human-directed manner; or, ideally, d) through robot-initiated active learning during normal operation. Most current systems that exhibit learning actually involve offline learning, i.e. case a) from above. No systems in the literature have exhibited non-trivial online, real-world continuous learning of communication abilities.
The second aspect, beyond the "when", is the "what" of learning. What could ideally, what could practically, and what should be learnt, instead of pre-coded, when it comes to human-robot communication? For example, when it comes to natural-language communication, multiple layers exist: the phonological, the morphological, the syntactic, the semantic, the pragmatic, the dialogic. And if one adds the complexity of having to address the symbol grounding problem, a robot needs to have models of grounded meaning, too, in a certain target space, for example in a sensory-motor or a teleological target space. This was already discussed in the previous sections on normative vs. empirical meaning and on symbol grounding. Furthermore, such models might need to be adjustable on the fly, as discussed in the section on online negotiation of meaning. Also, many different aspects of non-verbal communication, from facial expressions to gestures to turn-taking, could ideally be learnable in real operation, even more so for the future case of robots needing to adapt to cultural and individual variations in non-verbal communication. Regarding the motor aspects of such non-verbal cues, existing methods in imitation and demonstration learning [28] have been, and could further be, readily adapted; see for example the imitation learning of human facial expressions for the Leonardo robot [172].
Finally, another important caveat needs to be spelled out at this point. Real-world learning and real-world data collection towards communicative behavior learning for robots, depending on the data set size required, might require many hours of uninterrupted daily operation by numerous robots: a requirement which is quite unrealistic for today's systems. Therefore, other avenues need to be sought towards acquiring such data sets, and crowdsourcing through specially designed online games offers a realistic potential solution, as mentioned in the previous paragraph on the real-world acquisition of large-scale models of grounding. And of course, the learning content of such systems can move beyond grounded meaning models, to a wider range of the "what" that could potentially be learnable. A relevant example from a non-embodied setting comes from [173], where a chatterbot acquired interaction capabilities through massive observation of, and interaction with, humans in chat rooms. Of course, there do exist inherent limitations in such online systems, even for the case of robot-tailored online games such as [95]; for example, the non-physicality of the interaction presents specific obstacles and biases. Being able to extend this promising avenue towards wider massive data-driven models, and to demonstrate massive transfer of learning from the online systems to real-world physical robots, is thus an important research avenue for the future.
I. Utilization of online resources and services
Yet another interesting avenue towards enhanced human-robot communication has opened up recently: as more and more robots nowadays can be constantly connected to the internet, not all data and programs that a robot uses need to be onboard its hardware. Therefore, a robot could potentially utilize online information as well as online services in order to enhance its communication abilities. Thus, the intelligence of the robot is partially offloaded to the internet; and potentially, thousands of programs and/or humans could be providing part of its intelligence, even in real time. For example, going much beyond traditional cloud robotics [174], in the human-robot cloud proposal [175], one could construct on-demand and on-the-fly distributed robots with human and machine sensing, actuation, and processing components.
Beyond these highly promising glimpses of a possible future, there exist a number of implemented systems that utilize information and/or services from the internet. A prime example is Facebots: physical robots that utilize and publish information on Facebook towards enhancing long-term human-robot interaction, described in [54] [55]. Facebots create shared memories and shared friends with both their physical as well as their online interaction partners, and utilize this information towards creating dialogues that enable the creation of a longer-lasting relationship between the robot and its human partners, thus reversing the quick withdrawal of the novelty effects of long-term HRI reported in [176]. Also, as reported in [177], the multilingual conversational robot Ibn Sina [39] has made use of online Google translate services, as well as Wikipedia information, for its dialogues. Furthermore, one could readily utilize online high-quality speech recognition and text-to-speech services for human-robot communication, such as [Sonic Cloud online services], in order not to sacrifice onboard computational resources.
Also, quite importantly, there exists the European project RoboEarth [178], which is described as "a World Wide Web for robots: a giant network and database repository where robots can share information and learn from each other about their behavior and their environment". Bringing a new meaning to the phrase "experience is the best teacher", the goal of RoboEarth is to allow robotic systems to benefit from the experience of other robots, paving the way for rapid advances in machine cognition and behaviour, and ultimately for more subtle and sophisticated human-machine interaction. Rapyuta [179], the cloud engine of RoboEarth, claims to make immense computational power available to robots connected to it. Of course, beyond what has been utilized so far, there are many other possible sources of information and/or services on the internet to be exploited, and thus much more remains to be done in the near future in this direction.
J. Miscellaneous abilities
Beyond the nine desiderata examined so far, there exist a number of other abilities that are required towards fluid and general human-robot communication. These have to do with dealing with multiple conversational partners in a discussion, with support for multilingual capabilities, and with generating and recognizing natural language across multiple modalities: for example not only acoustic, but also in written form. In more detail:
1) Multiple conversational partners: Regarding conversational turn-taking, in the words of Sacks [180], "the organization of taking turns to talk is fundamental to conversation, as well as to other speech-exchange systems"; this readily carries over to human-robot conversations, and becomes especially important in the case of dialogues with multiple conversation partners. Recognition of overlapping speech is also quite important towards turn-taking [181]. Regarding turn-taking in robots, a computational strategy for robots participating in group conversation is presented in [182], and the very important role of gaze cues in turn-taking and participant role assignment in human-robot conversations is examined in [183]. In [184], an experimental study using the robot Simon is reported, aiming to show that the implementation of certain turn-taking cues can make interaction with a robot easier and more efficient for humans. Head movements are also very important in turn-taking; their role in keeping engagement in an interaction is explored in [185].
Yet another requirement for fluid multi-partner conversations is sound-source localization and speaker identification. Sound-source localization is usually accomplished using microphone arrays, such as in the robotic system of [186]. An approach utilizing scattering theory for sound-source localization in robots is described in [187], and approaches using beamforming for multiple moving sources are presented in [188] and [189]. Finally, HARK, an open-source robot audition system supporting three simultaneous speakers, is presented in [190]. Speaker identification is an old problem; classic approaches utilize Gaussian mixture models, such as [191] and [192] (see the sketch below). Robotic systems able to identify their speaker's identity include [193], [52], as well as the well-cited [194]. Also, an important idea towards effective signal separation between multiple speaker sources, in order to aid recognition, is to utilize both visual and auditory information towards that goal; classic examples of such approaches include [195] and [196].
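A minimal sketch of the classic GMM approach to speaker identification follows: one Gaussian mixture per enrolled speaker, scored by average log-likelihood on the test utterance's feature frames. Random vectors stand in for real MFCC features, and the mixture size is an illustrative assumption.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
enrollment = {                      # hypothetical per-speaker training frames
    "alice": rng.normal(0.0, 1.0, (500, 13)),
    "bob":   rng.normal(1.5, 1.0, (500, 13)),
}

models = {}
for speaker, frames in enrollment.items():
    gmm = GaussianMixture(n_components=4, covariance_type="diag", random_state=0)
    models[speaker] = gmm.fit(frames)

def identify(frames):
    # Pick the speaker whose GMM gives the highest average log-likelihood.
    return max(models, key=lambda s: models[s].score(frames))

test_utterance = rng.normal(1.5, 1.0, (100, 13))   # frames resembling "bob"
print(identify(test_utterance))                     # -> "bob"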
2) Multilingual capabilities and multimodal natural language: Yet another desirable ability for human-robot communication is multilinguality. Multilingual robots could not only communicate with a wider range of people, especially in multicultural societies and settings such as museums, but could, very importantly, also act as translators and mediators. Although there has been considerable progress towards non-embodied multilingual dialogue systems [197], and multilingual virtual avatars do exist [198] [199], the only implemented real-world multilingual physical android robot reported in the literature so far is [177].
Finally, let us move on to examining multiple modalities for the generation and recognition of natural language. Apart from a wealth of existing research on the automated production and recognition of sign language for the deaf (ASL) [200] [201] [202], systems directly adaptable to robots also exist [203]. One could also investigate the intersection between human writing and robotics. Although a wealth of approaches exists for the problems of optical character recognition and handwriting recognition [204] [205], even for languages such as Arabic [206], the only robotic system that has demonstrated limited OCR capabilities is [177]. Last but not least, another modality available for natural language communication for robots is internet chat. The only reported system so far that can perform dialogues both physically as well as through Facebook chat is [54] [55].
As a large part of human knowledge and information, as well as real-world communication, takes place either through writing or through such electronic channels, inevitably more and more systems in the future will have corresponding abilities. Thus, robots will be able to integrate more fluidly within human societies and environments, and ideally will be enabled to utilize the services offered within such networks for humans. Most importantly, robots might also one day become able to help maintain and improve the physical human-robot social networks they reside within, towards the benefit of the common good of all, as advocated in [207].
IV. DISCUSSION
From our detailed examination of the ten desiderata, what follows first is that, although we have moved beyond the canned-commands-only, canned-responses state of affairs of the nineties, we still seem to be far from our goal of fluid and natural verbal and non-verbal communication between humans and robots. But what is missing?
Many promising future directions were mentioned in the preceding sections. Beyond the clearly open avenues for projects in a number of areas, such as the composition of grounded semantics, online negotiation of meaning, affective interaction and closed-loop affective dialogue, mixed speech-motor planning, massive acquisition of data-driven models for human-robot communication through crowdsourced online games, and real-time exploitation of online information and services for enhanced human-robot communication, many more open areas exist.
What we speculate might really make a difference, though, is the availability of massive real-world data, in order to drive further data-driven models. And in order to reach that state, a number of robots need to start getting deployed, even if in partially autonomous, partially remote-human-operated mode, in real-world interactive application settings with round-the-clock operation: be it as shopping mall assistants, receptionists, museum robots, or companions, the application domains that will bring human-robot communication out to the world in more massive proportions remain yet to be discovered. However, given recent developments, this does not seem to be so far away anymore; and thus, in the coming decades, the day might well come when interactive robots will start being part of our everyday lives, in seamless harmonious symbiosis, hopefully helping to create a better and exciting future.
V. CONCLUSIONS
An overview of research in human-robot interactive communication was presented, covering verbal as well as non-verbal aspects. Following a historical introduction, reaching from roots in antiquity to well into the nineties, and a motivation towards fluid human-robot communication, ten desiderata were proposed, which provided an organizational axis both for recent and for future research on human-robot communication. Then, the ten desiderata were explained and relevant research was examined in detail, culminating in a unifying discussion. In conclusion, although almost twenty-five years of research in human-robot interactive communication exist, and significant progress has been achieved on many fronts, many sub-problems towards fluid verbal and non-verbal human-robot communication remain unsolved, presenting highly promising and exciting avenues for research in the near future.
[1] L. Ballard, "Robotics' founding father George C. Devol: serial entrepreneur and inventor," Robot-Congers, no. 31, p. 58, 2011.
[2] G. C. Devol, "Encoding apparatus," U.S. Patent 4,427,970, Jan. 24, 1984.
[3] D.-L. Gera, Ancient Greek Ideas on Speech, Language, and Civilization. Oxford University Press, 2003.
[4] R. Lattimore and R. Martin, The Iliad of Homer. University of Chicago Press, 2011.
[5] C. Huffman, Archytas of Tarentum: Pythagorean, Philosopher and Mathematician King. Cambridge University Press, 2005.
[6] J. Needham, Science and Civilisation in China: Volume 2. Cambridge University Press, 1959.
[7] N. Sharkey, "The programmable robot of ancient Greece," New Scientist, pp. 32-35, Jul. 2007.
[8] M. E. Rosheim, Robot Evolution: The Development of Anthrobotics, 1st ed. New York, NY, USA: John Wiley & Sons, Inc., 1994.
[9] N. Hockstein, C. Gourin, R. Faust, and D. Terris, "A history of robots: from science fiction to surgical robotics," Journal of Robotic Surgery, vol. 1, no. 2, pp. 113-118, 2007.
[10] D. H. Klatt, "Review of text-to-speech conversion for English," Journal of the Acoustical Society of America, vol. 82, no. 3, pp. 737-793, 1987.
[11] G. Antoniol, R. Cattoni, M. Cettolo, and M. Federico, "Robust speech understanding for robot telecontrol," in Proceedings of the 6th International Conference on Advanced Robotics, 1993, pp. 205-209.
[12] W. Burgard, A. B. Cremers, D. Fox, D. Hähnel, G. Lakemeyer, D. Schulz, W. Steiner, and S. Thrun, "The interactive museum tour-guide robot," in Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98), 1998.
[13] L. Versweyveld, "Voice-controlled surgical robot ready to assist in minimally invasive heart surgery," Virtual Medicine World Monthly, Mar. 1998.
[14] I. Horswill, "Polly: A vision-based artificial agent," in Proceedings of the Eleventh National Conference on Artificial Intelligence (AAAI-93). AAAI Press, 1993, pp. 824-829.
[15] I. Horswill, "The design of the Polly system," The Institute for the Learning Sciences, Northwestern University, Tech. Rep., Sep. 1996.
[16] M. Torrance, "Natural communication with mobile robots," Master's thesis, MIT Department of Electrical Engineering and Computer Science, Jan. 1994.
[17] G. Antoniol, B. Caprile, A. Cimatti, and R. Fiutem, "Experiencing real-life interactions with the experimental platform of MAIA," in Proceedings of the 1st European Workshop on Human Comfort and Security, 1994.
[18] I. Androutsopoulos, "A principled framework for constructing natural language interfaces to temporal databases," Ph.D. dissertation, Department of Artificial Intelligence, University of Edinburgh, 1996.
[19] H. Asoh, T. Matsui, J. Fry, F. Asano, and S. Hayamizu, "A spoken dialog system for a mobile office robot," in Proceedings of the European Conference on Speech Communication and Technology (EUROSPEECH). ISCA, 1999.
[20] J. Fry, H. Asoh, and T. Matsui, "Natural dialogue with the Jijo-2 office robot," in Intelligent Robots and Systems, 1998. Proceedings., 1998 IEEE/RSJ International Conference on, vol. 2, 1998, pp. 1278-1283.
[21] T. Matsui, H. Asoh, J. Fry, Y. Motomura, F. Asano, T. Kurita, I. Hara, and N. Otsu, "Integrated natural spoken dialogue system of Jijo-2 mobile robot for office services," in Proceedings of the Sixteenth National Conference on Artificial Intelligence and the Eleventh Innovative Applications of Artificial Intelligence Conference (AAAI '99/IAAI '99). Menlo Park, CA, USA: American Association for Artificial Intelligence, 1999, pp. 621-627.
[22] C. Crangle and P. Suppes, Language and Learning for Robots, ser. CSLI Lecture Notes. Center for the Study of Language and Information, 1994. [Online]. Available: http://books.google.gr/books?id=MlMQ11Pqz10C
[23] J. R. Searle, Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University Press, 1969.
[24] K. Vogeley and G. Bente, ""Artificial humans": Psychology and neuroscience perspectives on embodiment and nonverbal communication," Neural Networks, vol. 23, no. 8, pp. 1077-1090, 2010.
[25] S. Harnad, "The symbol grounding problem," Physica D: Nonlinear Phenomena, vol. 42, no. 1, pp. 335-346, 1990.
[26] G. Schreiber, A. Stemmer, and R. Bischoff, "The fast research interface for the KUKA lightweight robot," in IEEE Conference on Robotics and Automation (ICRA), 2010.
[27] S. Wrede, C. Emmerich, R. Grünberg, A. Nordmann, A. Swadzba, and J. Steil, "A user study on kinesthetic teaching of redundant robots in task and configuration space," Journal of Human-Robot Interaction, vol. 2, no. 1, pp. 56-81, 2013.
[28] B. D. Argall, S. Chernova, M. Veloso, and B. Browning, "A survey of robot learning from demonstration," Robotics and Autonomous Systems, vol. 57, no. 5, pp. 469-483, 2009.
[29] C. L. Nehaniv and K. Dautenhahn, Imitation and Social Learning in Robots, Humans and Animals: Behavioural, Social and Communicative Dimensions. Cambridge University Press, 2007.
[30] N. Mavridis and D. Roy, "Grounded situation models for robots: Where words and percepts meet," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006, pp. 4690-4697.
[31] T. van der Zant and T. Wisspeintner, "RoboCup@Home: Creating and benchmarking tomorrow's service robot applications," Robotic Soccer, pp. 521-528, 2007.
[32] M. E. Foster, T. By, M. Rickert, and A. Knoll, "Human-robot dialogue for joint construction tasks," in Proceedings of the 8th International Conference on Multimodal Interfaces (ICMI '06). New York, NY, USA: ACM, 2006, pp. 68-71.
[33] M. Giuliani and A. Knoll, "Evaluating supportive and instructive robot roles in human-robot interaction," in Social Robotics. Springer, 2011, pp. 193-203.
[34] K. Wada and T. Shibata, "Living with seal robots: its sociopsychological and physiological influences on the elderly at a care house," Robotics, IEEE Transactions on, vol. 23, no. 5, pp. 972-980, 2007.
[35] K. Kamei, K. Shinozawa, T. Ikeda, A. Utsumi, T. Miyashita, and N. Hagita, "Recommendation from robots in a real-world retail shop," in International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction. ACM, 2010, p. 19.
[36] M. Makatchev, I. Fanaswala, A. Abdulsalam, B. Browning, W. Ghazzawi, M. Sakr, and R. Simmons, "Dialogue patterns of an Arabic robot receptionist," in Human-Robot Interaction (HRI), 2010 5th ACM/IEEE International Conference on, 2010, pp. 167-168.
[37] S. Tellex and D. Roy, "Spatial routines for a simulated speech-controlled vehicle," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. ACM, 2006, pp. 156-163.
[38] K. Dautenhahn, M. Walters, S. Woods, K. L. Koay, C. L. Nehaniv, A. Sisbot, R. Alami, and T. Siméon, "How may I serve you?: a robot companion approaching a seated person in a helping context," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. ACM, 2006, pp. 172-179.
[39] N. Mavridis and D. Hanson, "The IbnSina center: An augmented reality theater with intelligent robotic and virtual characters," in Robot and Human Interactive Communication, 2009. RO-MAN 2009. The 18th IEEE International Symposium on. IEEE, 2009, pp. 681-686.
[40] K. Petersen, J. Solis, and A. Takanishi, "Musical-based interaction system for the Waseda flutist robot," Autonomous Robots, vol. 28, no. 4, pp. 471-488, 2010.
[41] K. Kosuge, T. Hayashi, Y. Hirata, and R. Tobiyama, "Dance partner robot: Ms DanceR," in Intelligent Robots and Systems, 2003 (IROS 2003). Proceedings. 2003 IEEE/RSJ International Conference on, vol. 4. IEEE, 2003, pp. 3459-3464.
[42] V. A. Kulyukin, "On natural language dialogue with assistive robots," in Proceedings of the 1st ACM SIGCHI/SIGART Conference on Human-Robot Interaction. ACM, 2006, pp. 164-171.
[43] J. Dzifcak, M. Scheutz, C. Baral, and P. Schermerhorn, "What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution," in Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA '09), Kobe, Japan, May 2009.
[44] J. Austin, How to Do Things with Words. Oxford, 1962.
[45] J. Searle, "A taxonomy of illocutionary acts," in Language, Mind and Knowledge, K. Gunderson, Ed. University of Minnesota Press, 1975, pp. 344-369.
[46] J. F. Allen, D. K. Byron, M. Dzikovska, G. Ferguson, L. Galescu, and A. Stent, "Towards conversational human-computer interaction," AI Magazine, vol. 22, pp. 27-37, 2001.
[47] N. Mavridis, "Grounded situation models for situated conversational assistants," Ph.D. dissertation, Massachusetts Institute of Technology, 2007.
[48] P. N. Johnson-Laird, Mental Models: Towards a Cognitive Science of Language, Inference, and Consciousness. Harvard University Press, 1983, vol. 6.
[49] R. A. Zwaan and G. A. Radvansky, "Situation models in language comprehension and memory," Psychological Bulletin, vol. 123, no. 2, p. 162, 1998.
[50] H. P. Grice, "Logic and conversation," 1975, pp. 41-58.
[51] S. Wilske and G.-J. Kruijff, "Service robots dealing with indirect speech acts," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006, pp. 4698-4703.
[52] F. Krsmanovic, C. Spencer, D. Jurafsky, and A. Y. Ng, "Have we met? MDP based speaker ID for robot dialogue," in INTERSPEECH, 2006.
[53] C. T. Ishi, H. Ishiguro, and N. Hagita, "Analysis of prosodic and linguistic cues of phrase finals for turn-taking and dialog acts," in INTERSPEECH, 2006.
[54] N. Mavridis, M. Petychakis, A. Tsamakos, P. Toulis, S. Emami, W. Kazmi, C. Datta, C. BenAbdelkader, and A. Tanoto, "Facebots: Steps towards enhanced long-term human-robot interaction by utilizing and publishing online social information," Paladyn, vol. 1, no. 3, pp. 169-178, 2010.
[55] N. Mavridis, C. Datta, S. Emami, A. Tanoto, C. BenAbdelkader, and T. Rabie, "Facebots: robots utilizing and publishing social information in Facebook," in Human-Robot Interaction (HRI), 2009 4th ACM/IEEE International Conference on. IEEE, 2009, pp. 273-274.
[56] B. Wrede, S. Buschkaemper, C. Muhl, and K. J. Rohlfing, "Analyses of feedback in HRI," in How People Talk to Computers, Robots, and Other Artificial Communication Partners, p. 38, 2006.
[57] R. Stiefelhagen, H. K. Ekenel, C. Fugen, P. Gieselmann, H. Holzapfel, F. Kraft, K. Nickel, M. Voit, and A. Waibel, "Enabling multimodal human-robot interaction for the Karlsruhe humanoid robot," Robotics, IEEE Transactions on, vol. 23, no. 5, pp. 840-851, 2007.
[58] D. Ertl, A. Green, H. Hüttenrauch, and F. Lerasle, "Improving human-robot communication with mixed-initiative and context-awareness," workshop co-located with RO-MAN 2009.
[59] M. Ralph and M. A. Moussa, "Toward a natural language interface for transferring grasping skills to robots," Robotics, IEEE Transactions on, vol. 24, no. 2, pp. 468-475, 2008.
[60] J. Weizenbaum, "ELIZA: a computer program for the study of natural language communication between man and machine," Communications of the ACM, vol. 9, no. 1, pp. 36-45, 1966.
[61] M. L. Mauldin, "Chatterbots, TinyMUDs, and the Turing test: Entering the Loebner prize competition," in AAAI, vol. 94, 1994, pp. 16-21.
[62] D. Roy, "A computational model of word learning from multimodal sensory input," in Proceedings of the International Conference of Cognitive Modeling (ICCM2000), Groningen, Netherlands, 2000.
[63] R. Baillargeon, E. S. Spelke, and S. Wasserman, "Object permanence in five-month-old infants," Cognition, vol. 20, no. 3, pp. 191-208, 1985.
[64] D. Roy, K.-Y. Hsiao, and N. Mavridis, "Mental imagery for a conversational robot," Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on, vol. 34, no. 3, pp. 1374-1383, 2004.
[65] E. Tulving, Elements of Episodic Memory. Oxford: Clarendon Press, 1983.
[66] N. Mavridis and M. Petychakis, "Human-like memory systems for interactive robots: Desiderata and two case studies utilizing grounded situation models and online social networking."
[67] D. Gentner and K. D. Forbus, "Computational models of analogy," Wiley Interdisciplinary Reviews: Cognitive Science, vol. 2, no. 3, pp. 266-276, 2011.
[68] C. S. Peirce, "Logic as semiotic: The theory of signs," in The Philosophical Writings of Peirce, pp. 98-119, 1955.
[69] C. S. Peirce, Collected Papers of Charles Sanders Peirce. Harvard University Press, 1974, vol. 3.
[70] T. Spexard, S. Li, B. Wrede, J. Fritsch, G. Sagerer, O. Booij, Z. Zivkovic, B. Terwijn, and B. Krose, "BIRON, where are you? Enabling a robot to learn new places in a real home environment by integrating spoken dialog and visual localization," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006, pp. 934-940.
[71] L. Steels, "Evolving grounded communication for robots," Trends in Cognitive Sciences, vol. 7, no. 7, pp. 308-312, 2003.
[72] S. D. Larson, Intrinsic Representation: Bootstrapping Symbols from Experience. Springer, 2004.
[73] P. J. Gorniak, "The affordance-based concept," Ph.D. dissertation, Massachusetts Institute of Technology, 2005.
[74] C. Yu, L. B. Smith, and A. F. Pereira, "Grounding word learning in multimodal sensorimotor interaction," in Proceedings of the 30th Annual Conference of the Cognitive Science Society, 2008, pp. 1017-1022.
[75] T. Regier and L. A. Carlson, "Grounding spatial language in perception: an empirical and computational investigation," Journal of Experimental Psychology: General, vol. 130, no. 2, p. 273, 2001.
[76] D. K. Roy, "Learning visually grounded words and syntax for a scene description task," Computer Speech & Language, vol. 16, no. 3, pp. 353-385, 2002.
Saying, seeing and acting: The psychological semantics of spatial prepositions. K R Coventry, S C Garrod, Psychology PressK. R. Coventry and S. C. Garrod, Saying, seeing and acting: The psychological semantics of spatial prepositions. Psychology Press, 2004.
Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic. J M Siskind, arXiv:1106.0256arXiv preprintJ. M. Siskind, "Grounding the lexical semantics of verbs in visual perception using force dynamics and event logic," arXiv preprint arXiv:1106.0256, 2011.
Spatial semantics. J Zlatev, Handbook of Cognitive Linguistics. J. Zlatev, "Spatial semantics," Handbook of Cognitive Linguistics, pp. 318-350, 2007.
Spatial language for human-robot dialogs. M Skubic, D Perzanowski, S Blisard, A Schultz, W Adams, M Bugajska, D Brock, Systems, Man, and Cybernetics, Part C: Applications and Reviews. 34M. Skubic, D. Perzanowski, S. Blisard, A. Schultz, W. Adams, M. Bugajska, and D. Brock, "Spatial language for human-robot di- alogs," Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, vol. 34, no. 2, pp. 154-167, 2004.
Conceptual spatial representations for indoor mobile robots. H Zender, O Mozos, P Jensfelt, G.-J Kruijff, W Burgard, Robotics and Autonomous Systems. 566H. Zender, O. Martínez Mozos, P. Jensfelt, G.-J. Kruijff, and W. Bur- gard, "Conceptual spatial representations for indoor mobile robots," Robotics and Autonomous Systems, vol. 56, no. 6, pp. 493-502, 2008.
Understanding natural language commands for robotic navigation and mobile manipulation. S Tellex, T Kollar, S Dickerson, M R Walter, A G Banerjee, S J Teller, N Roy, AAAI. S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. J. Teller, and N. Roy, "Understanding natural language commands for robotic navigation and mobile manipulation." in AAAI, 2011.
Approaching the symbol grounding problem with probabilistic graphical models. S Tellex, T Kollar, S Dickerson, M R Walter, A G Banerjee, S Teller, N Roy, AI magazine. 324S. Tellex, T. Kollar, S. Dickerson, M. R. Walter, A. G. Banerjee, S. Teller, and N. Roy, "Approaching the symbol grounding problem with probabilistic graphical models," AI magazine, vol. 32, no. 4, pp. 64-76, 2011.
Learning perceptually grounded word meanings from unaligned parallel data. S Tellex, P Thaker, J Joseph, N Roy, Machine Learning. S. Tellex, P. Thaker, J. Joseph, and N. Roy, "Learning perceptually grounded word meanings from unaligned parallel data," Machine Learning, pp. 1-17, 2013.
Grounded pronoun learning and pronoun reversal. K Gold, B Scassellati, Proceedings of the 5th International Conference on Development and Learning. the 5th International Conference on Development and LearningK. Gold and B. Scassellati, "Grounded pronoun learning and pronoun reversal," in Proceedings of the 5th International Conference on Development and Learning, 2006.
The symbol grounding problem has been solved. so whats next. L Steels, Symbols and embodiment: Debates on meaning and cognitionL. Steels, "The symbol grounding problem has been solved. so whats next," Symbols and embodiment: Debates on meaning and cognition, pp. 223-244, 2008.
Semiotic dynamics for embodied agents. Intelligent Systems, IEEE. 213--, "Semiotic dynamics for embodied agents," Intelligent Systems, IEEE, vol. 21, no. 3, pp. 32-38, 2006.
Symbol grounding for semantic image interpretation: from image data to semantics. C Hudelot, N Maillot, M Thonnat, Computer Vision Workshops, 2005. ICCVW'05. Tenth IEEE International Conference on. IEEEC. Hudelot, N. Maillot, and M. Thonnat, "Symbol grounding for semantic image interpretation: from image data to semantics," in Com- puter Vision Workshops, 2005. ICCVW'05. Tenth IEEE International Conference on. IEEE, 2005, pp. 1875-1875.
Symbol grounding for the semantic web. A M Cregan, The Semantic Web: Research and Applications. SpringerA. M. Cregan, "Symbol grounding for the semantic web," in The Semantic Web: Research and Applications. Springer, 2007, pp. 429- 442.
The poeticon enacted scenario corpusa tool for human and computational experiments on action understanding. C Wallraven, M Schultze, B Mohler, A Vatakis, K Pastra, Automatic Face & Gesture Recognition and Workshops. IEEE2011 IEEE International Conference onC. Wallraven, M. Schultze, B. Mohler, A. Vatakis, and K. Pastra, "The poeticon enacted scenario corpusa tool for human and computational experiments on action understanding," in Automatic Face & Gesture Recognition and Workshops (FG 2011), 2011 IEEE International Conference on. IEEE, 2011, pp. 484-491.
The poeticon corpus: Capturing language use and sensorimotor experience in everyday interaction. K Pastra, C Wallraven, M Schultze, A Vataki, K Kaulard, LREC. Citeseer. K. Pastra, C. Wallraven, M. Schultze, A. Vataki, and K. Kaulard, "The poeticon corpus: Capturing language use and sensorimotor experience in everyday interaction." in LREC. Citeseer, 2010.
Conceptual Spaces: The Geometry of Throught. P Gärdenfors, MIT pressP. Gärdenfors, Conceptual Spaces: The Geometry of Throught. MIT press, 2004.
The restaurant game: Learning social behavior and language from thousands of players online. J Orkin, D Roy, Journal of Game Development. 31J. Orkin and D. Roy, "The restaurant game: Learning social behavior and language from thousands of players online," Journal of Game Development, vol. 3, no. 1, pp. 39-60, 2007.
Crowdsourcing hri through online multiplayer games. S Chernova, J Orkin, C Breazeal, Proc. Dialog with Robots: AAAI fall symposium. Dialog with Robots: AAAI fall symposiumS. Chernova, J. Orkin, and C. Breazeal, "Crowdsourcing hri through online multiplayer games," in Proc. Dialog with Robots: AAAI fall symposium, 2010.
Leveraging online virtual agents to crowdsource human-robot interaction. N Depalma, S Chernova, C Breazeal, Proceedings of CHI Workshop on Crowdsourcing and Human Computation. CHI Workshop on Crowdsourcing and Human ComputationN. DePalma, S. Chernova, and C. Breazeal, "Leveraging online virtual agents to crowdsource human-robot interaction," in Proceedings of CHI Workshop on Crowdsourcing and Human Computation, 2011.
Peekaboom: a game for locating objects in images. L Von Ahn, R Liu, M Blum, Proceedings of the SIGCHI conference on Human Factors in computing systems. the SIGCHI conference on Human Factors in computing systemsACML. Von Ahn, R. Liu, and M. Blum, "Peekaboom: a game for locating objects in images," in Proceedings of the SIGCHI conference on Human Factors in computing systems. ACM, 2006, pp. 55-64.
The trilogy of mind: Cognition, affection, and conation. E R Hilgard, Journal of the History of the Behavioral Sciences. 162E. R. Hilgard, "The trilogy of mind: Cognition, affection, and cona- tion," Journal of the History of the Behavioral Sciences, vol. 16, no. 2, pp. 107-117, 1980.
Affective computing: challenges. R W Picard, International Journal of Human-Computer Studies. 591R. W. Picard, "Affective computing: challenges," International Journal of Human-Computer Studies, vol. 59, no. 1, pp. 55-64, 2003.
Affective learninga manifesto. R Picard, S Papert, W Bender, B Blumberg, C Breazeal, D Cavallo, T Machover, M Resnick, D Roy, C Strohecker, BT Technology Journal. 224R. Picard, S. Papert, W. Bender, B. Blumberg, C. Breazeal, D. Cavallo, T. Machover, M. Resnick, D. Roy, and C. Strohecker, "Affective learninga manifesto," BT Technology Journal, vol. 22, no. 4, pp. 253- 269, 2004.
Should persuasion be affective or cognitive? the moderating effects of need for affect and need for cognition. G Haddock, G R Maio, K Arnold, T Huskinson, Personality and Social Psychology Bulletin. 346G. Haddock, G. R. Maio, K. Arnold, and T. Huskinson, "Should persuasion be affective or cognitive? the moderating effects of need for affect and need for cognition," Personality and Social Psychology Bulletin, vol. 34, no. 6, pp. 769-778, 2008.
Emotion and sociable humanoid robots. C Breazeal, International Journal of Human-Computer Studies. 591C. Breazeal, "Emotion and sociable humanoid robots," International Journal of Human-Computer Studies, vol. 59, no. 1, pp. 119-155, 2003.
Embodied conversational interface agents. J Cassell, Communications of the ACM. 434J. Cassell, "Embodied conversational interface agents," Communica- tions of the ACM, vol. 43, no. 4, pp. 70-78, 2000.
Animated pedagogical agents: Face-to-face interaction in interactive learning environments. W L Johnson, J W Rickel, J C Lester, International Journal of Artificial intelligence in education. 111W. L. Johnson, J. W. Rickel, and J. C. Lester, "Animated pedagogical agents: Face-to-face interaction in interactive learning environments," International Journal of Artificial intelligence in education, vol. 11, no. 1, pp. 47-78, 2000.
From greta's mind to her face: modelling the dynamics of affective states in a conversational embodied agent. F D Rosis, C Pelachaud, I Poggi, V Carofiglio, B D Carolis, International Journal of Human-Computer Studies. 591F. d. Rosis, C. Pelachaud, I. Poggi, V. Carofiglio, and B. D. Carolis, "From greta's mind to her face: modelling the dynamics of affective states in a conversational embodied agent," International Journal of Human-Computer Studies, vol. 59, no. 1, pp. 81-118, 2003.
Toward teaching a robot infantusing emotive communication acts. C Breazeal, J Velásquez, Proceedings of the 1998 Simulated Adaptive Behavior Workshop on Socially Situated Intelligence. the 1998 Simulated Adaptive Behavior Workshop on Socially Situated IntelligenceC. Breazeal and J. Velásquez, "Toward teaching a robot infantusing emotive communication acts," in Proceedings of the 1998 Simulated Adaptive Behavior Workshop on Socially Situated Intelligence, 1998, pp. 25-40.
you stupid tin box"-children interacting with the aibo robot: A cross-linguistic emotional speech corpus. A Batliner, C Hacker, S Steidl, E Nöth, S D'arcy, M J Russell, M Wong, LREC. A. Batliner, C. Hacker, S. Steidl, E. Nöth, S. D'Arcy, M. J. Russell, and M. Wong, "" you stupid tin box"-children interacting with the aibo robot: A cross-linguistic emotional speech corpus." in LREC, 2004.
Recognition of emotional states in spoken dialogue with a robot. K Komatani, R Ito, T Kawahara, H G Okuno, Innovations in Applied Artificial Intelligence. SpringerK. Komatani, R. Ito, T. Kawahara, and H. G. Okuno, "Recognition of emotional states in spoken dialogue with a robot," in Innovations in Applied Artificial Intelligence. Springer, 2004, pp. 413-423.
Towards an empathizing and adaptive storyteller system. B.-C Bae, A Brunete, U Malik, E Dimara, J Jermsurawong, N Mavridis, Eighth Artificial Intelligence and Interactive Digital Entertainment Conference. B.-C. Bae, A. Brunete, U. Malik, E. Dimara, J. Jermsurawong, and N. Mavridis, "Towards an empathizing and adaptive storyteller system," in Eighth Artificial Intelligence and Interactive Digital Entertainment Conference, 2012.
Face detection with the sophisticated high-speed object recognition engine (shore)," in Microelectronic Systems. T Ruf, A Ernst, C Küblbeck, SpringerT. Ruf, A. Ernst, and C. Küblbeck, "Face detection with the sophisti- cated high-speed object recognition engine (shore)," in Microelectronic Systems. Springer, 2011, pp. 243-252.
Real-time inference of complex mental states from facial expressions and head gestures," in Real-time vision for human-computer interaction. R El Kaliouby, P Robinson, SpringerR. El Kaliouby and P. Robinson, "Real-time inference of complex mental states from facial expressions and head gestures," in Real-time vision for human-computer interaction. Springer, 2005, pp. 181-200.
Facial expression recognition based on local binary patterns: A comprehensive study. C Shan, S Gong, P W Mcowan, Image and Vision Computing. 276C. Shan, S. Gong, and P. W. McOwan, "Facial expression recognition based on local binary patterns: A comprehensive study," Image and Vision Computing, vol. 27, no. 6, pp. 803-816, 2009.
Fully automatic facial action recognition in spontaneous behavior. M S Bartlett, G Littlewort, M Frank, C Lainscsek, I Fasel, J Movellan, Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on. IEEEM. S. Bartlett, G. Littlewort, M. Frank, C. Lainscsek, I. Fasel, and J. Movellan, "Fully automatic facial action recognition in spontaneous behavior," in Automatic Face and Gesture Recognition, 2006. FGR 2006. 7th International Conference on. IEEE, 2006, pp. 223-230.
Learning to make facial expressions. T Wu, N J Butko, P Ruvulo, M S Bartlett, J R Movellan, IEEE 8th International Conference on. IEEEDevelopment and LearningT. Wu, N. J. Butko, P. Ruvulo, M. S. Bartlett, and J. R. Movellan, "Learning to make facial expressions," in Development and Learning, 2009. ICDL 2009. IEEE 8th International Conference on. IEEE, 2009, pp. 1-6.
Robotic emotional expression generation based on mood transition and personality model. M.-J Han, C.-H Lin, K.-T Song, IEEE Transactions on. 434CyberneticsM.-J. Han, C.-H. Lin, and K.-T. Song, "Robotic emotional expression generation based on mood transition and personality model," Cyber- netics, IEEE Transactions on, vol. 43, no. 4, pp. 1290-1303, 2013.
Synthesizing expressions using facial feature point tracking: How emotion is conveyed. T Baltrušaitis, L D Riek, P Robinson, Proceedings of the 3rd international workshop on Affective interaction in natural environments. the 3rd international workshop on Affective interaction in natural environmentsACMT. Baltrušaitis, L. D. Riek, and P. Robinson, "Synthesizing expressions using facial feature point tracking: How emotion is conveyed," in Proceedings of the 3rd international workshop on Affective interaction in natural environments. ACM, 2010, pp. 27-32.
Dynamics of facial expression extracted automatically from video. G Littlewort, M S Bartlett, I Fasel, J Susskind, J Movellan, Image and Vision Computing. 246G. Littlewort, M. S. Bartlett, I. Fasel, J. Susskind, and J. Movellan, "Dynamics of facial expression extracted automatically from video," Image and Vision Computing, vol. 24, no. 6, pp. 615-625, 2006.
Expressive speech synthesis: Past, present, and possible futures. M Schröder, Affective information processing. SpringerM. Schröder, "Expressive speech synthesis: Past, present, and possible futures," in Affective information processing. Springer, 2009, pp. 111- 126.
Emotional speech recognition: Resources, features, and methods. D Ververidis, C Kotropoulos, Speech communication. 489D. Ververidis and C. Kotropoulos, "Emotional speech recognition: Resources, features, and methods," Speech communication, vol. 48, no. 9, pp. 1162-1181, 2006.
Towards expressive speech synthesis in english on a robotic platform. S Roehling, B Macdonald, C Watson, Proceedings of the Australasian International Conference on Speech Science and Technology. the Australasian International Conference on Speech Science and TechnologyS. Roehling, B. MacDonald, and C. Watson, "Towards expressive speech synthesis in english on a robotic platform," in Proceedings of the Australasian International Conference on Speech Science and Technology, 2006, pp. 130-135.
An emotional storyteller robot. A Chella, R E Barone, G Pilato, R Sorbello, AAAI Spring Symposium: Emotion, Personality, and Social Behavior. A. Chella, R. E. Barone, G. Pilato, and R. Sorbello, "An emotional storyteller robot." in AAAI Spring Symposium: Emotion, Personality, and Social Behavior, 2008, pp. 17-22.
Foundations and trends in information retrieval. B Pang, L Lee, 2Opinion mining and sentiment analysisB. Pang and L. Lee, "Opinion mining and sentiment analysis," Foun- dations and trends in information retrieval, vol. 2, no. 1-2, pp. 1-135, 2008.
Recognizing contextual polarity: An exploration of features for phrase-level sentiment analysis. T Wilson, J Wiebe, P Hoffmann, Computational linguistics. 353T. Wilson, J. Wiebe, and P. Hoffmann, "Recognizing contextual po- larity: An exploration of features for phrase-level sentiment analysis," Computational linguistics, vol. 35, no. 3, pp. 399-433, 2009.
Lexiconbased methods for sentiment analysis. M Taboada, J Brooke, M Tofiloski, K Voll, M Stede, Computational linguistics. 372M. Taboada, J. Brooke, M. Tofiloski, K. Voll, and M. Stede, "Lexicon- based methods for sentiment analysis," Computational linguistics, vol. 37, no. 2, pp. 267-307, 2011.
Measuring affect in the wild. R W Picard, Affective Computing and Intelligent Interaction. SpringerR. W. Picard, "Measuring affect in the wild," in Affective Computing and Intelligent Interaction. Springer, 2011, pp. 3-3.
Fundamentals of physiological computing. S H Fairclough, Interacting with computers. 211S. H. Fairclough, "Fundamentals of physiological computing," Inter- acting with computers, vol. 21, no. 1, pp. 133-145, 2009.
On the universality and cultural specificity of emotion recognition: a meta-analysis. H A Elfenbein, N Ambady, Psychological bulletin. 1282203H. A. Elfenbein and N. Ambady, "On the universality and cultural specificity of emotion recognition: a meta-analysis." Psychological bulletin, vol. 128, no. 2, p. 203, 2002.
Hand and mind: What gestures reveal about thought. D Mcneill, University of Chicago PressD. McNeill, Hand and mind: What gestures reveal about thought. University of Chicago Press, 1992.
About face, computergraphic synthesis and manipulation of facial imagery. P Weil, Massachusetts Institute of TechnologyPh.D. dissertationP. Weil, "About face, computergraphic synthesis and manipulation of facial imagery," Ph.D. dissertation, Massachusetts Institute of Technol- ogy, 1982.
Soft machine: a personable interface. J Lewis, P Purcell, Proc. of Graphics Interface. of Graphics InterfaceCiteseer84J. Lewis and P. Purcell, "Soft machine: a personable interface," in Proc. of Graphics Interface, vol. 84. Citeseer, 1984, pp. 223-226.
Automated lip-synch and speech synthesis for character animation. J P Lewis, F I Parke, ACM SIGCHI Bulletin. 17ACMSI.J. P. Lewis and F. I. Parke, "Automated lip-synch and speech synthesis for character animation," in ACM SIGCHI Bulletin, vol. 17, no. SI. ACM, 1987, pp. 143-147.
eigenlips for robust speech recognition. C Bregler, Y Konig, Acoustics, Speech, and Signal Processing. IEEE2669IEEE International Conference onC. Bregler and Y. Konig, "eigenlips for robust speech recognition," in Acoustics, Speech, and Signal Processing, 1994. ICASSP-94., 1994 IEEE International Conference on, vol. 2. IEEE, 1994, pp. II-669.
Hearing lips and seeing voices. H Mcgurk, J Macdonald, Nature. H. McGurk and J. MacDonald, "Hearing lips and seeing voices," Nature, pp. 746-748, 1976.
The handbook of multisensory processes. G A Calvert, C Spence, B E Stein, MIT pressG. A. Calvert, C. Spence, and B. E. Stein, The handbook of multisen- sory processes. MIT press, 2004.
The rubber hand illusion revisited: visuotactile integration and self-attribution. M Tsakiris, P Haggard, Journal of Experimental Psychology: Human Perception and Performance. 31180M. Tsakiris and P. Haggard, "The rubber hand illusion revisited: visuotactile integration and self-attribution." Journal of Experimental Psychology: Human Perception and Performance, vol. 31, no. 1, p. 80, 2005.
Communicative humanoids: a computational model of psychosocial dialogue skills. K R Thorisson, Massachusetts Institute of TechnologyPh.D. dissertationK. R. Thorisson, "Communicative humanoids: a computational model of psychosocial dialogue skills," Ph.D. dissertation, Massachusetts Institute of Technology, 1996.
The autonomous city explorer (ace) projectmobile robot navigation in highly populated urban environments. G Lidoris, F Rohrmuller, D Wollherr, M Buss, Robotics and Automation. ICRA'09G. Lidoris, F. Rohrmuller, D. Wollherr, and M. Buss, "The autonomous city explorer (ace) projectmobile robot navigation in highly populated urban environments," in Robotics and Automation, 2009. ICRA'09. IEEE International Conference on. IEEE, 2009, pp. 1416-1422.
Gestures: Their role in teaching and learning. W.-M Roth, Review of Educational Research. 713W.-M. Roth, "Gestures: Their role in teaching and learning," Review of Educational Research, vol. 71, no. 3, pp. 365-392, 2001.
Embodiment in conversational interfaces: Rea. J Cassell, T Bickmore, M Billinghurst, L Campbell, K Chang, H Vilhjálmsson, H Yan, Proceedings of the SIGCHI conference on Human factors in computing systems. the SIGCHI conference on Human factors in computing systemsACMJ. Cassell, T. Bickmore, M. Billinghurst, L. Campbell, K. Chang, H. Vilhjálmsson, and H. Yan, "Embodiment in conversational inter- faces: Rea," in Proceedings of the SIGCHI conference on Human factors in computing systems. ACM, 1999, pp. 520-527.
Beat: the behavior expression animation toolkit. J Cassell, H H Vilhjálmsson, T Bickmore, Proceedings of the 28th annual conference on Computer graphics and interactive techniques, ser. SIGGRAPH '01. the 28th annual conference on Computer graphics and interactive techniques, ser. SIGGRAPH '01New York, NY, USAACMJ. Cassell, H. H. Vilhjálmsson, and T. Bickmore, "Beat: the behavior expression animation toolkit," in Proceedings of the 28th annual conference on Computer graphics and interactive techniques, ser. SIGGRAPH '01. New York, NY, USA: ACM, 2001, pp. 477-486.
Patterns of synchronization of non-verbal cues and speech in ecas: Towards a more natural conversational agent," in Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. Theoretical and Practical Issues. N Rossini, SpringerN. Rossini, "Patterns of synchronization of non-verbal cues and speech in ecas: Towards a more natural conversational agent," in Toward Autonomous, Adaptive, and Context-Aware Multimodal Interfaces. The- oretical and Practical Issues. Springer, 2011, pp. 96-103.
Coordination of communication: Effects of shared visual context on collaborative work. S R Fussell, R E Kraut, J Siegel, Proceedings of the 2000 ACM conference on Computer supported cooperative work. the 2000 ACM conference on Computer supported cooperative workACMS. R. Fussell, R. E. Kraut, and J. Siegel, "Coordination of commu- nication: Effects of shared visual context on collaborative work," in Proceedings of the 2000 ACM conference on Computer supported cooperative work. ACM, 2000, pp. 21-30.
Coordinating cognition: The costs and benefits of shared gaze during collaborative search. S E Brennan, X Chen, C A Dickinson, M B Neider, G J Zelinsky, Cognition. 1063S. E. Brennan, X. Chen, C. A. Dickinson, M. B. Neider, and G. J. Zelinsky, "Coordinating cognition: The costs and benefits of shared gaze during collaborative search," Cognition, vol. 106, no. 3, pp. 1465- 1477, 2008.
Effects of nonverbal communication on efficiency and robustness in human-robot teamwork. C Breazeal, C D Kidd, A L Thomaz, G Hoffman, M Berlin, IEEE/RSJ International Conference on. IEEEIntelligent Robots and SystemsC. Breazeal, C. D. Kidd, A. L. Thomaz, G. Hoffman, and M. Berlin, "Effects of nonverbal communication on efficiency and robustness in human-robot teamwork," in Intelligent Robots and Systems, 2005.(IROS 2005). 2005 IEEE/RSJ International Conference on. IEEE, 2005, pp. 708-713.
Speakers eye gaze disambiguates referring expressions early during face-to-face conversation. J E Hanna, S E Brennan, Journal of Memory and Language. 574J. E. Hanna and S. E. Brennan, "Speakers eye gaze disambiguates referring expressions early during face-to-face conversation," Journal of Memory and Language, vol. 57, no. 4, pp. 596-615, 2007.
Pragmatic effects on reference resolution in a collaborative task: Evidence from eye movements. J E Hanna, M K Tanenhaus, Cognitive Science. 281J. E. Hanna and M. K. Tanenhaus, "Pragmatic effects on reference resolution in a collaborative task: Evidence from eye movements," Cognitive Science, vol. 28, no. 1, pp. 105-115, 2004.
The development of shared attention during infancy. L B Adamson, R Bakeman, Annals of child development. 8L. B. Adamson and R. Bakeman, "The development of shared attention during infancy." Annals of child development, vol. 8, pp. 1-41, 1991.
Mechanisms of shared attention for a humanoid robot. B Scassellati, Embodied Cognition and Action: Papers from the 1996 AAAI Fall Symposium. 421B. Scassellati, "Mechanisms of shared attention for a humanoid robot," in Embodied Cognition and Action: Papers from the 1996 AAAI Fall Symposium, vol. 4, no. 9, 1996, p. 21.
Imitation and mechanisms of joint attention: A developmental structure for building social skills on a humanoid robot. Computation for metaphors, analogy, and agents. Springer--, "Imitation and mechanisms of joint attention: A developmental structure for building social skills on a humanoid robot," in Computa- tion for metaphors, analogy, and agents. Springer, 1999, pp. 176-195.
The emergence of shared attention: Using robots to test developmental theories. G O Deák, I Fasel, J Movellan, Proceedings 1st International Workshop on Epigenetic Robotics: Lund University Cognitive Studies. 1st International Workshop on Epigenetic Robotics: Lund University Cognitive Studies85G. O. Deák, I. Fasel, and J. Movellan, "The emergence of shared attention: Using robots to test developmental theories," in Proceedings 1st International Workshop on Epigenetic Robotics: Lund University Cognitive Studies, vol. 85, 2001, pp. 95-104.
Combining embodied models and empirical research for understanding the development of shared attention. I Fasel, G O Deák, J Triesch, J Movellan, The 2nd International Conference on. IEEEin Development and LearningI. Fasel, G. O. Deák, J. Triesch, and J. Movellan, "Combining embodied models and empirical research for understanding the development of shared attention," in Development and Learning, 2002. Proceedings. The 2nd International Conference on. IEEE, 2002, pp. 21-27.
A probabilistic model of gaze imitation and shared attention. M W Hoffman, D B Grimes, A P Shon, R P Rao, Neural Networks. 193M. W. Hoffman, D. B. Grimes, A. P. Shon, and R. P. Rao, "A probabilistic model of gaze imitation and shared attention," Neural Networks, vol. 19, no. 3, pp. 299-310, 2006.
Towards a realtime gaze-based shared attention for a virtual agent. C Peters, S Asteriadis, K Karpouzis, E De Sevin, International Conference on Multimodal Interfaces. C. Peters, S. Asteriadis, K. Karpouzis, and E. de Sevin, "Towards a real- time gaze-based shared attention for a virtual agent," in International Conference on Multimodal Interfaces, 2008.
Investigating shared attention with a virtual agent using a gaze-based interface. C Peters, S Asteriadis, K Karpouzis, Journal on Multimodal User Interfaces. 31-2C. Peters, S. Asteriadis, and K. Karpouzis, "Investigating shared attention with a virtual agent using a gaze-based interface," Journal on Multimodal User Interfaces, vol. 3, no. 1-2, pp. 119-130, 2010.
Does the chimpanzee have a theory of mind?. D Premack, G Woodruff, Behavioral and brain sciences. 104D. Premack and G. Woodruff, "Does the chimpanzee have a theory of mind?" Behavioral and brain sciences, vol. 1, no. 04, pp. 515-526, 1978.
The child's theory of mind. H M Wellman, H. M. Wellman, "The child's theory of mind," 2011.
Foundations for a theory of mind for a humanoid robot. B M Scassellati, Massachusetts Institute of TechnologyPh.D. dissertationB. M. Scassellati, "Foundations for a theory of mind for a humanoid robot," Ph.D. dissertation, Massachusetts Institute of Technology, 2001.
Theory of mind for a humanoid robot. B Scassellati, Autonomous Robots. 121B. Scassellati, "Theory of mind for a humanoid robot," Autonomous Robots, vol. 12, no. 1, pp. 13-24, 2002.
Towards shared attention through geometric reasoning for human robot interaction. L F Marin-Urias, E A Sisbot, A K Pandey, R Tadakuma, R Alami, Humanoid Robots, 2009. Humanoids 2009. 9th IEEE-RAS International Conference on. IEEEL. F. Marin-Urias, E. A. Sisbot, A. K. Pandey, R. Tadakuma, and R. Alami, "Towards shared attention through geometric reasoning for human robot interaction," in Humanoid Robots, 2009. Humanoids 2009. 9th IEEE-RAS International Conference on. IEEE, 2009, pp. 331-336.
Mindblindness: An essay on autism and theory of mind. S Baron-Cohen, MIT pressS. Baron-Cohen, Mindblindness: An essay on autism and theory of mind. MIT press, 1997.
Understanding other minds: Perspectives from developmental cognitive neuroscience. S E Baron-Cohen, H E Tager-Flusberg, D J Cohen, Oxford University PressS. E. Baron-Cohen, H. E. Tager-Flusberg, and D. J. Cohen, Un- derstanding other minds: Perspectives from developmental cognitive neuroscience . Oxford University Press, 2000.
Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills?. B Robins, K Dautenhahn, R Te Boekhorst, A Billard, Universal Access in the Information Society. 42B. Robins, K. Dautenhahn, R. Te Boekhorst, and A. Billard, "Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills?" Universal Access in the Information Society, vol. 4, no. 2, pp. 105-120, 2005.
Intact automatic imitation of human and robot actions in autism spectrum disorders. G Bird, J Leighton, C Press, C Heyes, Proceedings of the Royal Society B: Biological Sciences. 2741628G. Bird, J. Leighton, C. Press, and C. Heyes, "Intact automatic imitation of human and robot actions in autism spectrum disorders," Proceedings of the Royal Society B: Biological Sciences, vol. 274, no. 1628, pp. 3027-3031, 2007.
Robotmediated joint attention in children with autism: A case study in robothuman interaction. B Robins, P Dickerson, P Stribling, K Dautenhahn, Interaction studies. 52B. Robins, P. Dickerson, P. Stribling, and K. Dautenhahn, "Robot- mediated joint attention in children with autism: A case study in robot- human interaction," Interaction studies, vol. 5, no. 2, pp. 161-198, 2004.
From isolation to communication: a case study evaluation of robot assisted play for children with autism with a minimally expressive humanoid robot. B Robins, K Dautenhahn, P Dickerson, Advances in Computer-Human Interactions. IEEEACHI'09. Second International Conferences onB. Robins, K. Dautenhahn, and P. Dickerson, "From isolation to communication: a case study evaluation of robot assisted play for children with autism with a minimally expressive humanoid robot," in Advances in Computer-Human Interactions, 2009. ACHI'09. Second International Conferences on. IEEE, 2009, pp. 205-211.
Artificial intelligence: A modern approach author: Stuart russell, peter norvig, publisher: Prentice hall pa. S Russell, S. Russell, "Artificial intelligence: A modern approach author: Stuart russell, peter norvig, publisher: Prentice hall pa," 2009.
Speech and language processing an introduction to natural language processing, computational linguistics, and speech. D Jurafsky, H James, D. Jurafsky and H. James, "Speech and language processing an introduction to natural language processing, computational linguistics, and speech," 2000.
Elements of a plan-based theory of speech acts. P R Cohen, C R Perrault, Cognitive science. 33P. R. Cohen and C. R. Perrault, "Elements of a plan-based theory of speech acts," Cognitive science, vol. 3, no. 3, pp. 177-212, 1979.
Rrt-connect: An efficient approach to single-query path planning. J J KuffnerJr, S M Lavalle, Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on. IEEE2J. J. Kuffner Jr and S. M. LaValle, "Rrt-connect: An efficient approach to single-query path planning," in Robotics and Automation, 2000. Proceedings. ICRA'00. IEEE International Conference on, vol. 2. IEEE, 2000, pp. 995-1001.
Path planning for mobile robot navigation using voronoi diagram and fast marching," in Intelligent Robots and Systems. S Garrido, L Moreno, M Abderrahim, F Martin, IEEE. S. Garrido, L. Moreno, M. Abderrahim, and F. Martin, "Path planning for mobile robot navigation using voronoi diagram and fast march- ing," in Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on. IEEE, 2006, pp. 2376-2381.
To ask or to sense? planning to integrate speech and sensorimotor acts. N Mavridis, H Dong, Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2012 4th International Congress on. IEEEN. Mavridis and H. Dong, "To ask or to sense? planning to integrate speech and sensorimotor acts," in Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT), 2012 4th International Congress on. IEEE, 2012, pp. 227-233.
Human-robot communication and machine learning. V Klingspor, J Demiris, M Kaiser, Applied Artificial Intelligence. 117V. Klingspor, J. Demiris, and M. Kaiser, "Human-robot communication and machine learning," Applied Artificial Intelligence, vol. 11, no. 7, pp. 719-746, 1997.
Imitation as social exchange between humans and robots. C Breazeal, Proceedings of the AISB99 Symposium on Imitation in Animals and Artifacts. the AISB99 Symposium on Imitation in Animals and ArtifactsC. Breazeal, "Imitation as social exchange between humans and robots," in Proceedings of the AISB99 Symposium on Imitation in Animals and Artifacts, 1999, pp. 96-104.
Cobot in lambdamoo: An adaptive social statistics agent. C L IsbellJr, M Kearns, S Singh, C R Shelton, P Stone, D Kormann, Autonomous Agents and Multi-Agent Systems. 133C. L. Isbell Jr, M. Kearns, S. Singh, C. R. Shelton, P. Stone, and D. Kormann, "Cobot in lambdamoo: An adaptive social statistics agent," Autonomous Agents and Multi-Agent Systems, vol. 13, no. 3, pp. 327-354, 2006.
Robots with their heads in the clouds. E Guizzo, Spectrum, IEEE. 483E. Guizzo, "Robots with their heads in the clouds," Spectrum, IEEE, vol. 48, no. 3, pp. 16-18, 2011.
The human-robot cloud: Situated collective intelligence on demand. N Mavridis, T Bourlai, D Ognibene, Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2012 IEEE International Conference on. IEEEN. Mavridis, T. Bourlai, and D. Ognibene, "The human-robot cloud: Situated collective intelligence on demand," in Cyber Technology in Automation, Control, and Intelligent Systems (CYBER), 2012 IEEE International Conference on. IEEE, 2012, pp. 360-365.
What makes people accept a robot in a social environment-discussion from six-week study in an office. N Mitsunaga, Z Miyashita, K Shinozawa, T Miyashita, H Ishiguro, N Hagita, Intelligent Robots and Systems. IEEEN. Mitsunaga, Z. Miyashita, K. Shinozawa, T. Miyashita, H. Ishiguro, and N. Hagita, "What makes people accept a robot in a social environment-discussion from six-week study in an office," in Intel- ligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on. IEEE, 2008, pp. 3336-3343.
Transforming ibnsina into an advanced multilingual interactive android robot. N Mavridis, A Aldhaheri, L Aldhaheri, M Khanii, N Aldarmaki, GCC Conference and Exhibition (GCC). IEEEN. Mavridis, A. AlDhaheri, L. AlDhaheri, M. Khanii, and N. AlDar- maki, "Transforming ibnsina into an advanced multilingual interactive android robot," in GCC Conference and Exhibition (GCC), 2011 IEEE. IEEE, 2011, pp. 120-123.
M Waibel, M Beetz, J Civera, R Andrea, J Elfring, D Galvez-Lopez, K Haussermann, R Janssen, J Montiel, A Perzylo, Roboearth. 18IEEEM. Waibel, M. Beetz, J. Civera, R. D'Andrea, J. Elfring, D. Galvez- Lopez, K. Haussermann, R. Janssen, J. Montiel, A. Perzylo, et al., "Roboearth," Robotics & Automation Magazine, IEEE, vol. 18, no. 2, pp. 69-82, 2011.
Rapyuta: The roboearth cloud engine. D Hunziker, M Gajamohan, M Waibel, R Dandrea, Proc. IEEE Int. Conf. on Robotics and Automation (ICRA). IEEE Int. Conf. on Robotics and Automation (ICRA)Karlsruhe, GermanyD. Hunziker, M. Gajamohan, M. Waibel, and R. DAndrea, "Rapyuta: The roboearth cloud engine," in Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), Karlsruhe, Germany, 2013.
A simplest systematics for the organization of turn-taking for conversation. H Sacks, E A Schegloff, G Jefferson, LanguageH. Sacks, E. A. Schegloff, and G. Jefferson, "A simplest systematics for the organization of turn-taking for conversation," Language, pp. 696-735, 1974.
Overlapping talk and the organization of turn-taking for conversation. E A Schegloff, Language in society. 291E. A. Schegloff, "Overlapping talk and the organization of turn-taking for conversation," Language in society, vol. 29, no. 1, pp. 1-63, 2000.
Modeling of conversational strategy for the robot participating in the group conversation. Y Matsusaka, S Fujie, T Kobayashi, INTERSPEECH. 1Y. Matsusaka, S. Fujie, and T. Kobayashi, "Modeling of conversational strategy for the robot participating in the group conversation." in INTERSPEECH, vol. 1, 2001, pp. 2173-2176.
Footing in human-robot conversations: how robots might shape participant roles using gaze cues. B Mutlu, T Shiwa, T Kanda, H Ishiguro, N Hagita, Proceedings of the 4th ACM/IEEE international conference on Human robot interaction. the 4th ACM/IEEE international conference on Human robot interactionACMB. Mutlu, T. Shiwa, T. Kanda, H. Ishiguro, and N. Hagita, "Footing in human-robot conversations: how robots might shape participant roles using gaze cues," in Proceedings of the 4th ACM/IEEE international conference on Human robot interaction. ACM, 2009, pp. 61-68.
Turn taking for human-robot interaction. C Chao, A L Thomaz, AAAI fall symposium on dialog with robots. C. Chao and A. L. Thomaz, "Turn taking for human-robot interaction," in AAAI fall symposium on dialog with robots, 2010, pp. 132-134.
Explorations in engagement for humans and robots. C L Sidner, C Lee, C D Kidd, N Lesh, C Rich, Artificial Intelligence. 1661C. L. Sidner, C. Lee, C. D. Kidd, N. Lesh, and C. Rich, "Explorations in engagement for humans and robots," Artificial Intelligence, vol. 166, no. 1, pp. 140-164, 2005.
Robust sound source localization using a microphone array on a mobile robot. J.-M Valin, F Michaud, J Rouat, D Létourneau, Intelligent Robots and Systems. ProceedingsJ.-M. Valin, F. Michaud, J. Rouat, and D. Létourneau, "Robust sound source localization using a microphone array on a mobile robot," in Intelligent Robots and Systems, 2003.(IROS 2003). Proceedings. 2003
IEEE/RSJ International Conference on. IEEE2IEEE/RSJ International Conference on, vol. 2. IEEE, 2003, pp. 1228- 1233.
Applying scattering theory to robot audition system: Robust sound source localization and extraction. K Nakadai, D Matsuura, H G Okuno, H Kitano, Intelligent Robots and Systems. IROSK. Nakadai, D. Matsuura, H. G. Okuno, and H. Kitano, "Applying scat- tering theory to robot audition system: Robust sound source localization and extraction," in Intelligent Robots and Systems, 2003.(IROS 2003).
IEEE/RSJ International Conference on. IEEE2ProceedingsProceedings. 2003 IEEE/RSJ International Conference on, vol. 2. IEEE, 2003, pp. 1147-1152.
Localization of simultaneous moving sound sources for mobile robot using a frequencydomain steered beamformer approach. J.-M Valin, F Michaud, B Hadjou, J Rouat, Proceedings. ICRA'04. 2004 IEEE International Conference on. ICRA'04. 2004 IEEE International Conference onIEEE1Robotics and AutomationJ.-M. Valin, F. Michaud, B. Hadjou, and J. Rouat, "Localization of si- multaneous moving sound sources for mobile robot using a frequency- domain steered beamformer approach," in Robotics and Automation, 2004. Proceedings. ICRA'04. 2004 IEEE International Conference on, vol. 1. IEEE, 2004, pp. 1033-1038.
Robust localization and tracking of simultaneous moving sound sources using beamforming and particle filtering. J.-M Valin, F Michaud, J Rouat, Robotics and Autonomous Systems. 553J.-M. Valin, F. Michaud, and J. Rouat, "Robust localization and tracking of simultaneous moving sound sources using beamforming and particle filtering," Robotics and Autonomous Systems, vol. 55, no. 3, pp. 216- 228, 2007.
Design and implementation of robot audition system'hark'open source software for listening to three simultaneous speakers. K Nakadai, T Takahashi, H G Okuno, H Nakajima, Y Hasegawa, H Tsujino, Advanced Robotics. 245-6K. Nakadai, T. Takahashi, H. G. Okuno, H. Nakajima, Y. Hasegawa, and H. Tsujino, "Design and implementation of robot audition sys- tem'hark'open source software for listening to three simultaneous speakers," Advanced Robotics, vol. 24, no. 5-6, pp. 739-761, 2010.
Robust text-independent speaker identification using gaussian mixture speaker models. D A Reynolds, R C Rose, Speech and Audio Processing. 3D. A. Reynolds and R. C. Rose, "Robust text-independent speaker identification using gaussian mixture speaker models," Speech and Audio Processing, IEEE Transactions on, vol. 3, no. 1, pp. 72-83, 1995.
Speaker verification using adapted gaussian mixture models. D A Reynolds, T F Quatieri, R B Dunn, Digital signal processing. 101D. A. Reynolds, T. F. Quatieri, and R. B. Dunn, "Speaker verification using adapted gaussian mixture models," Digital signal processing, vol. 10, no. 1, pp. 19-41, 2000.
Text-independent speaker identification using soft channel selection in home robot environments. M Ji, S Kim, H Kim, H.-S Yoon, Consumer Electronics. 54M. Ji, S. Kim, H. Kim, and H.-S. Yoon, "Text-independent speaker identification using soft channel selection in home robot environments," Consumer Electronics, IEEE Transactions on, vol. 54, no. 1, pp. 140- 144, 2008.
Multi-person conversation via multimodal interface-a robot who communicate with multi-user. Y Matsusaka, T Tojo, S Kubota, K Furukawa, D Tamiya, K Hayata, Y Nakano, T Kobayashi, EU-ROSPEECH. 99Y. Matsusaka, T. Tojo, S. Kubota, K. Furukawa, D. Tamiya, K. Hayata, Y. Nakano, and T. Kobayashi, "Multi-person conversation via multi- modal interface-a robot who communicate with multi-user-." in EU- ROSPEECH, vol. 99, 1999, pp. 1723-1726.
Improvement of recognition of simultaneous speech signals using av integration and scattering theory for humanoid robots. K Nakadai, D Matsuura, H G Okuno, H Tsujino, Speech Communication. 441K. Nakadai, D. Matsuura, H. G. Okuno, and H. Tsujino, "Improvement of recognition of simultaneous speech signals using av integration and scattering theory for humanoid robots," Speech Communication, vol. 44, no. 1, pp. 97-112, 2004.
Identifying the addressee in human-human-robot interactions based on head pose and speech. M Katzenmaier, R Stiefelhagen, T Schultz, Proceedings of the 6th international conference on Multimodal interfaces. the 6th international conference on Multimodal interfacesACMM. Katzenmaier, R. Stiefelhagen, and T. Schultz, "Identifying the addressee in human-human-robot interactions based on head pose and speech," in Proceedings of the 6th international conference on Multimodal interfaces. ACM, 2004, pp. 144-151.
Towards development of multilingual spoken dialogue systems. H Holzapfel, Proceedings of the 2nd Language and Technology Conference. the 2nd Language and Technology ConferenceH. Holzapfel, "Towards development of multilingual spoken dialogue systems," in Proceedings of the 2nd Language and Technology Con- ference, 2005.
Reusable, interactive, multilingual online avatars. C Cullen, C Goodman, P Mcgloin, A Deegan, E Mccarthy, Visual Media Production, 2009. CVMP'09. C. Cullen, C. Goodman, P. McGloin, A. Deegan, and E. McCarthy, "Reusable, interactive, multilingual online avatars," in Visual Media Production, 2009. CVMP'09. Conference for. IEEE, 2009, pp. 152- 158.
Multilingual virtual city guides. K R Echavarria, M Genereux, D B Arnold, A M Day, J R Glauert, Proceedings Graphicon. K. R. Echavarria, M. Genereux, D. B. Arnold, A. M. Day, and J. R. Glauert, "Multilingual virtual city guides," Proceedings Graphicon, Novosibirsk, Russia, 2005.
Real-time american sign language recognition using desk and wearable computer based video. T Starner, J Weaver, A Pentland, IEEE Transactions on. 2012Pattern Analysis and Machine IntelligenceT. Starner, J. Weaver, and A. Pentland, "Real-time american sign language recognition using desk and wearable computer based video," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 20, no. 12, pp. 1371-1375, 1998.
Handshapes and movements: Multiplechannel american sign language recognition. C Vogler, D Metaxas, Gesture-Based Communication in Human-Computer Interaction. SpringerC. Vogler and D. Metaxas, "Handshapes and movements: Multiple- channel american sign language recognition," in Gesture-Based Com- munication in Human-Computer Interaction. Springer, 2004, pp. 247- 258.
A review of vision based hand gestures recognition. G Murthy, R Jadon, International Journal of Information Technology and Knowledge Management. 22G. Murthy and R. Jadon, "A review of vision based hand gestures recognition," International Journal of Information Technology and Knowledge Management, vol. 2, no. 2, pp. 405-410, 2009.
Using multiple sensors for mobile sign language recognition. H Brashear, T Starner, P Lukowicz, H Junker, H. Brashear, T. Starner, P. Lukowicz, and H. Junker, "Using multiple sensors for mobile sign language recognition," 2003.
Online and off-line handwriting recognition: a comprehensive survey. R Plamondon, S N Srihari, IEEE Transactions on. 221Pattern Analysis and Machine IntelligenceR. Plamondon and S. N. Srihari, "Online and off-line handwriting recognition: a comprehensive survey," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 22, no. 1, pp. 63-84, 2000.
Markov models for offline handwriting recognition: a survey. T Plötz, G A Fink, International Journal on Document Analysis and Recognition (IJDAR). 124T. Plötz and G. A. Fink, "Markov models for offline handwriting recognition: a survey," International Journal on Document Analysis and Recognition (IJDAR), vol. 12, no. 4, pp. 269-298, 2009.
Offline arabic handwriting recognition: a survey. L M Lorigo, V Govindaraju, IEEE Transactions on. 285Pattern Analysis and Machine IntelligenceL. M. Lorigo and V. Govindaraju, "Offline arabic handwriting recog- nition: a survey," Pattern Analysis and Machine Intelligence, IEEE Transactions on, vol. 28, no. 5, pp. 712-724, 2006.
Autonomy, isolation, and collective intelligence. N Mavridis, Journal of Artificial General Intelligence. 31N. Mavridis, "Autonomy, isolation, and collective intelligence," Journal of Artificial General Intelligence, vol. 3, no. 1, pp. 1-9, 2011.
| [] |
MOROCCO: Model Resource Comparison Framework

Valentin Malykh (valentin.malykh@huawei.com), Ekaterina Artemova (artemova.ekaterina@huawei.com)
Huawei Noah's Ark Lab, Moscow, Russia

Alexander Kukushkin, Tatiana Shavrina, Vladislav Mikhailov, Maria Tikhonova (m_tikhonova94@mail.ru)
Data Science Laboratory, Sberbank, Moscow, Russia

arXiv: 2104.14314

CCS CONCEPTS: • Computing methodologies → Model verification and validation; Natural language processing; • Information systems → Document representation

KEYWORDS: model evaluation, resource consumption

ABSTRACT
The new generation of pre-trained NLP models pushes the SOTA to new limits, but at the cost of computational resources, to the point that their use in real production environments is often prohibitively expensive. We tackle this problem by evaluating not only the standard quality metrics on downstream tasks but also the memory footprint and inference time. We present MOROCCO, a framework to compare language models, compatible with the jiant environment, which supports over 50 NLU tasks, including the SuperGLUE benchmark and multiple probing suites. We demonstrate its applicability for two GLUE-like suites in different languages. The framework is available at https://github.com/RussianNLP/MOROCCO.
INTRODUCTION
A new paradigm in natural language processing (NLP) has emerged in recent years. At the core of this paradigm is the notion of a pre-trained language model. Such models are usually pre-trained on a large number of unannotated texts using unsupervised objectives and only then fine-tuned for downstream tasks in a supervised or semi-supervised fashion. A pre-trained language model can be used either to produce a single vector representation (a sentence embedding) for an input text or a sequence of vector representations for the text tokens. The expressive power of pre-trained language models allows establishing new state-of-the-art solutions for the majority of existing tasks, such as text classification [19], part-of-speech tagging [20], machine translation [25], and others. At the same time, pre-trained language models require significant amounts of processing power, as they are mostly built from transformer blocks and comprise millions of parameters.
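To make the two usage modes above concrete, here is a minimal sketch using the Hugging Face transformers API; the model name and the mean-pooling step are illustrative assumptions on our part, not something prescribed by the framework:

# A minimal sketch of the two usage modes described above, assuming the
# Hugging Face `transformers` library is installed; the model name and
# mean pooling are illustrative choices only.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("MOROCCO compares language models.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# A sequence of vector representations, one per input token.
token_embeddings = outputs.last_hidden_state       # shape: (1, seq_len, hidden)
# A single vector representation for the whole input text (mean pooling).
sentence_embedding = token_embeddings.mean(dim=1)  # shape: (1, hidden)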
Several benchmarks allow drawing a comparison between various language models in terms of the solution quality for downstream tasks or their capability to express linguistic information.
To the best of our knowledge, none of the existing benchmarks account for computational efficiency, i.e. characteristics such as the memory footprint and the inference time of a language model, at the same time. To this end, we propose a new evaluation methodology, which is aimed at measuring both a model's performance and its computational efficiency in downstream tasks. Thus we introduce a novel MOdel ResOurCe COmparison framework (MOROCCO). We also provide a testbed for model evaluation in a fixed environment. Both the methodology and the testbed are discussed in Section 2. The evaluation results of the models on GLUE-like benchmarks and a discussion of the methodology design are presented in Section 3.
Related Work
NLP benchmarks. Recently, multiple benchmarks aimed at natural language understanding (NLU) tasks have been established. The most prominent ones, GLUE [22] and SuperGLUE [21], set the trend for a model-agnostic evaluation format. These benchmarks provide evaluation datasets and a public leaderboard. A submission to the leaderboard consists of predictions made on publicly available test sets. Thus, any model-specific parameters are intentionally not taken into consideration. The GLUE and SuperGLUE benchmarks do not support any form of interaction with the model used to prepare the submission. The benchmarks offer nine general-domain NLU tasks in English. More recent benchmarks follow the same evaluation procedure but aim at domain-specific areas, such as dialogue systems [13] and biomedical NLU and reasoning [3], or at the cross-lingual setting [8,11]. Finally, at the beginning of 2021, a BERT-like model, DeBERTa [5], surpassed human performance on the SuperGLUE benchmark. This remarkable breakthrough was achieved with an architecture consisting of 48 Transformer layers counting 1.5 billion parameters. However, the comparison of DeBERTa's computational efficiency with other, less performant models is left outside the SuperGLUE leaderboard.
Efficient NLP. The trade-off between model performance and computational efficiency has been explored in multiple shared tasks and competitions. The series of Efficient Neural Machine Translation challenges [1,4,6] measured machine translation inference performance on CPUs and GPUs with standardized training data and hardware. The performance was evaluated by the BLEU score, while the computational efficiency was measured by multiple parameters, including the real time the model used to translate the private test set, peak RAM and GPU RAM consumption, the size of the model on disk, and the total size of the Docker image, which could have included rule-based and hard-coded approaches. The organizers did not set any restrictions on the measured parameters. Finally, the organizers selected the Pareto-optimal submissions, i.e., those that need fewer computational resources while delivering quality comparable to the other systems.
The EfficientQA challenge [14] asked the participants to create an effective NLP system for a single task, open-domain question answering. The competition committee, however, constrained the submissions with a few different restrictions based on the Docker container size: participants could compete in creating the most accurate self-contained QA system under 6Gb, or under 500Mb, or in training the smallest system that achieves 25% accuracy, or, finally, in building the most accurate question answering system regardless of size. Such restrictions have drawn the community's attention to studying the trade-off between storing the parameters of pre-trained models plus retrieval data, and making smaller systems with model compression techniques plus less redundant data.
The SustaiNLP challenge [23] was aimed at measuring inference on the SuperGLUE benchmark. Efficiency is estimated by the power consumed throughout the course of inference, and submitted systems were run on standardized hardware environments; the experiment impact tracker [7] measures the energy consumption in kWh for submitted systems. The submitted systems improve total energy consumption over the BERT-base by as much as 20×, but their results are on average around 2 absolute points lower. The goal of the SustaiNLP challenge was to develop efficient yet accurate models. Although it uses the same testbed, the MOROCCO framework was developed with the opposite goal in mind: it provides adequate estimates of how many resources the models that reach human-level performance consume. As MOROCCO supports Docker images, it can be easily integrated into any benchmark or probing task built upon the jiant framework 2 described in [15].
EVALUATION FRAMEWORK
We present a framework for the evaluation and a testbed 3, where we guarantee the comparability of the achieved results. For the testbed, a person (or a team) should prepare their submission as a Docker container and send it to the testbed. The testbed platform runs the solution Docker container with limited memory, CPU/GPU, and running time. The container is expected to read the texts from the standard input channel and output the answers to standard output. During the inference, the running time is recorded and later used for the submission scoring. To eliminate the running time and memory footprint dispersion caused by technical reasons, we perform several runs and compute the median values. The output of the container is evaluated with the task-specific metric. The resulting metric values are then used to compute the final evaluation score for the whole submission. To ensure the comparability of the collected metrics, we fix the hardware used for the computation. We use Yandex.Cloud 4 virtual instances, where the following hardware is guaranteed: 1 × Intel Broadwell CPU, 1 × NVIDIA Tesla V100 GPU. The Docker container OS we use is Ubuntu 20.04. Our framework is designed to be compatible with the jiant framework, imposes only simple requirements on evaluation containers built upon other frameworks, and can be run locally, avoiding our testbed entirely.
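To make the container protocol concrete, here is a minimal Python sketch of a submission entry point. The record format, the idx/label field names, and model_fn are illustrative assumptions, not the framework's actual interface.

import json
import sys

def model_fn(record: dict) -> str:
    # Placeholder: a real submission would run its fine-tuned model here.
    return "entailment"

def main() -> None:
    # The testbed feeds one record per stdin line and reads one
    # prediction per stdout line, so outputs align with inputs.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        record = json.loads(line)
        prediction = model_fn(record)
        sys.stdout.write(json.dumps({"idx": record.get("idx"), "label": prediction}) + "\n")

if __name__ == "__main__":
    main()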
Metrics
While previous works mostly consider only the quality of the solutions, two further characteristics, namely the memory footprint and the inference speed (throughput), reflect the computational efficiency of a model.
Memory footprint: to measure a model's GPU RAM usage, we run a container with a single record as input and measure the maximum GPU RAM consumption M; we repeat the procedure 5 times and compute the median value.
Inference speed: to measure throughput we run a container with N records as input, with batch size 32, and measure the total running time t_N. On all tasks, batch size 32 utilizes the GPU at almost 100%. We also estimate the initialization time t_init by running a container with an input of size 1. The inference speed (throughput) T is computed as follows:

T = N / (t_N − t_init).

In our experiments we use N = 2000. We repeat the procedure 5 times and compute a median value. We propose to use these three characteristics, namely the quality Q, the memory footprint M, and the throughput T, in the following way: we compose a 2-dimensional plot with the horizontal axis being the quality for a downstream task (this metric is specific to the task) and the vertical axis being the throughput of the model. To visualize the memory footprint, we propose to use circles of different sizes instead of mere points on the plot. An example of such a plot is presented in Figure 1.
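As a hedged illustration of the measurement procedure above, the Python sketch below medians the timings over repeated runs; run_container and its docker invocation are hypothetical helpers, not MOROCCO's actual code.

import statistics
import subprocess
import time

def run_container(image: str, n_records: int) -> float:
    # Hypothetical invocation; the real testbed also fixes hardware,
    # memory and GPU limits when starting the container.
    start = time.monotonic()
    subprocess.run(["docker", "run", "--rm", image, str(n_records)], check=True)
    return time.monotonic() - start

def throughput(image: str, n: int = 2000, repeats: int = 5) -> float:
    t_init = statistics.median(run_container(image, 1) for _ in range(repeats))
    t_n = statistics.median(run_container(image, n) for _ in range(repeats))
    return n / (t_n - t_init)  # T = N / (t_N - t_init)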
We propose to take these three characteristics of a model into account and make an integral measure of its "fitness" as follows:

F = (Q × T) / log(M),

where Q is the metric-based measurement of the task-solving ability of a model, M is measured in bytes, and T is measured in samples per second. We take the logarithm of the memory consumption since model size growth is exponential for modern models [17]. This measure is motivated by the common idea that memory consumption should be lowered, while the achieved quality and processing speed should be increased. Thus it gives a single value describing the efficiency of the model's resource consumption.
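A minimal sketch of the fitness computation, assuming the reconstruction of the formula above (the original symbols were lost in extraction) is correct:

import math

def fitness(quality: float, memory_bytes: float, throughput: float) -> float:
    # F = (Q * T) / log(M), with M in bytes and T in samples/second.
    return quality * throughput / math.log(memory_bytes)

# e.g. a model with quality 0.8, 1.5 GB GPU RAM, 40 samples/s:
# fitness(0.8, 1.5e9, 40.0) ≈ 1.51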
Datasets
In our work we run the MOROCCO evaluation on the SuperGLUE and Russian SuperGLUE 5 [18] benchmarks, for English and Russian respectively. The latter is a Russian counterpart of the English-language SuperGLUE benchmark, and its tasks are organized analogously. Namely, both benchmarks comprise 9 downstream tasks 6: Recognizing Textual Entailment is aimed at capturing textual entailment in a binary classification form; Commitment Bank belongs to the natural language inference (NLI) group of tasks, with classification into 3 classes (entailment, contradiction, and neutral); the Diagnostic dataset is in fact another test set for the recognizing textual entailment task, additionally supplied with vast linguistic and semantic annotation; Words in Context is based on the word sense disambiguation problem in a binary classification form; Choice of Plausible Alternatives is a binary classification problem aimed at assessing commonsense causal reasoning; Yes/No Questions is a question answering task for closed (binary) questions; Multi-Sentence Reading Comprehension is a task on machine reading, where the goal is to choose the correct answers to the questions based on a text paragraph; Reading Comprehension with Commonsense Reasoning is a task on machine reading, where it is required to fill in the masked gaps in a sentence with entities from a given text paragraph; the Winograd Schema Challenge is devoted to co-reference resolution in a binary classification form. Aggregated information about the tasks is presented in Table 1.
Models
We run the experiments on the following publicly available models that achieved competitive performance on both the SuperGLUE and Russian SuperGLUE benchmarks. Models for English include monolingual (en_bert_base) and multilingual BERT (bert) [2], both in the "base" variant, RoBERTa [12] in the "base" variant (en_roberta_base), ALBERT [10] in the "base" variant (albert), and GPT-2 in the "large" variant [16] (en_gpt2). Models for Russian include multilingual BERT in the "base" variant (bert-multilingual), 3 variants of ruGPT-3 7 (rugpt3-small, rugpt3-medium, and rugpt3-large), Russian BERT (rubert) [9] in the "base" variant, and its derived version Conversational RuBERT 8 in the "base" variant (rubert-conversational). All of the models are released as part of the HuggingFace Transformers framework described in [24].
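As a hedged sketch, one of the evaluated checkpoints can be loaded through the HuggingFace Transformers API mentioned above; the identifier below is illustrative, and each benchmark task would add its own task-specific head during fine-tuning.

from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

inputs = tokenizer("MOROCCO measures quality and resource use.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768) for mBERT base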
RESULTS
We have measured Q, M, and T for the models listed in the previous section and have drawn two figures demonstrating the results: Fig. 2 for the SuperGLUE evaluation and Fig. 1 for the RussianSuperGLUE one. We also evaluated the fitness F; the results are presented in Tab. 2. As one can see, the RoBERTa model shows the best fitness for English, while RuBERT is the best fit for Russian among the tested models. Overall, these evaluations allowed us to separate the better models, in terms of quality, memory footprint, and throughput, from the models showing worse performance and greater resource consumption.
Discussion
Our methodology has some limitations: we use averaging to estimate the values of Q, M, and T. While the M computation is the least questionable, since the memory consumption for a single sample is more or less stable for any reasonable sample size, the other two measures require more attention.
We compare the mean and maximal quality values, as the latter is used on most of the leaderboards. The results of the comparison of different models on RussianSuperGLUE are presented in Fig. 3. We show run results for ten evaluations for each of five different initializations of each model, with an exception for rugpt3-large, where we used only one initialization. 9 The ordering of the best and mean scores remains the same for mean (pale red) and maximal (full red) results, again with an exception for rugpt3-large. Another evaluation is presented in Fig. 4. We compare different sets used for averaging in RussianSuperGLUE by the synthetic value of normalized throughput. The normalization is done along the horizontal axis, so one can compare the ordering of the models on different task sets. As one can see, the ordering mostly remains the same, with some occasional switches between the top models. Based on this additional evaluation, we suppose that our methodology is stable regarding the choice between averaging schemes and the deviation in the max quality estimation process, while remaining informative for model comparison.
9 We add small random noise in the vertical axis for better readability.
CONCLUSION
In this work, we presented the MOROCCO framework, which allows comparing NLP models not only based on their overall quality metrics, but also on their resource consumption: the memory footprint and inference time. The proposed fitness metric (see Section 2.1) allows us to compose the model leaderboard in a new way: to order models so that the most accurate, smallest and fastest ones are at the top, the accurate but bigger and slower models are in the middle, and the most imprecise, largest and slowest ones are at the very bottom. Thus, to obtain a higher place on the leaderboard, researchers need to strive not for fractions of a percent of accuracy on individual tasks, but for an overall improvement in both the performance and the size of the model. A similar conditional assessment of results has been adopted in computer vision and, since last year, in question answering.
The presented framework is compatible with the jiant framework and transformer models, making it easily applicable to evaluate a wide range of popular architectures, both multilingual and monolingual.
We hope that our work will initiate a more intensive search for a compromise evaluation of the overall performance of NLP-models, which could be an alternative to the existing dominant "bigger is better" methodology and would take into account the problems of overfitting, over-parametrization, data redundancy, and others.
As part of future work, we are considering closer cooperation with NLP-developers and enthusiasts to further search for the best industrial solutions, including organizing the competition of multilingual NLP-models on existing benchmarks as a possible step.
Figure 1: Models comparison on the RussianSuperGLUE benchmark.
Figure 2: Models comparison on the SuperGLUE benchmark (English).
Figure 3: Quality comparison for mean and best results.
Figure 4: Throughput comparison for different dataset scores being averaged.
Table 2: Fitness evaluation for the models in two languages.
2 https://github.com/nyu-mll/jiant
3 We will provide the links to the testbed website and the framework source code once the review process is over.
4 https://cloud.yandex.com/
5 https://russiansuperglue.com/
6 The SuperGLUE benchmark also includes an additional Winogender Schema Diagnostics task, a dataset designed to test for the presence of gender bias in automated coreference resolution systems. However, as it is not included in Russian SuperGLUE, we did not run the MOROCCO evaluation on it.
7 https://github.com/sberbank-ai/ru-gpts
8 https://huggingface.co/DeepPavlov/rubert-base-cased-conversational
REFERENCES
[1] Alexandra Birch, Andrew Finch, Minh-Thang Luong, Graham Neubig, and Yusuke Oda. 2018. Findings of the Second Workshop on Neural Machine Translation and Generation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation. 1-10.
[2] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. 4171-4186.
[3] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain-specific language model pretraining for biomedical natural language processing. arXiv preprint arXiv:2007.15779.
[4] Hiroaki Hayashi, Yusuke Oda, Alexandra Birch, Ioannis Konstas, Andrew Finch, Minh-Thang Luong, Graham Neubig, and Katsuhito Sudoh. 2019. Findings of the Third Workshop on Neural Generation and Translation. In Proceedings of the 3rd Workshop on Neural Generation and Translation. 1-14.
[5] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTa: Decoding-enhanced BERT with Disentangled Attention.
[6] Kenneth Heafield, Hiroaki Hayashi, Yusuke Oda, Ioannis Konstas, Andrew Finch, Graham Neubig, Xian Li, and Alexandra Birch. 2020. Findings of the fourth workshop on neural generation and translation. In Proceedings of the Fourth Workshop on Neural Generation and Translation. 1-9.
[7] Peter Henderson, Jieru Hu, Joshua Romoff, Emma Brunskill, Dan Jurafsky, and Joelle Pineau. 2020. Towards the Systematic Reporting of the Energy and Carbon Footprints of Machine Learning. Journal of Machine Learning Research 21, 248 (2020), 1-43.
[8] Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation. In International Conference on Machine Learning. PMLR, 4411-4421.
[9] Yuri Kuratov and Mikhail Arkhipov. 2019. Adaptation of deep bidirectional multilingual transformers for russian language. arXiv preprint arXiv:1905.07213.
[10] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations.
[11] Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, et al. 2020. XGLUE: A New Benchmark Dataset for Cross-lingual Pre-training, Understanding and Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 6008-6018.
[12] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
[13] S. Mehri, M. Eric, and D. Hakkani-Tur. 2020. DialoGLUE: A Natural Language Understanding Benchmark for Task-Oriented Dialogue. ArXiv abs/2009.13570.
[14] Sewon Min, Jordan Boyd-Graber, Chris Alberti, Danqi Chen, Eunsol Choi, Michael Collins, Kelvin Guu, Hannaneh Hajishirzi, Kenton Lee, Jennimaria Palomaki, Colin Raffel, Adam Roberts, Tom Kwiatkowski, Patrick Lewis, Yuxiang Wu, Heinrich Küttler, Linqing Liu, Pasquale Minervini, Pontus Stenetorp, Sebastian Riedel, Sohee Yang, Minjoon Seo, Gautier Izacard, Fabio Petroni, Lucas Hosseini, Nicola De Cao, Edouard Grave, Ikuya Yamada, Sonse Shimaoka, Masatoshi Suzuki, Shumpei Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Barlas Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2021. NeurIPS 2020 EfficientQA Competition: Systems, Analyses and Lessons Learned. arXiv:2101.00133 [cs.CL].
[15] Yada Pruksachatkun, Phil Yeres, Haokun Liu, Jason Phang, Phu Mon Htut, Alex Wang, Ian Tenney, and Samuel Bowman. 2020. jiant: A Software Toolkit for Research on General-Purpose Text Understanding Models. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 109-117.
[16] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9.
[17] Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. In The 5th Workshop on Energy Efficient Machine Learning and Cognitive Computing.
[18] Tatiana Shavrina, Alena Fenogenova, Anton Emelyanov, Denis Shevelev, Ekaterina Artemova, Valentin Malykh, Vladislav Mikhailov, Maria Tikhonova, Andrey Chertok, and Andrey Evlampiev. 2020. RussianSuperGLUE: A Russian Language Understanding Evaluation Benchmark. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 4717-4726.
[19] Chi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune BERT for text classification? In China National Conference on Chinese Computational Linguistics. Springer, 194-206.
[20] Henry Tsai, Jason Riesa, Melvin Johnson, Naveen Arivazhagan, Xin Li, and Amelia Archer. 2019. Small and Practical BERT Models for Sequence Labeling. In EMNLP/IJCNLP (1).
[21] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. Advances in Neural Information Processing Systems 32 (2019).
[22] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. (Nov. 2018), 353-355. https://doi.org/10.18653/v1/W18-5446
[23] Alex Wang and Thomas Wolf. 2020. Overview of the SustaiNLP 2020 Shared Task. In Proceedings of SustaiNLP: Workshop on Simple and Efficient Natural Language Processing. 174-178.
[24] Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-Art Natural Language Processing. (Oct. 2020), 38-45. https://doi.org/10.18653/v1/2020.emnlp-demos.6
[25] Jinhua Zhu, Yingce Xia, Lijun Wu, Di He, Tao Qin, Wengang Zhou, Houqiang Li, and Tieyan Liu. 2019. Incorporating BERT into Neural Machine Translation. In International Conference on Learning Representations.
| [
"https://github.com/RussianNLP/MOROCCO",
"https://github.com/nyu-mll/jiant3",
"https://github.com/sberbank-ai/ru-gpts"
] |
[
"MATAWS: A Multimodal Approach for Automatic WS Semantic Annotation",
"MATAWS: A Multimodal Approach for Automatic WS Semantic Annotation"
] | [
"Cihan Aksoy caksoy@uekae.tubitak.gov.tr \nComputer Science Department\nGalatasaray University\nOrtaköy/İstanbulTurkey\n\nTÜBİTAK\nGebze/KocaeliTurkey\n",
"Vincent Labatut \nComputer Science Department\nGalatasaray University\nOrtaköy/İstanbulTurkey\n",
"Chantal Cherifi \nComputer Science Department\nGalatasaray University\nOrtaköy/İstanbulTurkey\n\nUniversity of Corsica\nCorteFrance\n",
"Jean-François Santucci \nUniversity of Corsica\nCorteFrance\n"
] | [
"Computer Science Department\nGalatasaray University\nOrtaköy/İstanbulTurkey",
"TÜBİTAK\nGebze/KocaeliTurkey",
"Computer Science Department\nGalatasaray University\nOrtaköy/İstanbulTurkey",
"Computer Science Department\nGalatasaray University\nOrtaköy/İstanbulTurkey",
"University of Corsica\nCorteFrance",
"University of Corsica\nCorteFrance"
] | [] | Many recent works aim at developing methods and tools for the processing of semantic Web services. In order to be properly tested, these tools must be applied to an appropriate benchmark, taking the form of a collection of semantic WS descriptions. However, all of the existing publicly available collections are limited by their size or their realism (use of randomly generated or resampled descriptions). Larger and realistic syntactic (WSDL) collections exist, but their semantic annotation requires a certain level of automation, due to the number of operations to be processed. In this article, we propose a fully automatic method to semantically annotate such large WS collections. Our approach is multimodal, in the sense it takes advantage of the latent semantics present not only in the parameter names, but also in the type names and structures. Concept-to-word association is performed by using Sigma, a mapping of WordNet to the SUMO ontology. After having described in details our annotation method, we apply it to the larger collection of real-world syntactic WS descriptions we could find, and assess its efficiency. | 10.1007/978-3-642-22185-9_27 | [
"https://arxiv.org/pdf/1305.0194v1.pdf"
] | 8,417,528 | 1305.0194 | c89a3e9976e409d016550f0bc9da15aa0a0b245d |
MATAWS: A Multimodal Approach for Automatic WS Semantic Annotation
Cihan Aksoy caksoy@uekae.tubitak.gov.tr
Computer Science Department
Galatasaray University
Ortaköy/İstanbulTurkey
TÜBİTAK
Gebze/KocaeliTurkey
Vincent Labatut
Computer Science Department
Galatasaray University
Ortaköy/İstanbulTurkey
Chantal Cherifi
Computer Science Department
Galatasaray University
Ortaköy/İstanbulTurkey
University of Corsica
CorteFrance
Jean-François Santucci
University of Corsica
CorteFrance
MATAWS: A Multimodal Approach for Automatic WS Semantic Annotation
Web Service, Semantic Web, Semantic Annotation, Ontology, WSDL, OWL-S
Many recent works aim at developing methods and tools for the processing of semantic Web services. In order to be properly tested, these tools must be applied to an appropriate benchmark, taking the form of a collection of semantic WS descriptions. However, all of the existing publicly available collections are limited by their size or their realism (use of randomly generated or resampled descriptions). Larger and realistic syntactic (WSDL) collections exist, but their semantic annotation requires a certain level of automation, due to the number of operations to be processed. In this article, we propose a fully automatic method to semantically annotate such large WS collections. Our approach is multimodal, in the sense it takes advantage of the latent semantics present not only in the parameter names, but also in the type names and structures. Concept-to-word association is performed by using Sigma, a mapping of WordNet to the SUMO ontology. After having described in details our annotation method, we apply it to the larger collection of real-world syntactic WS descriptions we could find, and assess its efficiency.
Introduction
The semantic Web encompasses technologies which can make possible the generation of the kind of intelligent documents imagined ten years ago [1]. It proposes to associate semantic metadata taking the form of concepts with Web resources. The goal is to give a formal representation of the meaning of these resources, in order to allow their automatic processing. The process of defining such associations is known as semantic annotation (or annotation for short), and generally relies on libraries of concepts collectively described and structured under the form of ontologies. The result is Web documents with machine interpretable mark-up that provide the source material for software agents to operate. The annotation of Web resources is obviously fundamental to the building of the semantic Web.
Several description languages have been proposed for this purpose, the main ones being OWL-S, WSMO [6], WSDL-S [7] and SAWSDL [8]. While OWL-S and WSMO define their own rich semantic models for WS, WSDL-S and SAWSDL work in a bottom-up fashion by preserving the information already present in WSDL. These description languages are used in many research projects focusing on various semantic-related applications like automatic discovery and composition. In order to test these applications, one needs a benchmark, i.e. a large collection of annotated WS [9]. Such collections exist, but are limited in terms of size, realism, and representativity. These limitations are due to the fact that the annotation process is generally performed manually, and is therefore costly. The use of an appropriate annotation tool can help decrease this cost, especially if it is automated. However, because of the specific structure of this kind of document, automatically annotating a WS description is much different, from the natural language processing perspective, than annotating other Web documents such as plain text. It consequently requires a particular form of text mining, leading to dedicated tools such as ASSAM [10] or MWSAF [11]. But those tools also have their own limitations, the main one being that they are only partially automated and require human intervention, which is a problem when annotating a large collection of WS descriptions.
In this paper we present the first version of MATAWS (Multimodal Automatic Tool for the Annotation of WS), a new semantic WS annotator, whose purpose is to solve some of these limitations. MATAWS was designed with the objective of batch annotating a large collection of syntactic descriptions and generating a benchmark usable to test semantic-related approaches. It focuses on data semantics (i.e. the annotation of input and output parameters) contained in WSDL files, and currently generates OWL-S files (other output formats will shortly be included). Our main contributions are: 1) a full automation of the annotation process and 2) the use of a multimodal approach. We consider not only the parameter names, but also the names present in the XSD types used in the WSDL descriptions: type names, and names of the fields defined in complex types.
The rest of this paper is organized as follows. Section 2 presents both existing ways of retrieving a collection of semantic WS descriptions: recover a publicly available collection and annotate a syntactic collection using one of the existing annotation tools. In section 3, we introduce MATAWS and describe our multimodal approach. In section 4 we apply MATAWS to the annotation of a publicly available collection of syntactic WS descriptions. Finally, in section 5 we discuss the limitations of our tool and explain how we plan to solve them.
Solutions to Access an Annotated Collection
When looking for a collection of semantic WS descriptions, one can consider two possibilities: either using a predefined collection, or creating one's own. In this section, we first review the main existing collections and their properties. The creation of a collection can be performed either by using a random model to generate artificial descriptions, or by semantically annotating a collection of real-world syntactic descriptions. The usual goal when looking for a semantic collection is to test WS-related tools on realistic data. In our opinion, the properties of WS collections are not known well enough to allow the definition of a realistic generative model, which is why we favor the second solution. For this reason, in the second part of this section, we also review the main tools for annotating WS descriptions.
Collections of Semantic Descriptions
The main publicly available collections of semantic WS are those provided by the ASSAM WSDL Annotator project, SemWebCentral and OPOSSum. Their major features are gathered in Table 1.
The ASSAM WSDL Annotator project (Automated Semantic Service Annotation with Machine learning) [12] includes two collections of WS descriptions named Full Dataset and Dataset2. Full Dataset is a collection of categorized WSDL files, which contains 816 WSDL files describing real-world WS. Dataset2 is a collection of OWL-S files, obtained by annotating a subset of the WSDL files using the ASSAM Annotator (cf. section 2.2). 164 descriptions were fully labeled, assigning ontology references to the WS itself, its operations and their inputs and outputs. SemWebCentral [13] is a community whose purpose is to gather efforts from people working in the semantic Web area. Three semantic collections are available: OWLS-TC (OWL-S Test Collection), SAWSDL-TC (SAWSDL Test Collection) and SWS-TC (Semantic WS Test Collection). OWLS-TC3 is the third version of this test collection. It provides 1007 semantic descriptions written in OWL-S, from seven different domains. Part of the descriptions were retrieved from public IBM UDDI registries and semi-automatically transformed from WSDL to OWL-S. SAWSDL-TC originates in the OWLS-TC collection. It was subsequently resampled to increase its size, and converted to SAWSDL. The collection provides 894 semantic WS descriptions, distributed over the same seven thematic domains as OWLS-TC. SWS-TC is a collection of 241 OWL-S descriptions. There is not much information about this collection.
OPOSSum (Online POrtal for Semantic Services) [14] is a joint community initiative for developing a large collection of real-world WS with semantic descriptions. Its aim is to create a suitable test bed for semantically enabled WS technologies. OPOSSum gathered the three semantic collections of SemWebCentral, plus the Jena Geography Dataset collection, explicitly collected within OPOSSum. The collection contains 201 real-world WS descriptions retrieved from public sources. All the described WS belong to the domains of geography and geocoding. Unfortunately, for now, no semantic descriptions are available for the services of the Jena Geography Dataset, which is why this collection is absent from Table 1.
These collections have been widely used in semantic WS-related works [15,16]. As shown in Table 1, they all focus on the annotation of the data elements, which corresponds to our objective. However, one can notice some limitations. The description of SWS-TC is insufficient; it is not even clear whether the WS descriptions are real-world. Dataset2 contains only real-world WS descriptions, but it is very small, which raises questions about its representativity. On the contrary, OWLS-TC3 and SAWSDL-TC contain a substantial number of descriptions. Nevertheless, these have been partially resampled in an undocumented way, which raises important questions concerning their realism.
Annotation Tools
From our point of view, WS annotation is a one-time task, aiming at annotating legacy WS, which are described only syntactically. Newly created or modified WS should be (re)annotated manually by their authors, which is much preferable in terms of quality to any automatic processing. For this reason, and due to the specific nature of WS annotation, we are not concerned by all 7 requirements stated by Uren et al. [3] for general annotation tools. It is of course necessary to use standard formats for input and output (R1). A polyvalent environment is not necessary, since we do not want to modify existing descriptions or create any new ones (R2). The support of multiple or changing ontologies is relevant (R3), but it is not the most important point, so we chose to ignore it in this first work. The input format is constrained to WSDL (R4), since it is the de facto standard for syntactic WS description. As stated before, we do not plan to maintain annotations if WS are modified (R5). The model of annotation storage (R6) is constrained by the output format: separate form for OWL-S and integrated for WSDL-S and SAWSDL. Finally, the level of automation is of great interest to us, given our context (R7). Only a few publicly available tools exist to semantically annotate WS descriptions. Table 2 presents the main ones and summarizes their properties. They all take a set of WSDL files as input (R1 and R4), but differ on several properties such as their level of automation (R7) and the language used to output the semantic descriptions (R1). The tools are described in detail in the rest of this subsection.
Radiant is an open source tool created at the Georgia University [17]. It takes the form of an Eclipse plug-in and can output both SAWSDL and WSDL-S files. It provides a GUI which presents the elements constituting the WS description and allows selecting the concepts one wants to associate with parameters or operations, by browsing the selected ontologies. This interface makes the annotation process easier, but the annotation is nevertheless fully manual.
ASSAM is an open source Java program developed at the University College Dublin [12], able to output OWL-S files. It provides assistance during the annotation process. First, the user starts manually annotating parameters and/or operations using an existing ontology. Meanwhile, ASSAM identifies the most appropriate concepts using machine learning methods. After enough information has been provided, the software is able to propose a few selected and supposedly relevant concepts when the user annotates a new WS.
MWSAF is another open source Java tool created at the Georgia University [11]. It outputs WSDL-S files, and like ASSAM it has a machine learning capability allowing it to assist the user during the annotation process. It is able to annotate not only parameters and operations, but also non-functional elements.
WSMO Studio is an Eclipse plug-in initially designed to edit semantic WS based on the WSMO model. An extension allows annotating WS parameters and operations, and outputting the result in the form of SAWSDL files [18]. However, the tool does not provide any assistance to the user and the process is fully manual. Besides these annotation tools, several software tools allow converting WSDL files to OWL-S files, but without performing any semantic annotation: they only apply a syntactic transformation and present the information contained in the original WSDL file in a form compatible with the OWL-S recommendation. WSDL2OWLS is an open source Java application created at Carnegie Mellon University [19]. OWL-S Editor is a plug-in for Protégé (itself an ontology development environment) created at SRI [20]. Another tool performing the same task is also called OWL-S Editor, but was developed at Malta University [21].
From this review, we can conclude that the existing annotation tools present various limitations relative to our goals. First, from a practical perspective, some of these tools are old and no longer supported, which can cause installation and/or usage problems. For instance, Radiant and ASSAM are no longer compatible with the current versions of some of the Eclipse plug-ins, libraries or APIs they rely on; meanwhile MWSAF installs and runs fine, but generates files without any of the annotations defined by the user. More importantly, these tools require substantial human intervention: Radiant and WSMO Studio are fully manual, whereas ASSAM and MWSAF only assist the user, after a compulsory learning phase. This justifies the development of our own tool, which we present in the next section.
Proposed Annotation Method
The absence of an existing solution fulfilling our needs compelled us to develop our own tool to semantically annotate WS descriptions. The main differences with the other annotation tools are the exploitation of several sources of information and the automation of the annotation process. In this section, we first describe the general architecture of our tool, which is made up of several independent components. We then focus separately on the components of interest, explaining their design and functioning.
General Architecture
MATAWS takes a collection of WSDL files as input and generates a collection of OWL-S files as output. Fig. 1 gives an insight into its modular structure, which includes five different components. Among these components, two use external APIs (Associator and Output Component), whereas the three remaining ones were developed by us in Java. The Input and Output components are not of great interest with regard to the topic of this article, which is why we describe them only briefly here. The other components are described in detail in the following subsections. The Input Component is in charge of extracting the set of all operation parameters defined in the considered collection of WSDL files. We designed a parser able (among other things) to retrieve the parameter names, type names and type structures (in the case of complex types) [22]. The Output Component is used after the annotation process to generate a collection of OWL-S files corresponding to annotated versions of the input WSDL files. For this purpose, we selected the Java OWL-S API, which provides programmatic read/write access to OWL-S service descriptions [5]. Note we plan to add support for WSDL-S and SAWSDL by using other appropriate APIs.
The three remaining components correspond to the core of the annotation process. After the Input Component has parsed the WSDL files, it feeds parameter information to the Preprocessor. This one first focuses on the parameter names, decomposing, normalizing and cleaning them so that they can be treated by the Associator. This component is based on the inference engine Sigma [23], whose role is to associate an ontological concept with a word. If Sigma is successful and manages to return a concept, this one is associated with the considered parameter. After all the parameters of a WS have been annotated, the Output Component is used to generate an OWL-S file with both the information contained in the original WSDL file and the selected concepts. However, for various reasons explained later, it is not always possible for Sigma to find a suitable concept for every parameter. In this case, the Type Explorer accesses some properties related to the parameter data type, to obtain what we call subparameters. These are then fed to the Preprocessor and the core processing starts again. In case of repeated annotation failure, this process can be repeated recursively until success or until no subparameters remain.
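To make this control flow concrete, here is an illustrative Python sketch; MATAWS itself is written in Java, so the names and types below are assumptions. preprocess, associate and explore_type are stubs here, fleshed out in the sketches accompanying the following subsections.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Parameter:
    name: str
    type_name: str
    subparameters: List["Parameter"] = field(default_factory=list)

def preprocess(name: str) -> List[str]:
    # Stub: decomposition, normalization and filtering (see Preprocessor).
    return [name.lower()]

def associate(word: str) -> Optional[str]:
    # Stub: Sigma's WordNet-to-SUMO lookup (see Associator).
    return None

def explore_type(param: Parameter) -> Optional[str]:
    # Stub: fallback on type information (see Type Explorer).
    return None

def annotate(param: Parameter) -> Optional[str]:
    # Try the parameter name first; fall back on the data type.
    for word in preprocess(param.name):
        concept = associate(word)
        if concept is not None:
            return concept
    return explore_type(param)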
Preprocessor
In order to work properly and propose a suitable concept, the Associator needs to process clear and normalized words. However, the names defined in real-world WS certainly do not meet this criterion. First, the meaning of an operation, parameter or type can hardly be described using a single word. For this reason, most names are made up of several concatenated words, separated either by alternating upper and lower cases or by using special characters such as dots, underscores, hyphens, etc. Second, sometimes the result is too long and abbreviations are used instead of the complete words. Finally, an analysis of any collection quickly shows that various additional characters such as digits or seemingly useless separators can also appear. Of course, there is no way to define an exhaustive list of the various forms a name can take in a WS description, but WS programmers actually follow only a few conventions, which allows performing very efficient preprocessing by applying a set of simple transformations to break a name into usable words. We distinguish three steps during name preprocessing: decomposition, normalization and filtering.
The decomposition consists in taking advantage of the different types of concatenations we identified to break a name into several parts. It also involves some cleaning, in the sense all characters which are not letters are removed and diacritical marks are deleted. Table 3 shows some examples involving case alternation, and digit and underscore used as separators.
The role of the normalization is first to provide the Associator with a clean version of the word, typographically speaking, by setting each word to lower case. Moreover, the normalization handles abbreviations, by replacing them with the corresponding full-length words. Table 3 gives an example of the name no being replaced by the word number. However, this last task is very context-dependent, because some strings are both full words and common abbreviations. For instance, no could simply mean the opposite of "yes", used to negate the following concatenated word, e.g. no_limit.
For this reason, human intervention can be necessary to set up this preprocessing, and adapt it to the considered collection. We chose to allow the user to define a list of common abbreviations.
Finally, we added a filtering step to deal with stop-words, i.e. words with no particular semantic information relative to their context. For instance, the string parameter commonly appears in parameter names without bringing any significant information, since the syntax of the WSDL file already makes it possible to know whether a certain name refers to a parameter. For this reason, it can be considered noise and ignored. Even more than before, the nature of the stop-words is closely linked to the domain of application, and human intervention is required to adapt the list of stop-words we defined.
Let us consider as an example the preprocessing of the name ASessionId_02. First it will be broken down to the words A, Session and Id while the numeric end of the name (02) will be ignored. The normalization step will transform them in a, session (lowercase) and identity (replacing an abbreviation). Finally, the filter will remove the article a, because it is a stop-word. Eventually, for this name ASessionId_02, the Preprocessor will output the two words session and identity.
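The following Python sketch reproduces the three preprocessing steps on this example; the abbreviation and stop-word lists are illustrative and, as noted above, should be adapted to the collection at hand.

import re

ABBREVIATIONS = {"id": "identity", "no": "number"}   # user-defined, illustrative
STOP_WORDS = {"a", "the", "parameter", "body"}       # user-defined, illustrative

def decompose(name: str) -> list:
    # Split on case alternation; digits and separators are dropped,
    # since everything that is not a letter is ignored.
    return re.findall(r"[A-Z]+(?=[A-Z][a-z])|[A-Z]?[a-z]+|[A-Z]+", name)

def normalize(words: list) -> list:
    # Lowercase and replace known abbreviations with full words.
    return [ABBREVIATIONS.get(w.lower(), w.lower()) for w in words]

def filter_stop_words(words: list) -> list:
    return [w for w in words if w not in STOP_WORDS]

print(filter_stop_words(normalize(decompose("ASessionId_02"))))
# -> ['session', 'identity']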
Associator
As mentioned before, we use an existing tool called Sigma to associate a concept with a word. It is written in Java and allows creating, testing, modifying and inferring ontologies [23]. It comes with the Suggested Upper Merged Ontology (SUMO), which (unlike its name suggests) also contains mid-level and domain ontologies [24]. SUMO is free, covers a wide range of fields, and has been mapped to the whole WordNet lexicon [25]. It was initially defined using the SUO-KIF language [26], and it is currently being converted to OWL [27]. Although its main purpose is to work on ontologies, Sigma also offers programmatic access to this mapping under the form of a method taking an English word as input and outputting a SUMO concept. Table 4 gives a few examples of such associations. The names we process are most of the time not plain English words, which justifies our preprocessing.
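Sigma itself is a Java tool; this Python stub only shows the shape of the word-to-concept call MATAWS relies on. The two mapping entries are illustrative stand-ins for Sigma's WordNet-to-SUMO lookup, not its actual output.

from typing import Optional

WORDNET_TO_SUMO = {
    "computer": "Computer",  # illustrative SUMO concept
    "month": "Month",        # illustrative SUMO concept
}

def associate(word: str) -> Optional[str]:
    # A real implementation would query Sigma's WordNet-SUMO mapping.
    return WORDNET_TO_SUMO.get(word)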
Type Explorer
Although our focus is primarily on parameter names, we described the two previous components in general terms, because they can be applied to any kind of name. Indeed, different difficulties can arise, making it impossible to associate a concept with a parameter name. First, the Preprocessor might fail to break the name down into relevant words, hence feeding the Associator strings it cannot map to appropriate concepts. Second, the Preprocessor might filter out all the words resulting from the name decomposition, meaning it will not be able to provide the Associator any word to process. This can be the case, for instance, when a name is composed of a single stop-word or several concatenated ones (e.g. SomeParameter_08). Third, even if at least one correct English word can be fed to the Associator, it is possible this one simply does not find any associated concept. Any of these three cases, or any combination of them, results in no concept being associated with the considered parameter. To overcome this problem, we propose a multimodal approach taking advantage of the latent semantics contained in the data type information available through WSDL files. First, in real-world WS, a large proportion of types have a user-defined name, whose meaning can be considered complementary to the parameter name. Additionally, many of these custom types are complex in the XSD sense, i.e. they can be compared to the structured data types used in programming languages. A parameter whose type is complex is made up of several subparameters, which can recursively be composed themselves of other subparameters, if they have a complex type too. Therefore, by taking advantage of the data types, one can access the semantic information implicitly contained in the type names and subparameter names and types.
Fig. 2 gives an example of a complex type extracted from a real-world WSDL file:

<message name="GetCategories">
  <part name="category" type="categoryDetail" />
</message>
...
<complexType name="categoryDetail">
  <sequence>
    <element name="singer" type="xsd:string" />
    <element name="composer" type="xsd:string" />
  </sequence>
</complexType>

Fig. 2. Excerpt from a real-world WSDL file: parameter with a complex XSD type.

A parameter named category has a complex type called categoryDetail, defined as a sequence of two strings: a singer and a composer. If we suppose the word category is a stop-word, the Associator will not be able to provide any concept for this parameter. However, considering the words singer and composer gives access to additional information usable by the Associator.
The principle of our Type Explorer component is as follows. It is activated when the processing of the parameter name could not be used to successfully identify any concept. We start with the type name: if it is custom, we process it exactly like the parameter name, going through the preprocessing and association steps. In case of failure to associate any concept, we go further and consider the type structure. If it is complex, we access the first level of subparameters. For now, we only consider XSD sequences, because these are the most widespread; however, the same approach can be extended to the other kinds of XSD types. We first focus on the subparameter names, and if the association is inconclusive, on their type names. In case of failure, the process recursively goes on by analyzing the structure of the subparameter types to access the second level of subparameters. The recursion stops when there is no more level to process (permanent failure) or as soon as a concept can be associated (success).
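A sketch of this recursive exploration follows, reusing the illustrative Parameter, preprocess and associate names from the earlier pipeline sketch; it is an assumption-laden reconstruction, not MATAWS's actual code.

from typing import Optional

def explore_type(param: "Parameter") -> Optional[str]:
    # 1) Try the (possibly custom) type name first.
    for word in preprocess(param.type_name):
        concept = associate(word)
        if concept is not None:
            return concept
    # 2) For a complex type, try each subparameter name.
    for sub in param.subparameters:
        for word in preprocess(sub.name):
            concept = associate(word)
            if concept is not None:
                return concept
    # 3) Recurse into the subparameters' own types (next level).
    for sub in param.subparameters:
        concept = explore_type(sub)
        if concept is not None:
            return concept
    return None  # permanent failure: no more levels to process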
Application to Real-World Descriptions
To assess its performance, we applied MATAWS to a collection of syntactic WS descriptions. We wanted to use a large collection of real-world descriptions, in order to avoid specific cases and to get consistent results. Given these criteria, the best collection we could find is the Full Dataset collection from the ASSAM project [12], previously mentioned in our review of WS description collections (section 2.1). It contains 7877 operations distributed over 816 real-world WS descriptions. In this section, we present the results we obtained on this collection. First we adopt a quantitative point of view and distinguish parameters only in terms of annotated or non-annotated. Second, we analyze the results qualitatively and discuss the relevance of the concepts associated with the parameters.
Quantitative Aspect
We first focus on the proportion of parameters from the Full Dataset collection which could be automatically annotated by MATAWS. In this section, we consider a parameter to be successfully annotated if our tool was able to associate it with at least one concept. Table 5 displays several values, corresponding to the progressive use of the different components described in section 3. Each row represents the performance obtained when using simultaneously the specified functionality and those mentioned in the previous rows. The first line corresponds to the direct application of the Associator, with no significant preprocessing. The only transformation consists in setting parameter names to lowercase, which is compulsory to apply Sigma. Under these conditions, MATAWS can propose a concept for 39.63% of the parameters. This means close to 40% of the parameter names are single words, which can be retrieved directly in WordNet. The rest need more preprocessing to be successfully annotated.
The second row corresponds to the introduction of the decomposition step. The small improvement in the success rate (around +2%) suggests that compound names do not contain directly recognizable words. By adding the normalization step, the improvement is extremely large (almost +48%). Further analysis shows this is only marginally caused by the replacement of abbreviations by full words. Among the remaining 10%, one can find specific parameter forms we plan to handle in our preprocessing, as well as word variations such as plural forms, which are also easy to integrate into our approach. A strong decrease (-21%) can be observed when introducing the filtering step. This means that, among the associated words, many correspond to stop-words, or concatenations of stop-words. In this case, the Annotator might be able to retrieve a concept, but this concept is useless in this context (e.g., parameter). The introduction of the Type Explorer slightly improves our success rate (+3%), but its effect is not as strong as we expected. This can be explained by the fact that most parameters with a custom type were annotated using only their names. Moreover, the type structure is difficult to exploit in this collection, because some types defined as complex surprisingly do not actually have any content (i.e., no subparameters at all).
Qualitative Aspect
The quantitative analysis reflects the fact that a large proportion of parameters could be associated with a concept. The question is now to know whether these associations, which were automatically retrieved, are relevant to the context. For this purpose, we isolated all the words detected in the whole set of parameters, thanks to our Preprocessor and Type Explorer. Table 6 shows the most frequent words with their associated concepts.
Overall, most of the annotated words are associated with relevant concepts, leading to an approximate success rate of 83%. Words like computer, month, numeric, password, and customer are perfectly recognized, but this is not the case for several widespread words such as name, user, address, or value.
Irrelevant concepts are due to the fact that some words have several meanings and can therefore be associated with several concepts. Such ambiguity can be resolved directly when the considered word most probably has a unique meaning in the context of WS. For instance, when the word user is submitted to Sigma, it outputs three concepts, including the one expected in this case, i.e., "someone employing something".
However, the top result corresponds to "someone who does drugs", which explains the associated concept (DiseaseOrSyndrome). Similarly, the appropriate concept for name is among the concepts returned by Sigma, but the top result corresponds to its meaning in the expression "in the name of the law", hence the concept (HoldsRight). The quality of the annotation could be improved for such common words by simply selecting the appropriate concepts a priori, just as we defined lists of stop-words and abbreviations. The selection of an accurate concept can also be context-dependent, which makes it difficult or impossible to perform a priori. For instance, the word value corresponds to many concepts equally likely to appear in a WS description: quantity, monetary value, time duration, etc. Regarding this problem, the quality of the automatic annotation can be improved by deriving concepts from several words, when they are available. For instance, if the parameter name is value01 and its type is myCurrencyType, then we have enough information to infer the most relevant concept. This can be done, for example, by taking advantage of the WordNet textual definitions.
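As a rough illustration of this last idea, a definition-overlap heuristic in the spirit of the Lesk algorithm could score WordNet senses against the words gathered from the parameter name and its type. The following sketch uses NLTK's WordNet interface; it is not part of MATAWS, and the scoring is deliberately naive.

```python
# Naive definition-overlap disambiguation sketch (assumes NLTK and its
# WordNet data are installed: nltk.download('wordnet')).
from nltk.corpus import wordnet as wn

def best_sense(word, context_words):
    """Pick the WordNet sense whose definition overlaps the context most."""
    context = {w.lower() for w in context_words}
    best, best_score = None, -1
    for synset in wn.synsets(word):
        gloss = set(synset.definition().lower().split())
        score = len(gloss & context)
        if score > best_score:
            best, best_score = synset, score
    return best

# Parameter name 'value' with a hypothetical type name 'myCurrencyType':
print(best_sense("value", ["currency", "money", "amount"]))
```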
Conclusion
In this article, we presented our tool MATAWS, which implements a new method to semantically annotate WS descriptions. It focuses on WS parameters, i.e., on the Data semantics [4], and implements most of the requirements defined by Uren et al. [3] that are relevant to our context: it processes WSDL files and produces OWL-S files (R1 & R4), and is fully automated (R7). This automation level is achieved through the use of both an ontological mapping of the WordNet lexicon and a multimodal approach, which uses not only parameter names but also data type names and structures to identify appropriate ontological concepts. When compared to existing annotation tools such as ASSAM [12] and MWSAF [11], it is important to note that MATAWS is much less flexible, because it does not include any machine learning abilities. This is due to the fact that our goal is different: we want to batch annotate a large collection of WS descriptions without any human intervention, whereas the cited works aim at helping human users to annotate individual WS descriptions. Moreover, we tested MATAWS on a large collection of syntactic real-world WS descriptions, and despite its simplicity, it obtained very promising results, with 72% of the parameters annotated.
The version presented in this article constitutes a first step in the development of our tool. Although some parameters could not be associated with relevant concepts, it is clear that we reduced the manual labor required for the annotation of WS. However, for now this reduction is not large enough to dispense with human intervention, which is needed at least to check the result of the annotation process. To overcome this limitation, we plan to improve our tool on several points. First, in order to lower the proportion of parameters we failed to annotate, we can use other sources of latent semantics present in the WSDL descriptions: natural language descriptions and the names of messages and operations. Second, the association step can be improved in two ways. We can complete the Associator by including more tools able to map a lexicon to an ontology, such as DBpedia [28]. This would complete and enhance the results already obtained through Sigma. Also, by taking advantage of our multimodal approach, we can retrieve all the words related to a given parameter through its data type, in order to compare them with concept definitions expressed in natural language (as found in a dictionary).
Fig. 1. Architecture of MATAWS.
Table 1. Collections of semantic WS descriptions: main features.

Name | Source | Type | Language | Annotated Type | Size | Particular features
Dataset2 | ASSAM project | Real-world descriptions | OWL-S | Data, Functional | 164 | Processed using the ASSAM annotator
OWLS-TC3 | SemWeb Central | Real-world descriptions, partially resampled | OWL-S | Data | 1007 | Single interface, one operation per service
SAWSDL-TC | SemWeb Central | Real-world descriptions, partially resampled | SAWSDL | Data | 894 | Single interface, one operation per service
SWS-TC | SemWeb Central | N/A | OWL-S | Data | 241 | N/A
Table 2. WS semantic annotation tools and their properties.

Name | Output Format | Annotated Type | Automation | Last Update
Radiant | SAWSDL, WSDL-S | Data, Functional | Fully manual | May 2007
ASSAM | OWL-S | Data, Functional | Assisted | May 2005
MWSAF | WSDL-S | Data, Functional, Non-Functional | Assisted | July 2004
WSMO Studio | SAWSDL | Data, Functional | Fully manual | Sept. 2007
Table 3. Preprocessing examples.

Transformation | Original Name | Extracted Words
Decomposition | WhiteMovesNext | White, Moves, Next
Decomposition | Number3Format | Number, Format
Decomposition | AUsername | Username
Decomposition | User_name | User, name
Normalization | no | number
Normalization | Password | password
Filtering | - | -
Table 4. Concept association examples.

Word | SUMO Concept associated by Sigma
buffalo | HoofedMammal
school | EducationalProcess
talk | Communication
Table 5. Success rates obtained by using the different functionalities of MATAWS.

Added Modification | Proportion of Annotated Parameters
No preprocessing | 39.63%
Decomposition | 41.94%
Normalization | 90.01%
Filtering | 69.06%
Type Explorer | 72.04%
Table 6. List of the most frequent words, with their associated concept. Bold rows represent semantically irrelevant concepts.

Word | Occurrences | Associated Concept
identity | 1255 | TraitAttribute
key | 548 | Key
name | 470 | HoldsRight
user | 424 | DiseaseOrSyndrome
code | 295 | Procedure
number | 294 | Object
address | 258 | SubjectiveAssessmentAttribute
date | 203 | DateFruit
city | 168 | City
amount | 135 | ConstantQuantity
administrator | 128 | Position
message | 115 | Text
value | 106 | ColorAttribute
password | 98 | LinguisticExpression
pass | 70 | ContestAttribute
customer | 52 | Customer
company | 51 | Corporation
phone | 41 | Device
electronic | 35 | ElectricDevice
computer | 33 | Computer
mailing | 33 | Transfer
month | 32 | Month
numeric | 32 | Number
Acknowledgments. The authors would like to thank Koray Mançuhan, who participated in the development of MATAWS.
References
1. Berners-Lee, T., Hendler, J., Lassila, O.: The Semantic Web. Scientific American (2001)
2. Nagarajan, M.: Semantic Annotations in Web Services. In: Semantic Web Services, Processes and Applications, Vol. 3, pp. 35-61. Springer (2006)
3. Uren, V., Cimiano, P., Iria, J., Handschuh, S., Vargas-Vera, M., Motta, E., Ciravegna, F.: Semantic Annotation for Knowledge Management: Requirements and a Survey of the State of the Art. Journal of Web Semantics 4, 14-28 (2006)
4. Sheth, A.P.: Semantic Web Process Lifecycle: Role of Semantics in Annotation, Discovery, Composition and Orchestration. In: Workshop on E-Services and the Semantic Web (2003)
5. Martin, D., Burstein, M., Hobbs, J., Lassila, O., McDermott, D., McIlraith, S., Narayanan, S., Paolucci, M., Parsia, B., Payne, T., Sirin, E., Srinivasan, N., Sycara, K.: OWL-S: Semantic Markup for Web Services, http://www.w3.org/Submission/OWL-S/
6. de Bruijn, J., Bussler, C., Domingue, J., Fensel, D., Hepp, M., Keller, U., Kifer, M., König-Ries, B., Kopecky, J., Lara, R., Lausen, H., Oren, E., Polleres, A., Roman, D., Scicluna, J., Stollberg, M.: Web Service Modeling Ontology, http://www.w3.org/Submission/WSMO/
7. Akkiraju, R., Farrell, J., Miller, J., Nagarajan, M., Schmidt, M.T., Sheth, A., Verma, K.: Web Service Semantics - WSDL-S, http://www.w3.org/Submission/WSDL-S/
8. Farrell, J., Lausen, H.: Semantic Annotations for WSDL and XML Schema, http://www.w3.org/TR/sawsdl/
9. Küster, U., König-Ries, B., Krug, A.: OPOSSum - an Online Portal to Collect and Share SWS Descriptions. In: International Conference on Semantic Computing, pp. 480-481 (2008)
10. Hess, A., Johnston, E., Kushmerick, N.: ASSAM: A Tool for Semi-Automatically Annotating Semantic Web Services. In: International Semantic Web Conference (2004)
11. Patil, A., Oundhakar, S., Sheth, A., Verma, K.: METEOR-S Web Service Annotation Framework. In: International Conference on the World Wide Web (2004)
12. Hess, A.: ASSAM (Automated Semantic Service Annotation with Machine Learning) WSDL Annotator, http://www.andreas-hess.info/projects/annotator/index.html
13. InfoEther, BBN Technologies: SemWebCentral.org, http://wwwprojects.semwebcentral.org/
14. Küster, U., König-Ries, B., Krug, A.: OPOSSum Online Portal for Semantic Services, http://hnsp.inf-bb.uni-jena.de/opossum/index.php?action=dataguide
15. Skoutas, D., Sacharidis, D., Kantere, V., Sellis, T.K.: Efficient Semantic Web Service Discovery in Centralized and P2P Environments. In: ISWC (2008)
16. Ma, J., Zhang, Y., He, J.: Web Services Discovery Based on Latent Semantic Approach. In: ICWS (2008)
17. Gomadam, K., Verma, K., Brewer, D., Sheth, A., Miller, J.: Radiant: A Tool for Semantic Annotation of Web Services. In: International Semantic Web Conference (2005)
18. Dimitrov, M., Simov, A., Momtchev, V., Konstantinov, M.: WSMO Studio - a Semantic Web Services Modelling Environment for WSMO. In: European Semantic Web Conference (2007)
19. Srinivasan, N.: WSDL2OWL-S, http://www.semwebcentral.org/projects/wsdl2owl-s/
20. Elenius, D., Denker, G.: OWL-S Editor, http://owlseditor.semwebcentral.org/index.shtml
21. Scicluna, J., Abela, C., Montebello, M.: Visual Modelling of OWL-S Services. In: IADIS International Conference WWW/Internet, Madrid, ES (2004)
22. Cherifi, C., Rivierre, Y., Santucci, J.-F.: WS-NEXT, a Web Services Network Extractor Toolkit. In: 5th International Conference on Information Technology (2011)
23. Pease, A.: Sigma Knowledge Engineering Environment, http://sigmakee.sourceforge.net/
24. Niles, I., Pease, A.: Towards a Standard Upper Ontology. In: International Conference on Formal Ontology in Information Systems, Ogunquit, US-ME (2001)
25. Pease, A., Niles, I.: Linking Lexicons and Ontologies: Mapping WordNet to the Suggested Upper Merged Ontology. In: IEEE International Conference on Information and Knowledge Engineering, pp. 412-416 (2003)
26. IEEE: SUO-KIF (Standard Upper Ontology Knowledge Interchange Format), http://suo.ieee.org/SUO/KIF/suo-kif.html
27. McGuinness, D.L., Harmelen, F.: OWL Web Ontology Language, http://www.w3.org/TR/owl-features/
28. Universität Leipzig, Freie Universität Berlin, OpenLink: DBpedia.org, http://wiki.dbpedia.org
| [] |
[
"Huqariq: A Multilingual Speech Corpus of Native Languages of Peru for Speech Recognition",
"Huqariq: A Multilingual Speech Corpus of Native Languages of Peru for Speech Recognition"
] | [
"Rodolfo Zevallos rodolfojoel.zevallos@upf.edu \nPompeu Fabra University\nPontifical Catholic University of Peru Barcelona Spain\nLimaPeru\n",
"Luis Camacho luis.camacho@pucp.pe \nPompeu Fabra University\nPontifical Catholic University of Peru Barcelona Spain\nLimaPeru\n",
"Nelsi Melgarejo nelsi.melgarejo@pucp.pe \nPompeu Fabra University\nPontifical Catholic University of Peru Barcelona Spain\nLimaPeru\n"
] | [
"Pompeu Fabra University\nPontifical Catholic University of Peru Barcelona Spain\nLimaPeru",
"Pompeu Fabra University\nPontifical Catholic University of Peru Barcelona Spain\nLimaPeru",
"Pompeu Fabra University\nPontifical Catholic University of Peru Barcelona Spain\nLimaPeru"
] | [] | The Huqariq corpus is a multilingual collection of speech from native Peruvian languages. The transcribed corpus is intended for the research and development of speech technologies to preserve endangered languages in Peru. Huqariq is primarily designed for the development of automatic speech recognition, language identification and text-to-speech tools. In order to achieve corpus collection sustainably, we employ the crowdsourcing methodology. Huqariq includes four native languages of Peru, and it is expected that by the end of the year 2022, it can reach up to 20 native languages out of the 48 native languages in Peru. The corpus has 220 hours of transcribed audio recorded by more than 500 volunteers, making it the largest speech corpus for native languages in Peru. In order to verify the quality of the corpus, we present speech recognition experiments using 220 hours of fully transcribed audio. | 10.48550/arxiv.2207.05498 | [
"https://arxiv.org/pdf/2207.05498v1.pdf"
] | 250,450,952 | 2207.05498 | 5b8db6e1d9d69d94da6ee38eb6ce0fee8a0a5148 |
Huqariq: A Multilingual Speech Corpus of Native Languages of Peru for Speech Recognition
12 Jul 2022
Rodolfo Zevallos rodolfojoel.zevallos@upf.edu
Pompeu Fabra University
Pontifical Catholic University of Peru Barcelona Spain
LimaPeru
Luis Camacho luis.camacho@pucp.pe
Pompeu Fabra University
Pontifical Catholic University of Peru Barcelona Spain
LimaPeru
Nelsi Melgarejo nelsi.melgarejo@pucp.pe
Pompeu Fabra University
Pontifical Catholic University of Peru Barcelona Spain
LimaPeru
Huqariq: A Multilingual Speech Corpus of Native Languages of Peru for Speech Recognition
12 Jul 2022Speech CorpusSpeech RecognitionLow-resource Languages
The Huqariq corpus is a multilingual collection of speech from native Peruvian languages. The transcribed corpus is intended for the research and development of speech technologies to preserve endangered languages in Peru. Huqariq is primarily designed for the development of automatic speech recognition, language identification and text-to-speech tools. In order to achieve corpus collection sustainably, we employ the crowdsourcing methodology. Huqariq includes four native languages of Peru, and it is expected that by the end of the year 2022, it can reach up to 20 native languages out of the 48 native languages in Peru. The corpus has 220 hours of transcribed audio recorded by more than 500 volunteers, making it the largest speech corpus for native languages in Peru. In order to verify the quality of the corpus, we present speech recognition experiments using 220 hours of fully transcribed audio.
Introduction
The Huqariq project responds to the endangerment currently faced by native languages in Latin America and to the lack of language technologies for low-resource languages in Peru (Rogers and Campbell, 2015). This situation is mainly due to the scarcity of speech corpora, which are the raw material for the creation of language tools; the few corpora that exist are privately licensed, whereas they should be in the public domain to contribute to the development and revitalization of these languages. Around the world, there are some initiatives for collecting corpora for low-resource languages under open licenses, employing different collection methodologies. One of the most successful methodologies is crowdsourcing, i.e., native speakers volunteering to help in the construction of the resources. This methodology is supported by web tools or mobile applications, which can be used massively. Our corpus collection tool is designed to expand organically to new native languages as community members record domain-specific base audios as prompts for the corpus collection. Unlike other tools (e.g., Common Voice (Ardila et al., 2019)), this tool does not use text to be read, as many native speakers of indigenous languages are illiterate in their native language. Therefore, our tool replaces reading texts with listening to audios. This subtle but important change facilitates corpus collection.
Prior work
Although the majority of the speech corpora employed in the most widely used tools are private, there are some worldwide initiatives of speech corpora with open licenses for low-resource languages. In 2019, the VoxForge project (VoxForge, 2019) collected a speech corpus for 17 languages; this project is community-driven, in the same way as Mozilla's Common Voice project (Ardila et al., 2019), which has collected 2500 hours of transcribed audio for 29 languages by crowdsourcing, being one of the projects with the most community support. On the other hand, speech corpus projects for native languages of Latin America are almost nonexistent: in 2019, a speech corpus of 142 hours of fully transcribed Mapudungun was released (Duan et al., 2019), and in 2018, the Siminchikkunarayku project (Cardenas et al., 2018) collected 99 hours of audio of Southern Quechua. Unfortunately, neither of these two projects has an open license.
Native Languages
Peru is a multicultural country, mainly due to the presence of native first nations, which make up about 10% of the population. Thanks to these peoples, 48 native languages are still spoken; however, they are at risk of extinction. These languages face major issues such as the lack of a unique grammar or writing system, the lack of presence on the Internet, the lack of a critical mass of expert linguists, and the lack of electronic resources (Cardenas et al., 2018). In this section, we present some important linguistic characteristics relevant to NLP, especially regarding dialectal and phonological variety, which play an important role in speech-based language technology.
Quechua
Quechua (ISO 639-3 que) is a family of languages spoken in South America with about 10 million speakers, not only in the Andean regions but also in the valleys and plains connecting the Amazon jungle and the Pacific coast. Quechua languages are considered highly agglutinative, with a subject-object-verb (SOV) sentence structure, and are mostly postpositional. Even though the classification of Quechua languages remains open to research (Heggarty et al., 2005; Landerman, 1992), recent work in language technology for Quechua (Rios, 2015; Rios and Mamani, 2014) has adopted the categorization system described by Torero (Torero, 1964). This categorization divides the Quechua languages into two main branches, QI (Glottolog quec1386) and QII (quec1388). Branch QI corresponds to the dialects spoken in central Peru, which are treated as one collective in this paper. QII is further divided into three branches, QIIA, QIIB and QIIC. QIIA groups the dialects spoken in Northern Peru, while QIIB those spoken in Ecuador and Colombia. In this paper we work with QI (Central Quechua, Glottolog quec1386) and QIIC (Southern Quechua, Glottolog quec1389).
Southern Quechua
Southern Quechua (QIIC) has two main variants: Chanka Quechua (ISO 639-3 quy) and Collao Quechua, also known as Cusco Quechua (ISO 639-3 quz). In both dialects, only the vowels /a/, /i/ and /u/ are found as phonemic vowels. Regarding consonants, Chanka Quechua has a total of 15, most of them voiceless; as in Spanish, the phoneme /tʃ/ is written as ch, /ɲ/ as ñ, and /ʎ/ as ll. On the other hand, Collao Quechua also has a glottalized and an aspirated version of each plosive consonant, giving it a total of 25 consonants. Both dialects have voiced consonants in their phonemic inventory due to the large number of borrowings from Spanish.
Central Quechua
Since there is greater dialectal variation among the variants of the QI branch than between Chanka Quechua and Collao Quechua, we will go into a bit more detail in this section. Unlike Southern Quechua, Central Quechua (QI) has 3 short phonemic vowels /a/, /i/ and /u/, and 3 long phonemic vowels /aa/, /ii/ and /uu/. Central Quechua also has 3 nasal consonants /m/, /n/, /ɲ/, 4 occlusive consonants /p/, /t/, /k/, /q/, 2 affricate consonants of variable value, 3 fricative consonants /s/, /ʃ/, /h/, 2 approximant consonants /j/, /w/, and 3 liquid consonants /ʎ/, /ɾ/, /l/. The uvular /q/ is pronounced as an occlusive only in Callejón de Huaylas, being a fricative in the other provinces: voiceless [χ] in Corongo for all positions, while in the Conchucos it is voiced [ʁ] in initial position and voiceless in coda. The alveolar nasal /n/ has three allophones, namely: the velar [ŋ], in syllabic coda and when it precedes the velar [k]; the uvular [ɴ], when it precedes [q]; and the bilabial [m], before [p]. The vibrant [ɾ] becomes a retroflex sibilant [ʂ] at word onset. The voiced bilabial /b/, dental /d/ and velar /g/, as well as the voiceless bilabial fricative /ɸ/ and the voiceless retroflex /ʂ/, are used as distinct phonemes only in borrowings from Spanish (MINEDU, 2021b).
Aymara
The Aymara language (ISO 639-3 aym) belongs to the Aru linguistic family and is spoken by the Aymara people; although it is in a vital state (MINEDU, 2018), it is considered an endangered language (Adelaar, 2014). Aymara is spoken in four countries: Argentina, Bolivia, Chile and Peru. In Peru, it is the second most spoken native language after Quechua, according to the 2017 census conducted by the National Institute of Statistics and Informatics (INEI, 2017). It is an agglutinative language. Aymara has 3 short phonemic vowels /a/, /i/ and /u/, and 3 long ones, ä /aa/, ï /ii/ and ü /uu/. It also features 26 consonant phonemes, most of them aspirated occlusives: ph [pʰ], th [tʰ] and kh [kʰ]. In addition, the aspirated postalveolar affricate is signaled by the triplet chh [tʃʰ], and an apostrophe is used to signal the occlusive and affricate ejectives p' [pʼ], t' [tʼ], ch' [tʃʼ] and k' [kʼ]. Like Spanish and Quechua, it features the phonemes /tʃ/ ch, /ɲ/ ñ, and /ʎ/ ll (MINEDU, 2021a).
Shipibo-Konibo
The Shipibo-Konibo people are one of the most influential communities in the Peruvian Amazon. They call themselves "Jonikon", which means "real people"; they have also adopted the exonym "shipibo". Their own language, or 'joikon', 'true language', is now known as Shipibo-Konibo. This language belongs to the Panoan linguistic family, which is an important subject of study for many linguistic researchers in Peru (Adelaar, 2014; Zariquiey et al., 2006). Shipibo-Konibo is an agglutinative language, with a high use of suffixes (130) plus some prefixes (13) in its word-formation process. Furthermore, the basic sentence order is SOV (subject-object-verb), as opposed to Spanish (SVO) (Valenzuela, 2003). This language is spoken by around 22 thousand people in 150 communities and is taught in almost 300 public schools (MINEDU, 2018). The majority of the population is bilingual, speaking both Shipibo-Konibo and Spanish. Although Shipibo-Konibo is still transmitted to children, there is a growing number of people who speak Spanish as their dominant language and achieve only partial or passive mastery of their native language. Furthermore, the degree of impact of Spanish speech and structure on Shipibo-Konibo is considerable. For these reasons, the language is considered to be in a vulnerable situation. The phonological repertoire of Shipibo consists of 16 consonants and 4 vowels. The vowels are characterized by the presence of two heights (high and low), among which it is important to point out the high central unrounded vowel.
Corpus Creation
Methodology
Like Common Voice, we used the crowdsourcing method, which is based on the massive help of volunteers for audio recordings. This methodology allowed us to collect as many audios as possible in a short time and with a small budget. We used two corpus collection applications (Huqariq and Tarpuriq), designed exclusively to record and to validate, respectively. Unlike the Common Voice platform, the volunteers do not have to read a sentence but listen to it. This functionality is important for the native languages of Peru, because a large part of the native speaker population is illiterate.
Text Corpus
This section describes the steps followed to collect the text used in the corpus. We used the official dictionaries of each language described in this research, which are publicly available on the Internet. We chose the official dictionaries issued by the Peruvian Ministry of Education because the texts in them are correctly written according to the official standard of each language. Table 1 lists the dictionaries used for the creation of our corpus. In order to organize the data of the collected dictionaries, a table was created manually with the following columns: language, family, variety, region, author, dictionary name, year, lexical entry, grammatical category, gloss, definition in Spanish, definition in source language, synonym in Spanish, synonyms in source language, notes (clarifications), example in Spanish, and example in source language. This table contains all the data from the collected dictionaries and was very helpful for the linguists who supported the project, since they could filter it to review the data in a simpler and faster way. Finally, the entries that did not have an example in the source language were eliminated, since these examples are used as transcriptions in the corpus.
Preprocessing and Normalization
After obtaining all the data from the dictionaries of the different languages in a table, we eliminated all the sentences in the "example in source language" column that had more than 10 words. This was done so that the volunteers would not have problems remembering the sentence to repeat when recording their voices. Subsequently, 4 native-speaker linguists corrected, normalized, and standardized the sentences in the "example in source language" column according to the grammar issued by the Ministry of Education and the Ministry of Culture for each language. For the Southern Quechua sentences, a morphological analyzer (Rios, 2015) was additionally used, which automatically standardizes text according to the rules of the Ministries of Education and Culture. Table 2 shows the number of sentences we selected for each language.
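The length filter described above amounts to a one-line check. A possible implementation over the dictionary table is sketched below; the column name follows the list given in the previous section, and the dict-based representation of the table rows is our assumption.

```python
# Minimal sketch of the sentence-length filter described above.
# 'entries' is assumed to be a list of dicts keyed by our table columns.
def select_sentences(entries, max_words=10):
    kept = []
    for entry in entries:
        sentence = entry.get("example in source language", "").strip()
        if sentence and len(sentence.split()) <= max_words:
            kept.append(sentence)
    return kept

print(select_sentences([{"example in source language": "Allinllachu kachkanki"}]))
```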
Recording of prompts
Linguists who are native speakers of each language recorded their voices reading each of the selected sentences. The recordings were made using the Tarpuriq application for Android, which has an audio recording module very similar to that of the Huqariq application for Android (Camacho and Zevallos, 2020). The recordings were made in a controlled environment, mainly free of noise and interference of any kind. All recordings made by the linguists were stored in a folder called "prompts", in subfolders named after their respective languages. All the recordings (prompts) made by the linguists are then entered into the Huqariq application so that they can be listened to by the volunteers recording their voices. Finally, the prompts were saved as 16-bit, single-channel WAV audio files with a sampling frequency of 16 kHz.
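For reference, the prompt format just described (16 kHz, mono, 16-bit PCM WAV) can be produced, for example, with the soundfile library, as in the following sketch; the file name and the silent placeholder signal are illustrative only.

```python
# Sketch: writing a prompt in the corpus format (16 kHz, mono, 16-bit PCM).
# Assumes the 'soundfile' package; 'audio' is a 1-D float array in [-1, 1].
import numpy as np
import soundfile as sf

SAMPLE_RATE = 16000
audio = np.zeros(SAMPLE_RATE, dtype=np.float32)  # 1 s of silence as a stand-in
sf.write("prompt_0001.wav", audio, SAMPLE_RATE, subtype="PCM_16")
```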
Recording and validation of audios
For the collection of recordings (audio files) from native speakers (users), Huqariq was used. This application allows native speakers to record their voices repeating the sentences they hear in the prompts mentioned above. The app assigns 200 sentences per user; this feature of Huqariq was developed in this research so that users have a goal and can be rewarded when they achieve it. The recordings of the volunteers have the same technical characteristics as the prompts. The recordings made by users were validated using 2 methods. The first method is an automated quality validation module that checks the noise, silence, and duration of the recordings; this method was incorporated into the Huqariq application. The second method was performed with Tarpuriq, which allows native linguists of the respective languages to validate the quality of the recordings through a voting system; this method is similar to the one used by Common Voice. Each recording must be voted on 3 times: if a recording receives two positive votes, it is marked as valid; on the contrary, if it receives two negative votes, it is marked as invalid. Recordings marked as valid are added to the final training, development, and test corpus. These 2 methods ensure a good quality corpus. The validated recordings were stored in a folder and subsequently divided into three data sets (train, dev, test) according to statistical power analyses. Given the total number of validated recordings in a language, the number of recordings in the test set is equal to the number needed to achieve a 99% confidence level with a margin of error of 1% relative to the number of recordings in the training set. The same is true for the development set. Table 3 shows the number of hours recorded and validated for each language. As can be seen, Southern Quechua has the highest number of hours collected. This is due to the fact that Southern Quechua has the largest number of native speakers compared to the other languages in this study. In addition, it has a greater participation in revitalization tasks due to the majority of research carried out for this language. Central Quechua, Aymara, and Shipibo-Konibo, on the other hand, unfortunately have little or no participation in revitalization or cultural promotion. The corpus presented here is the July 2021 version, which is the most up to date, since due to the pandemic we have not been able to continue working on its validation. The corpus currently has a private license, since part of the work was done with funds from private entities; for this reason, those interested in using it can write to us. On the other hand, the corpus statistics are visible on the Siminchikkunarayku page, where the following information can be seen: language, phrase, votes, gender, and accent. This information is relevant for different types of research, and for that reason we consider it useful to provide it, together with, in Table 4, the number of hours divided by train, dev, and test for each language.
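Two decision rules of the pipeline above are easy to make precise: the majority vote over three annotator votes, and the split size computed for a 99% confidence level with a 1% margin of error. The following sketch is our reading of the description (using the standard normal-approximation sample-size formula with finite-population correction), not the actual Huqariq/Tarpuriq code.

```python
# Sketches of the two decision rules described above (our interpretation).
import math

def is_valid(votes):
    """Majority vote over 3 annotator votes (True = positive vote)."""
    return sum(votes) >= 2  # two positive votes mark the recording valid

def split_size(population, z=2.576, margin=0.01, p=0.5):
    """Sample size for a finite population (normal approximation):
    z = 2.576 gives a 99% confidence level; margin = 0.01 is 1%."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(is_valid([True, False, True]))  # True: recording kept
print(split_size(100_000))            # test-set size for 100k recordings
```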
Automatic Speech Recognition Experiments
The following experiment demonstrates the potential of the Huqariq corpus for multilingual speech research on low-resource languages. For this experiment, we used the corpus described in Table 4. We used the pre-trained model Wav2Vec2 (Baevski et al., 2020), which was trained with 600 hours of Spanish (footnote 2). It is important to mention that we use a model pre-trained on Spanish because the languages in our corpus contain many borrowings from Spanish, and this can improve the performance of the model. Moreover, we used the training setup from the public repository from which we obtained the pre-trained model. We trained our models on a GPU with 8 GB of memory for about 24 hours. In addition, we used the Adam optimizer, a learning rate of 4x10^-5, and chose the wav2letter++ decoder to obtain LM-biased results (Pratap et al., 2019). For the four languages, the modeling units are determined by the BPE algorithm, as in (Zhou et al., 2018). For the experiments, we add an additional projection layer and fit the ASR model with CTC loss, as in (Yi et al., 2020). During decoding, 5-gram models are used, each of which is trained on the corresponding training transcripts. Table 5 shows the results of the Wav2Vec2 model for each trained language, together with the results of previous work. The character error rate (CER) of the resulting model on the test set, defined as the character-level Levenshtein distance (Fiscus et al., 2006) between the true transcription and the decoding result, was used to measure the performance of the models for each language. It can be seen from Table 5 that the Wav2Vec2 model does not outperform the previous work for Southern Quechua; this leads to the assumption that the amount of corpus used to train the Wav2Vec2 model is not large enough for the decoder to generalize well. On the other hand, the results for the other languages cannot be compared, since in their case this is the first time that a decoder has been able to generalize well.
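The training objective and the evaluation metric can be made concrete with the following sketch, which shows one CTC fine-tuning step with the HuggingFace Wav2Vec2ForCTC model and a plain dynamic-programming CER. The checkpoint name is a plausible stand-in for the Spanish XLSR model of footnote 2, and batching, padding, and the full training loop are simplified away.

```python
# Sketch: one CTC fine-tuning step and a character error rate (CER)
# computed as a Levenshtein distance, as in our evaluation.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL = "facebook/wav2vec2-large-xlsr-53-spanish"  # assumed checkpoint
processor = Wav2Vec2Processor.from_pretrained(MODEL)
model = Wav2Vec2ForCTC.from_pretrained(MODEL)
optimizer = torch.optim.Adam(model.parameters(), lr=4e-5)  # as in our setup

def train_step(waveform, transcript):
    """One CTC fine-tuning step on a single (audio, text) pair."""
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

def cer(ref, hyp):
    """Character error rate: Levenshtein distance over reference length."""
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            prev, d[j] = d[j], min(d[j] + 1, d[j - 1] + 1, prev + (r != h))
    return d[-1] / max(len(ref), 1)
```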
Concluding remarks
We have presented Huqariq: a multilingual speech corpus of Peruvian native languages for the development of speech recognition tools. By using the crowdsourcing methodology and 2 mobile applications, we have collected the largest speech corpus of native Peruvian languages. In addition, we have made some modifications to the collection applications so that they are better adapted to the problems of under-resourced and endangered languages. We are going to release a version under the Creative Commons CC0 license so that the corpus can be in the public domain. On the other hand, we have conducted some experiments on automatic multilingual speech recognition with the Huqariq corpus using the Wav2Vec2 model. This is the first time that speech recognition experiments have been performed for Central Quechua, Aymara, and Shipibo-Konibo. Finally, we are working toward the goal that, by the end of 2022, Huqariq will cover 20 native languages of Peru and that many more native speakers of these languages will become volunteers.
Acknowledgments
We thank all the volunteer native speakers of these beautiful languages of Peru for their time, and especially the volunteer researchers who find new texts to translate and add to the application. Special thanks to Roger Gonzalo, Virginia Mamani, and Abel Anccalle for their work on Huqariq, and to all the members of the Siminchikkunarayku team. This work was supported by the Pontifical Catholic University of Peru in 2020-2021.
Table 2: Number of sentences for each language used in the construction of the corpus.
Table 3: Huqariq current data statistics. This data is from the July 2021 version.

Language | Volunteers | Total hours | Validated hours
Southern Quechua | 480 | 340 | 180
Central Quechua | 20 | 20 | 20
Aymara | 8 | 15 | 14
Shipibo | 2 | 7 | 6
Table 4: Statistics of the number of hours divided according to train, dev, and test for each language.
Table 5: Performance results of the ASR models for each language, using the CER metric.

Model | Southern Quechua | Central Quechua | Aymara | Shipibo
wav2letter++ | 31.48 | - | - | -
wav2letter++ + (DA) | 22.75 | - | - | -
Wav2Vec2 + CTC (subword) | 28.73 | 41.15 | 59.81 | 72.15
Wav2Vec2 + LM (decode) | 23.19 | 36.37 | 52.6 | 67.47
Footnote 2: https://huggingface.co/facebook/wav2vec2-large-xlsr
References

Adelaar, W. F. H. (2014). Endangered languages with millions of speakers: Focus on Quechua in Peru. JournaLIPP, 3, 1-12.
Ardila, R., Branson, M., Davis, K., Henretty, M., Kohler, M., Meyer, J., Morais, R., Saunders, L., Tyers, F. M., and Weber, G. (2019). Common Voice: A massively-multilingual speech corpus. arXiv preprint arXiv:1912.06670.
Baevski, A., Zhou, H., Mohamed, A., and Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. arXiv preprint arXiv:2006.11477.
Camacho, L. and Zevallos, R. (2020). Language technology into high schools for revitalization of endangered languages. In 2020 IEEE XXVII International Conference on Electronics, Electrical Engineering and Computing (INTERCON), pages 1-4. IEEE.
Cardenas, R., Zevallos, R., Baquerizo, R., and Camacho, L. (2018). Siminchik: A speech corpus for preservation of Southern Quechua. ISI-NLP 2, page 21.
Duan, M., Fasola, C., Rallabandi, S. K., Vega, R. M., Anastasopoulos, A., Levin, L., and Black, A. W. (2019). A resource for computational experiments on Mapudungun. arXiv preprint arXiv:1912.01772.
Fiscus, J. G., Ajot, J., Radde, N., Laprun, C., et al. (2006). Multiple dimension Levenshtein edit distance calculations for evaluating automatic speech recognition systems during simultaneous speech. In LREC, pages 803-808. Citeseer.
Heggarty, P., Valko, M. L., Huarcaya, S. M., Jerez, O., Pilares, G., Paz, E. P., Noli, E., and Usandizaga, H. (2005). Enigmas en el origen de las lenguas andinas: aplicando nuevas técnicas a las incógnitas por resolver. Revista Andina, 40:9-57.
INEI (2017). Instituto Nacional de Estadística e Informática. https://www.inei.gob.pe/media/MenuRecursivo/pub Accessed: 2022-07-12.
Landerman, P. N. (1992). Quechua dialects and their classification. PhD thesis.
Martinez, R. R. (2009). La velarización en shipibo. Escritura y pensamiento, 12(24):91-134.
MINEDU (2018). Documento nacional de lenguas originarias del Perú. https://centroderecursos.cultura.pe/sites/defau Accessed: 2022-07-12.
MINEDU (2021a). Aymara arutha chiqapa qillqañataki panka = Manual de escritura aimara. Ministerio de Educación.
MINEDU (2021b).
Pratap, V., Hannun, A., Xu, Q., Cai, J., Kahn, J., Synnaeve, G., Liptchinsky, V., and Collobert, R. (2019). Wav2letter++: A fast open-source speech recognition system. In ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6460-6464. IEEE.
Rios, A. and Mamani, R. C. (2014). Morphological disambiguation and text normalization for Southern Quechua varieties. COLING 2014, page 39.
Rios, A. (2015). A basic language technology toolkit for Quechua. Ph.D. thesis, University of Zurich.
Rogers, C. and Campbell, L. (2015). Endangered languages. https://oxfordre.com/linguistics/view/10.1093/acrefore/9780199384655.001.0001/acrefore-9780199384655-e-21 Accessed: 2022-07-12.
Torero, A. (1964). Los dialectos quechua. Separata de Anales Científicos de la Universidad Agraria, Vol. II.
Valenzuela, P. M. (2003). Transitivity in Shipibo-Konibo grammar. University of Oregon.
VoxForge (2019). VoxForge. http://www.voxforge.org/.
Yi, C., Wang, J., Cheng, N., Zhou, S., and Xu, B. (2020). Applying wav2vec2.0 to speech recognition in various low-resource languages. arXiv preprint arXiv:2012.12121.
Zariquiey, R., et al. (2006). Reinterpretación fonológica de los préstamos léxicos de base hispana en la lengua. Boletín de la Academia Peruana de la Lengua.
Zhou, S., Xu, S., and Xu, B. (2018). Multilingual end-to-end speech recognition with a single transformer on low-resource languages. arXiv preprint arXiv:1806.05059.
| [] |
[
"Controlling Text Edition by Changing Answers of Specific Questions",
"Controlling Text Edition by Changing Answers of Specific Questions"
] | [
"Lei Sha lei.sha@cs.ox.ac.uk \nDepartment of Computer Science\nUniversity of Oxford\nUnited Kingdom\n",
"Patrick Hohenecker patrick@serein.ai \nDepartment of Computer Science\nUniversity of Oxford\nUnited Kingdom\n",
"Thomas Lukasiewicz thomas.lukasiewicz@cs.ox.ac.uk \nDepartment of Computer Science\nUniversity of Oxford\nUnited Kingdom\n"
] | [
"Department of Computer Science\nUniversity of Oxford\nUnited Kingdom",
"Department of Computer Science\nUniversity of Oxford\nUnited Kingdom",
"Department of Computer Science\nUniversity of Oxford\nUnited Kingdom"
] | [] | In this paper, we introduce the new task of controllable text edition, in which we take as input a long text, a question, and a target answer, and the output is a minimally modified text, so that it fits the target answer. This task is very important in many situations, such as changing some conditions, consequences, or properties in a legal document, or changing some key information of an event in a news text. This is very challenging, as it is hard to obtain a parallel corpus for training, and we need to first find all text positions that should be changed and then decide how to change them. We constructed the new dataset WIKIBIOCTE for this task based on the existing dataset WIKIBIO (originally created for table-to-text generation). We use WIKIBIOCTE for training, and manually labeled a test set for testing. We also propose novel evaluation metrics and a novel method for solving the new task. Experimental results on the test set show that our proposed method is a good fit for this novel NLP task. | 10.18653/v1/2021.findings-acl.110 | [
"https://arxiv.org/pdf/2105.11018v1.pdf"
] | 235,166,857 | 2105.11018 | 72024eef4198f88130ba3b8823bdbda9168cc493 |
Controlling Text Edition by Changing Answers of Specific Questions
23 May 2021
Lei Sha lei.sha@cs.ox.ac.uk
Department of Computer Science
University of Oxford
United Kingdom
Patrick Hohenecker patrick@serein.ai
Department of Computer Science
University of Oxford
United Kingdom
Thomas Lukasiewicz thomas.lukasiewicz@cs.ox.ac.uk
Department of Computer Science
University of Oxford
United Kingdom
Controlling Text Edition by Changing Answers of Specific Questions
23 May 2021
In this paper, we introduce the new task of controllable text edition, in which we take as input a long text, a question, and a target answer, and the output is a minimally modified text, so that it fits the target answer. This task is very important in many situations, such as changing some conditions, consequences, or properties in a legal document, or changing some key information of an event in a news text. This is very challenging, as it is hard to obtain a parallel corpus for training, and we need to first find all text positions that should be changed and then decide how to change them. We constructed the new dataset WIKIBIOCTE for this task based on the existing dataset WIKIBIO (originally created for table-to-text generation). We use WIKIBIOCTE for training, and manually labeled a test set for testing. We also propose novel evaluation metrics and a novel method for solving the new task. Experimental results on the test set show that our proposed method is a good fit for this novel NLP task.
Introduction
In many cases, we need to change some specific content in a document. For example, in the legal domain, the items and conditions in contract documents often need to be revised many times. We would like to use artificial intelligence to conduct this process for human editors. A major difficulty of this process is that the machine learning model should decide where to edit and how to edit.
Usually, the place of specific content ("where to edit") can be located by a question, and the content updating ("how to edit") can be determined by the answer of the question. Therefore, in this paper, we propose the new task of controllable text edition (CTE). In this task, we would like to achieve the following goal: adjust some content of a document D, to make the answer A of a documentrelated question Q changed to a new answer A ′ . The question Q to D has an answer A (in red; its rationale in D also in red). If we would like to change the answer to the new answer A ′ (in blue), then we have to change some content in D, yielding the modified text D ′ (with the new content in blue) in the lower box.
For example, in Fig. 1, when we change the red part of the original text to the blue part, the answer of the question changes to the new answer as a consequence.
There are three main challenges in this task:
(1) The machine learning model should decide the positions that need to be changed in the document. Usually, finding the answer positions for a given document-related question is similar to extractive machine reading comprehension tasks (Zeng et al., 2020), which require a full understanding of both the question and the document. Nearly all extractive machine reading tasks, such as SQuAD (Rajpurkar et al., 2016, 2018) and CNN/Daily Mail (Hermann et al., 2015), focus on extracting one span from the document as the answer. Differently from extractive machine reading, in our task, the answer A is not necessarily a substring of the document, and there may exist multiple positions that have to be changed. Therefore, our task is much more challenging than extractive machine reading.
(2) The model should generate a new document that supports the new answer A′ for question Q.
Note that this cannot be solved by directly replacing the original words in the edit positions with the new answer A′, because the new answer may not fit perfectly with the document, which would make the document disfluent.
(3) There are nearly no parallel data for model training, because obtaining a large annotation set for this task is very hard. However, the model may be trained on lists of triples ⟨Q, D, A⟩ that can be obtained from datasets for machine reading and/or structured data extraction (as described below).
In this paper, we introduce and define the task of controllable text edition (CTE). We propose to transform the WIKIBIO dataset (Lebret et al., 2016) into a list of triples ⟨Q, D, A⟩ for training. WIKIBIO was originally designed for table-to-text generation, in which each case is composed of a Wikipedia passage D and an infobox (which is a list of ⟨field, content⟩ pairs). In detail, we take each "field" in the infobox as the question Q and each "content" in the infobox as the answer A. Therefore, for each ⟨field, content⟩ pair, we can create a ⟨Q, D, A⟩ triple. After some pruning, we finally selected 26 different Q's and 141k ⟨Q, D, A⟩ triples for the training set, as well as 17.7k triples for the development set. We also annotated a small test set of about 1k examples for evaluation, in the form of ⟨Q, D, A, A′, D′⟩ (A′ represents the new answer, and D′ represents the ground-truth modified text). The resulting new dataset is called WIKIBIOCTE.
In addition, we propose a novel method, called Select-Mask-Generate (SMG), to solve the proposed CTE task. In this method, we use a selector-predictor architecture to select the answer-related tokens, and we then use complementary masks to split the text into an answer-related part and an answer-unrelated part. Then, we reconstruct the original text based on the answer-unrelated part and the original answer. The reconstruction process is a partial generation method, which only generates the masked-out part, without any length limit. In our experiments, the SMG model has achieved state-of-the-art performance compared to baseline models in the generation of modified documents. The code and the test set WIKIBIOCTE are available online at https://sites.google.com/view/control-text-edition/home.
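To make the complementary-mask idea concrete, the following sketch splits a sequence of token embeddings into an answer-related and an answer-unrelated stream, given a binary selection vector. The real selector is a learned selector-predictor network; the tensor shapes and names here are illustrative assumptions.

```python
# Sketch of the complementary-mask split used in SMG (illustrative only;
# the actual selection is produced by a learned selector-predictor).
import torch

def complementary_split(token_embeddings, selection):
    """token_embeddings: (seq_len, dim); selection: (seq_len,) in {0, 1},
    where 1 marks answer-related tokens chosen by the selector."""
    mask = selection.unsqueeze(-1).float()
    answer_related = token_embeddings * mask          # feeds the answer side
    answer_unrelated = token_embeddings * (1 - mask)  # kept for reconstruction
    return answer_related, answer_unrelated

emb = torch.randn(6, 8)
sel = torch.tensor([0, 0, 1, 1, 0, 0])  # e.g., tokens 2-3 express the answer
related, unrelated = complementary_split(emb, sel)
```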
Related Work
The proposed task of controllable text edition is related to the following existing tasks.
Attribute Disentanglement
Attribute disentanglement tends to control the attributes of a given text or image (such as sentiment, tense, syntax, or face pose) by disentangling different attributes into different subspaces. When transferring attributes, the content of the text/image needs to be preserved. Usually, disentanglement works can be divided into implicit and explicit disentanglement. Implicit disentanglement (Higgins et al., 2017; Chen et al., 2018; Moyer et al., 2018; Mathieu et al., 2018; Kim and Mnih, 2018) separates the latent space into several components in a purely unsupervised way, expecting that each component corresponds to an attribute. However, the number of components cannot be decided in advance, and neither can the correspondence between attributes and components. Also, the training process may prune some of the components (Stühmer et al., 2019), which hurts the interpretability of the latent space. Explicit disentanglement (Chen et al., 2016; John et al., 2019; Romanov et al., 2019) tends to separate the latent space into more interpretable components with an explicit correspondence to specific attributes. Hence, it usually requires gold labels of attributes in the training set.
In comparison, our task tends to control the content of the text by tuning the answers to text-related questions. Attribute disentanglement is difficult to apply to our task, because the modification of the content should be decided by both the question and the answer simultaneously, which is a much sparser signal than attributes.
Lexically Constrained Decoding
Lexically constrained decoding (Hokamp and Liu, 2017; Miao et al., 2019; Sha, 2020) directly controls the output of the generation model by adding constraints. Usually, the constraints include hard constraints (requiring the generated sequence to contain some keywords) and soft constraints (requiring the generated sentence to have the same meaning as a given text).
The basic methods of lexically constrained decoding can be divided into enhanced beam search (Hokamp and Liu, 2017; Post and Vilar, 2018) and stochastic search (Miao et al., 2019; Liu et al., 2020; Sha, 2020). Enhanced beam search (Hokamp and Liu, 2017; Hasler et al., 2018) changes some strategies in beam search to make the process of searching for a constraint-satisfying sentence easier. However, for some tasks with an extremely large search space, beam-search-based methods may be computationally too costly or even fail (Miao et al., 2019). Stochastic search tends to edit an initial sentence step-by-step, where the editing position and action can be decided by Metropolis-Hastings sampling (Miao et al., 2019), a discrete scoring function (Liu et al., 2020), or gradient-based methods (Sha, 2020). However, lexically constrained decoding is hard to apply to our task, because adjusting the text to fit a new answer to a text-related question is much more complicated than simply satisfying a hard or soft constraint.
Text Editing and Infilling
In some tasks, to simplify the text generation problem, researchers tend to edit existing text or prototypes to obtain a refined text that satisfies some specific requirements. Examples are the generation of summaries by template-based rewriting (Cao et al., 2018) and the generation of text or a response by editing a prototype sentence (Pandey et al., 2018; Wu et al., 2019). In (Yin et al., 2018), distributed representations of edit actions are learned and applied to editing Wikipedia records (Faruqui et al., 2018) and GitHub code (Yin et al., 2018). Panthaplackel et al. (2020) further integrate a copy mechanism into text editing. Text infilling (Fedus et al., 2018) means using machine learning models to fill the blanks of a cloze test. Zhu et al. (2019) propose a more general text infilling task, which allows an arbitrary number of tokens (instead of a single token) in each blank.
In the above text editing tasks, the goal of editing is consistent within each dataset: a better summary, a better response, or a more informative sentence. Differently from them, our proposed task requires the editing to be guided by the question and its new document-related answer, so each instance has a different editing goal. Thus, our task requires first deciding where to edit according to the given question, and then deciding how to edit, which makes it more complicated than all the above text editing tasks.
Dataset
We now formally define the task of controllable text edition and propose a dataset for this task.
Task Definition
The task of controllable text edition (CTE) is defined as follows. The input is a triple (D, Q, A′), where D is a document, Q is a document-related question, and A′ is an expected answer for Q on D. The output is D′, a minimal modification of D such that the answer for Q on D′ is now A′. Note that the original answer of Q on D is A, but A is not an input to the task, and usually A ≠ A′.
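To make the interface concrete, the following minimal Python sketch shows the input/output structure of a CTE instance; the class and function names are illustrative, not part of any released code.

    from dataclasses import dataclass

    @dataclass
    class CTEExample:
        """One CTE instance; the names are illustrative, not from released code."""
        document: str    # D: the original document
        question: str    # Q: a document-related question
        new_answer: str  # A': the expected answer for Q on the edited document

    def edit(example: CTEExample) -> str:
        """Return D', a minimal modification of D whose answer to Q is A'."""
        raise NotImplementedError  # to be filled by a concrete model such as SMG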
WIKIBIO as Controllable Text Editing Dataset
We propose to modify the WIKIBIO dataset (Lebret et al., 2016) to make it fit our task. WIKIBIO was originally designed for table-to-text generation (Lebret et al., 2016; Liu et al., 2018; Sha et al., 2018), which generates a celebrity's biography according to his/her basic information. Each example in the dataset is composed of a Wikipedia infobox and a text (the first paragraph of the Wiki page) describing the infobox, as shown in Table 1.
Conversely, the WIKIBIO dataset can be taken as a question-answering dataset: each field can be taken as a question, and each content can be taken as an answer. For example, in Table 1, the field "Occupation" can be interpreted as the question "What is the person's occupation?", and the corresponding content "Virology" is the answer.
Therefore, we take the text in WIKIBIO as the document (D) in our task, the field as the question (Q), and the content as the answer (A). Due to the huge cost of data annotation, the model needs to be trained without the changed answer (A ′ ) and the referenced document (D ′ ).
For the creation of the training and development sets, we count the frequency of fields and select the fields that occurred more than 5k times in WIKIBIO's training set as candidate questions (Q's). Then, we filter out the Q's that do not have corresponding answers in D (see footnote 4). We thereby obtain a list of 26 different Q's, as shown in Table 2. After filtering the Q's according to Table 2, we get 141k (Q, D, A) triples for the training set and 17.7k triples for the development set.
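The field selection described above can be sketched in a few lines of Python; the data layout (an iterable of (fields, text) pairs) is an assumption of this illustration, and the subsequent check that each answer is recoverable from D is omitted.

    from collections import Counter

    def select_candidate_questions(examples, min_count=5000):
        """Keep infobox fields frequent enough to serve as questions (Q's).

        `examples` is assumed to be an iterable of (fields, text) pairs, where
        `fields` maps field names to contents, mirroring WIKIBIO infoboxes.
        """
        freq = Counter(field for fields, _ in examples for field in fields)
        return {f for f, c in freq.items() if c > min_count}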
Then, we manually labeled a small test set in which each example contains (D, Q, A) as well as the changed answer (A ′ ) and the referenced document (D ′ ). The annotation process can be illustrated as follows:
1. We randomly sampled an equal number of examples for all the fields in Table 2. For each field, we sample ⌈1000/#F⌉ cases (#F is the number of selected fields), to make sure that the size of the test set is around 1k.
2. We assigned a changed answer (A′) to each example by randomly picking a phrase similar to the original answer (A). The similar phrase may come from a different example, but it shares the same Q as the original answer (A).
3. We asked human data graders to provide a modified text (D′) for each example according to the original text (D), the question (Q), and the changed answer (A′). Two trained linguists annotated the 1k test set.
Note that other datasets could potentially be adapted into controllable text editing datasets, such as SQuAD (Rajpurkar et al., 2016), RACE (Lai et al., 2017), and MCTest (Richardson et al., 2013). We did not choose them for the following reasons:
(1) For extractive machine reading tasks like SQuAD (Rajpurkar et al., 2016), the answers are simple substrings of the document, so that in most cases, the text modification in our task can be solved by a simple string replacement, which violates the goal of our task.
(2) Multiple-choice machine reading tasks like RACE (Lai et al., 2017) usually require full and deep reasoning over the whole document to get the answer, which would make the text modification in our task unsolvable by partial modification. Differently from them, most contents (A) in WIKIBIO cannot be directly extracted as substrings from the document (D). Besides, the contents usually have some related information that should be modified at the same time. For example, if somebody is a pianist, then he/she may have received a piano award instead of a guitar award. Therefore, WIKIBIO satisfies the goal of our proposed task: making minimal changes to the original document to make it fit the changed answer (A′).
Select-Mask-Generate (SMG) Method for Controllable Text Edition
We introduce the training and inference procedures of our proposed method. In the training phase, the model is trained to recognize answer-related (A-related) tokens and to fill new-answer-related (A′-related) tokens into the blanks left after deleting the answer-related tokens.
Training Phase
In the training phase, we only have Q, D, and A. So, we teach the model to (1) identify answer-related information, and (2) reconstruct D from A and (D − A_p) (the original text with all answer-related information masked out, where A_p denotes the predicted answer-related tokens). The model architecture is shown in Fig. 2. Inspired by InfoCal (Sha et al., 2021), we use a Selector-Predictor architecture to identify the least-but-enough answer-related words in the original document (D). The main architecture of the Selector network is a BiLSTM model, which samples a binary-valued mask (M) for each input token (called the answer mask), denoting whether to select this token as an answer-related token (1) or not (0); the sampling is differentiable (see footnote 5). Given an input document D = {x_1, . . . , x_n} and a question Q, the Selector samples an answer-related mask M = {m_1, . . . , m_n} as follows:
M ∼ Sel(M | D, Q),    (1)
where "Sel" represents the selector network. We call the complement of the answer mask, M̄ = 1 − M, the context mask, and we refer to the token sequence obtained after masking out the answer-related tokens as the context template.
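A minimal PyTorch sketch of such a selector is given below; the question is assumed to be pre-encoded into a single vector q of the same dimension as the token states, and dim is assumed to be even.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Selector(nn.Module):
        """BiLSTM selector that samples a binary answer mask M (Eq. 1)."""

        def __init__(self, dim):
            super().__init__()
            self.rnn = nn.LSTM(dim, dim // 2, bidirectional=True, batch_first=True)
            self.score = nn.Linear(2 * dim, 2)  # logits for {not selected, selected}

        def forward(self, x, q, tau=1.0):
            # x: (B, n, dim) token embeddings; q: (B, dim) question vector.
            h, _ = self.rnn(x)  # (B, n, dim)
            logits = self.score(torch.cat([h, q.unsqueeze(1).expand_as(h)], dim=-1))
            # Differentiable discrete sampling via Gumbel-Softmax (Jang et al., 2016).
            m = F.gumbel_softmax(logits, tau=tau, hard=True)[..., 1]  # (B, n), in {0, 1}
            return m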
Answer Reconstruction
We require that the answer-related information contains everything about the answer A, so we use an answer decoder to reconstruct an answer sequence Ã. Then, we calculate the reconstruction loss L_A as follows:
p_a(Ã | M, D) = Dec_A( (1 / Σ_j m_j) Σ_i m_i x_i ),    (2)
L_A = E_{M ∼ Sel(M | D, Q)} [ p_a(A | M, D) ],    (3)
where Dec_A is the answer decoder, and p_a is the sentence distribution generated by Dec_A. Note that the input to Dec_A is the average vector of the selected token vectors: the answer-related tokens are usually very few, so it is not necessary to use heavier encoders such as LSTMs (Hochreiter and Schmidhuber, 1997) or Transformers (Vaswani et al., 2017).
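The masked average in Eq. 2 is a one-liner in PyTorch; the tensor shapes below are assumptions of this illustration.

    import torch

    def answer_representation(x, m, eps=1e-8):
        """Average of the selected token vectors, (1/Σ_j m_j) Σ_i m_i x_i (Eq. 2).

        x: (B, n, dim) token embeddings; m: (B, n) binary answer mask.
        """
        summed = (m.unsqueeze(-1) * x).sum(dim=1)
        return summed / (m.sum(dim=1, keepdim=True) + eps)

This vector is what the answer decoder Dec_A conditions on when reconstructing Ã.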
Figure 2: The architecture of our SMG model. In the testing phase, the input to the answer encoder is changed from the gold answer A to the new answer A′, and the output of the context decoder then becomes the modified text D̂′.

Document Reconstruction

On the other hand, D should be reconstructed from the context template and the gold answer A. We use an LSTM encoder Enc_D to encode the context tokens, as shown in Eqs. 4 and 5:
h′_1, . . . , h′_n = Enc_D([m̄_1 x_1, . . . , m̄_n x_n]),    (4)
H_m = Maxpooling(h′_1, . . . , h′_n),    (5)
where h′_1, . . . , h′_n are the encoding vectors corresponding to each input token. We then take the averaged word vector of the input gold answer A, denoted V_A, as an external condition of the decoder.
Differently from conventional decoders, our decoder only generates tokens to fill in the blanks of the context template, as shown in Fig. 3. This brings two changes in the training phase: (1) we only need to calculate the loss caused by the tokens filled into the blanks, and (2) the model needs to learn an external end-of-answer (EOA) token S_eoa for each token filled into the blanks. The EOA token is very important because it indicates when to stop filling the current blank.
Learning to generate the words. At each time step t of the decoder, we use an LSTM (Hochreiter and Schmidhuber, 1997) unit to predict the next word y_t and the EOA token S_eoa as follows:
h_t = F_LSTM([y_{t−1}, V_A], h_{t−1}),    (6)
[h_w; h_eoa] = σ(F_m(h_t)),    (7)
s_t^lstm(w) = F_w(h_w),    (8)
p(S_eoa(t)) = Softmax(F_eoa(h_eoa)),    (9)
where h_w and h_eoa are hidden layers (the time step index t is omitted), F_LSTM is an LSTM cell, F_m, F_w, and F_eoa are linear layers, and s_t^lstm(w) is a scoring function that suggests the next word to generate. p(S_eoa(t)) is the probability distribution of the EOA token.
Note that in the decoder, we use the copy mechanism (Gu et al., 2016), which encourages the decoder to generate words by directly copying from the input context sequence D and answer sequence A. The copy mechanism computes a copy score s_t^copy(w) for each word in D and A. Then, the generation probability of each word is computed as:
s_t(w) = s_t^lstm(w) + s_t^copy(w),    (10)
p_t(w) = Softmax(s_t(w)).    (11)
Thus, the document D's reconstruction loss is as follows:
L_recon = −E_M [ Σ_t m_t log p_t(y_t | M̄, A) ],    (12)
where M ∼ Sel(M | D, Q). The mask m_t is applied at each time step because we only need the losses of the blank-filling tokens.
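Eqs. 6-12 can be sketched as a single PyTorch decoding step plus a masked loss; copy_scores stands in for the copy mechanism's s_t^copy(w), assumed to be already scattered over the vocabulary, and all layer sizes are illustrative.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class PartialDecoderStep(nn.Module):
        """One decoding step predicting the next word and the EOA flag."""

        def __init__(self, dim, vocab):
            super().__init__()
            self.cell = nn.LSTMCell(2 * dim, dim)  # input: [y_{t-1}; V_A]
            self.f_m = nn.Linear(dim, 2 * dim)
            self.f_w = nn.Linear(dim, vocab)
            self.f_eoa = nn.Linear(dim, 2)

        def forward(self, y_prev, v_a, state, copy_scores):
            h, c = self.cell(torch.cat([y_prev, v_a], dim=-1), state)  # Eq. 6
            h_w, h_eoa = torch.sigmoid(self.f_m(h)).chunk(2, dim=-1)   # Eq. 7
            s_t = self.f_w(h_w) + copy_scores                          # Eqs. 8, 10
            p_t = F.softmax(s_t, dim=-1)                               # Eq. 11
            p_eoa = F.softmax(self.f_eoa(h_eoa), dim=-1)               # Eq. 9
            return p_t, p_eoa, (h, c)

    def recon_loss(log_p, targets, m):
        """Masked reconstruction loss (Eq. 12): only blank-filling steps count.

        log_p: (B, T, V) log-probabilities; targets: (B, T); m: (B, T) answer mask.
        """
        nll = F.nll_loss(log_p.transpose(1, 2), targets, reduction="none")  # (B, T)
        return (m * nll).sum() / m.sum().clamp(min=1.0)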
Learning the end-of-answer (EOA) tags. We have an EOA tag for each blank-filling token. The EOA tag is 1 if the corresponding token is the last token in the blank; for the other blank-filling tokens, the EOA tag is 0. The gold EOA tag at each time step, g_t^eoa, can be computed from the difference between the previous answer mask m_{t−1} and the current answer mask m_t. There are three possible values (−1, 0, and 1): g_t^eoa = 0 when the difference is −1 or 0, and g_t^eoa = 1 when the difference is 1. Then, we have the cross-entropy loss of Eq. 13:
g_t^eoa = max(m_{t−1} − m_t, 0),
L_eoa = −E_M [ Σ_t ( g_t^eoa m_t log p(S_eoa(t) = 1) + (1 − g_t^eoa) m_t log p(S_eoa(t) = 0) ) ].    (13)
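The gold EOA tags of Eq. 13 can be computed directly from the answer mask, e.g. as follows (m_0 = 0 for the first step is an assumption of this sketch):

    import torch

    def gold_eoa_tags(m):
        """Gold EOA tags of Eq. 13: g_t = max(m_{t-1} - m_t, 0).

        m: (B, T) binary answer mask.
        """
        m_prev = torch.cat([torch.zeros_like(m[:, :1]), m[:, :-1]], dim=1)
        return torch.clamp(m_prev - m, min=0)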
Therefore, the final optimization objective is shown in Eq. 14:
L = L_A + λ_r L_recon + λ_eoa L_eoa,    (14)
where λ_r and λ_eoa are hyperparameters.
Inference Phase
In the inference phase, we take the new answer A′ as the input to the context decoder instead of the gold answer A. The output of the context decoder then becomes the modified text D̂′. We choose an autoregressive partial generation method for inference. Our partial generation method can fill the blanks with phrases of any length and can directly replace any decoder, which cannot be done by existing alternative methods. For example, the method using global context (Donahue et al., 2020) is a pretrained language model in itself. In our architecture, however, the masks are decided by the selector module, so even the number and length of the blanks cannot be decided before training, and the ground-truth target sequence for fine-tuning a pretrained language model would also be hard to decide. Therefore, the partial generation method is the best choice for our task.
Partial Generation
Since we already have a context template when generating the modified document, we only need to generate tokens to fill the blanks in the context template. The partial decoding process is shown in Fig. 3. We use an indicator state = 0 to denote the reading mode (reading the context template words), and state = 1 to denote the writing mode (generating the blank-filling words). The basic generating process is as follows: while the model is reading the context template, if it meets a masked token, the mode turns to writing mode and it starts to generate words to fill the current blank. When the EOA tag turns to 1, or the decoding length l_g surpasses a limit l_max, the mode turns back to reading mode. Note that this decoding process can generate an arbitrary number of words for each blank, and we can fill all blanks in a context template in a single decoding pass, which is much more efficient than MaskGAN (Fedus et al., 2018) and text infilling (Zhu et al., 2019). The detailed algorithm is shown in Algorithm 1.
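The following Python sketch renders the reading/writing state machine of Algorithm 1 in simplified form; model.step(x) is a hypothetical interface returning the next generated token and its EOA flag, and blanks in the template are assumed to be marked by runs of '[M]' tokens.

    def partial_generate(model, template, l_max=10, mask_tok="[M]"):
        """Fill the blanks of a context template in a single left-to-right pass."""
        out, state, l_g = [], 0, 0           # state 0 = reading, 1 = writing
        i, x_in = 0, template[0]
        while i < len(template):
            y, eoa = model.step(x_in)
            if state == 0:                   # reading mode: copy template tokens
                if template[i] == mask_tok:  # reached a blank: switch to writing
                    state, l_g = 1, 0
                    continue
                out.append(template[i])
                i += 1
                if i < len(template):
                    x_in = template[i]
            else:                            # writing mode: generate blank fillers
                out.append(y)
                l_g += 1
                x_in = y
                if eoa or l_g >= l_max:      # blank done: skip its mask tokens
                    while i < len(template) and template[i] == mask_tok:
                        i += 1
                    state = 0
                    if i < len(template):
                        x_in = template[i]
        return out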
Experiments
In this section, we propose specific evaluation metrics for our controllable text edition task and then compare and analyze the performance of our proposed method (SMG) on the WIKIBIO-CTE dataset.
Evaluation Metrics
For the evaluation of the modified document D̂′, we use the following automatic evaluation metrics:
(1) BLEU (D̂′ vs. D′): This metric measures the BLEU score (Papineni et al., 2002) between the generated modified document D̂′ and the reference document D′.
(2) iBLEU (Sun and Zhou, 2012): This metric has previously been widely used to evaluate paraphrase generation (Liu et al., 2020; Sha, 2020). iBLEU is defined as iBLEU = BLEU(D̂′, D′) − α·BLEU(D̂′, D) (see footnote 6), which penalizes the similarity between the modified document D̂′ and the original document D. The goal of this metric is to measure the extent to which the model directly copies words from the original document D without taking any content from A′.
(3) diff-BLEU ratio: diff-BLEU is a BLEU score computed between D̂′ and a difference sequence between the gold modified document D′ and the original document D. The difference sequence is obtained by masking out the longest common sequence of D and D′ from D′. Since the maximum value of this BLEU score is the BLEU value between the gold modified document D′ and the difference sequence, we use their quotient as the diff-BLEU ratio score, as shown in Eq. 15:
diff-BLEU ratio = BLEU(D̂′, D′ − D) / BLEU(D′, D′ − D).    (15)
(4) Perplexity: This metric measures the fluency of the generated content-modified document D̂′. We apply a third-party language model (a Kneser-Ney language model (Kneser and Ney, 1995)) as the perplexity evaluator. We train the language model on the whole training set of WIKIBIO and use the trained model to evaluate fluency, where a lower perplexity value is better. In addition, we use human evaluation to assess two aspects of the content-modified document D̂′. Correctness is an accuracy score from 0.0% to 100.0%, which evaluates whether D̂′ has successfully turned the answer of question Q from A to A′. Fluency is a score from 0.0 to 5.0, which evaluates whether D̂′ is fluent from a human perspective. The scoring details are given in Appendix A.
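For reference, iBLEU and the diff-BLEU ratio can be computed as follows; the sketch assumes the sacrebleu package (any BLEU implementation would do) and plain-string inputs.

    from sacrebleu import sentence_bleu  # any BLEU implementation would do

    def ibleu(d_hat, d_ref, d_orig, alpha=0.9):
        """iBLEU (Sun and Zhou, 2012): reward similarity to the reference D'
        while penalizing copying from the original D."""
        return (sentence_bleu(d_hat, [d_ref]).score
                - alpha * sentence_bleu(d_hat, [d_orig]).score)

    def diff_bleu_ratio(d_hat, d_ref, diff_seq):
        """Eq. 15: BLEU of the prediction against the difference sequence,
        normalized by its maximum attainable value BLEU(D', D' - D)."""
        return (sentence_bleu(d_hat, [diff_seq]).score
                / max(sentence_bleu(d_ref, [diff_seq]).score, 1e-8))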
Also, in our method, the selection of answer-related words is very important, so we have two evaluations for the selection part:
(1) BLEU (predicted template) is the BLEU score between the predicted template (the token sequence after we masked out the answer-related words from the text D) and the gold template (the common sequence of D and D ′ ).
(2) Answer F1 measures the bag-of-words (BOW) F1 value of the generated answer à compared to the gold answer A. A high score on this metric is difficult to achieve, because it requires both selecting the correct answer-related tokens and generating the correct words for the answer A.
Overall Performance
We compare our method (SMG) with a baseline method (Seq2Seq). The difference between Seq2Seq and SMG is that the decoder of Seq2Seq is a conventional decoder that generates the entire modified document D̂′, ignoring the context template.
The overall performance is shown in Table 3.
In Table 3, we see that our SMG method outperforms the Seq2Seq baseline on nearly all evaluation metrics, regardless of whether the context template applied in the decoding phase is gold or predicted. In particular, on the two most important metrics for controllable text edition, iBLEU and diff-BLEU ratio, our model achieves significantly higher scores than the competing methods. These results show that our method is effective for controllable text edition.
The human evaluation results are also listed in Table 3. The inter-rater agreements are all acceptable (> 0.85) according to Krippendorff's (2004) criterion. According to the human evaluation, when we use the gold template for partial generation, both the correctness and the fluency of the partially generated text D̂′ are better than when using the predicted template, which is consistent with our intuition. Note that the perplexity score and the fluency score of Seq2Seq are the best among the three methods; this is because in the partially generated text, the end position of each blank sometimes does not fit very well with the next word, although we have trained an EOA tag. Table 4 shows the experiments evaluating the selection of answer-related words. We can see that our SMG model has a higher BLEU (predicted template) score than the Seq2Seq model. This shows that partially training the blank-filling tokens helps the selection of answer-related tokens. Also, our SMG model achieves a higher answer F1 score (0.68) than the competing methods.
Case Study
We list some examples of the modified document D̂′ generated by the three competing methods (Seq2Seq, SMG(g), and SMG(p)) in Table 5. We can see that although the answer-related words are already masked out, Seq2Seq still tends to generate the words of the original answer A and to mix up the words of A and the changed answer A′ (e.g., in the second example, Seq2Seq mixed "gymnastic" and "basketball" together). Also, Seq2Seq cannot precisely change everything that should be modified; for example, in the second example, Seq2Seq failed to change "gymnastics coach" to "basketball coach". For the SMG methods, when using the gold template for partial generation, the model is able to generate the correct words to change Q's answer to A′. Although there is still some risk that some answer-related tokens are left unchanged due to errors in the predicted template, the context tokens in the predicted template are guaranteed to be preserved. Therefore, our model with the predicted template is better suited for NLP products than Seq2Seq.
Conclusion
In this paper, we proposed a novel task, the goal of which is to modify some content of a given text so that the answer of a text-related question changes to a given new answer. This task is very useful in many real-world applications, like contract editing. We constructed and released a test set for evaluation. We also proposed a novel model, SMG, to solve this task. In SMG, we first use a selector-predictor structure to select the answer-related tokens in the input document, and then we use a novel partial generation technique to generate the modified document without changing answer-unrelated tokens in the original document. The experiments proved the effectiveness of our model.

Table 5: The example generated cases of competing methods. The underlined tokens are gold answer-related tokens. The bold tokens in the "Input" row are predicted answer-related tokens. In the other three rows, the bold tokens are the modified tokens that are related to the given new answer A′.

Example 1 (Q: position; A′: halfback quarterback)
Input D: george evans -lrb- born 13 december 1994 -rrb- is an english footballer who plays as a midfielder or centre-back for manchester city .
Seq2Seq D̂′: george evans -lrb- born 13 december 1994 -rrb- is an english footballer who plays as a midfielder or centre-back for manchester city . he was a quarterback halfback in the manchester .
SMG(g) D̂′: george evans -lrb- born 13 december 1994 -rrb- is an english footballer who plays as a halfback and quarterback for manchester city .
SMG(p) D̂′: george evans -lrb- born 13 december 1994 -rrb- is an halfback footballer who plays as a midfielder or quarterback for manchester city .

Example 2 (Q: discipline; A′: basketball player)
Input D: andrei UNK -lrb- born 1975 in satu mare , romania -rrb- is a retired romanian aerobic gymnast . he had a successful career winning four world championships medals -lrb- two gold , one silver , and one bronze -rrb- after his retirement in 1997 he went with to germany where he works as a gymnastics coach at the UNK gymnastics club in hanover .
Seq2Seq D̂′: andrei UNK -lrb- born 1975 in satu mare romania is a retired romanian aerobic gymnast basketball he had a successful career winning four world championships medals -lrb- two gold , one silver , and one bronze , after his retirement in 1997 he went with to germany where he works as a gymnastics coach at the UNK basketball club
SMG(g) D̂′: andrei UNK -lrb- born 1975 in satu mare , romania -rrb- is a retired romanian basketball player . he had a successful career winning four world championships medals -lrb- two gold , one silver , and one bronze -rrb- after his retirement in 1997 he went with to germany where he works as a basketball coach at the UNK basketball club in hanover .
SMG(p) D̂′: andrei UNK -lrb- born 1975 in satu mare , romania -rrb- is a retired romanian basketball player . he had a successful career winning four world championships medals -lrb- two gold , one silver , and one bronze -rrb- after his retirement in 1997 he went with to germany where he works as a gymnastics coach at the UNK gymnastic club in hanover .

Example 3 (Q: nationality; A′: philippines filipino)
Input D: andrew justin stewart coats -lrb- born 1 february 1958 -rrb- is an australian -british academic cardiologist who has particular interest in the management of heart failure . his research turned established teaching on its head and promoted exercise training -lrb- rather than bed rest -rrb- as a treatment for chronic heart failure . he was instrumental in describing the " muscle hypothesis " of heart failure .
Seq2Seq D̂′: andrew justin stewart coats -lrb- born 1 february 1958 is an filipino -british academic cardiologist who has particular interest in the management of heart failure . his research turned established teaching on its head and promoted exercise training -lrb- rather than bed rest -rrb- as a treatment for chronic heart failure . he was instrumental in describing the " muscle hypothesis " of philippines
SMG(g) D̂′: andrew justin stewart coats -lrb- born 1 february 1958 -rrb- is an filipino academic cardiologist who has particular interest in the management of heart failure . his research turned established teaching on its head and promoted exercise training -lrb- rather than bed rest -rrb- as a treatment for chronic heart failure . he was instrumental in describing the " muscle hypothesis " of heart failure .
SMG(p) D̂′: andrew justin stewart coats -lrb- born 1 february philippines academic cardiologist who has particular interest in the management of heart failure . his research turned established teaching on its head and promoted exercise training -lrb- rather than bed rest -rrb- as a treatment for chronic heart failure . he was instrumental in describing the " muscle hypothesis " of heart failure .
Figure 1: An example of the controllable text edition task. The original text D (upper box) reads: "Lang Ping (born 10 December 1960) is a former Chinese volleyball player and the current head coach of China women's national volleyball team. She was the former head coach of the United States women's national volleyball team, herself being the MVP of women volleyball in 1984 Olympics." Given the question Q "What is Lang Ping's current job?" and the new answer A′ "the director of the National Sports Administration", the modified text (lower box) reads: "Lang Ping (born 10 December 1960) is a former Chinese volleyball player and the current director of the National Sports Administration. She was the former head coach of the United States women's national volleyball team, herself being the MVP of women volleyball in 1984 Olympics."
Figure 3: The partial decoding process. This process requires two tags (state and the EOA tag) to indicate when to start generation and when to stop generation.

Algorithm 1: The decoding process.
    Input: context template C
    Output: generated sequence D̂′
    Data: read-write state S, end-of-answer label S_eoa, context template index I_c, local generation length l_g, current input token x_in
    S ← 0, I_c ← 0, l_g ← 0, D̂′ ← []
    Set the first input token x_in ← C[0]
    for each time step t ← 1, 2, . . . do
        Calculate ỹ_t by Eq. 11; calculate S_eoa by Eq. 9
        if S = 0 then
            D̂′ ← D̂′ + [C[I_c]]
            if C[I_c] = '[M]' and C[I_c + 1] = '[M]' then
                I_c ← I_c + 1
                while C[I_c] = '[M]' do I_c ← I_c + 1 end
                if S_eoa = 1 then S ← 1 end
            else if C[I_c] = '[M]' and C[I_c + 1] = '[M]' then
                I_c ← I_c + 1
            end
        else if S = 1 then
            D̂′ ← D̂′ + [ỹ_t], l_g ← l_g + 1
            if S_eoa = 1 or l_g ≥ l_max then S ← 0, l_g ← 0 end
        end
        if S = 0 then x_in ← C[I_c]
        else if S = 1 then x_in ← ỹ_t end
    end
    return D̂′
Table 1: An example of a Wikipedia infobox and a reference text.

ID | Field       | Content
1  | Name        | Frank Fenner
2  | Born        | 21 December 1914, Ballarat
3  | Died        | 22 November 2010 (aged 95), Canberra
4  | Occupation  | Virology
5  | Nationality | Australian
6  | Known for   | Eradication of smallpox, control of Australia's rabbit plague

Text: Frank John Fenner (21 December 1914 - 22 November 2010) was an Australian scientist with a distinguished career in the field of virology. His two greatest achievements are cited as overseeing the eradication of smallpox, and the control of Australia's rabbit plague by introducing the Myxoma virus.
Table 2: The selected fields from WIKIBIO and their occurrence counts in WIKIBIO's training set. These are taken as the questions (Q's) in our proposed task.
Table 3: The overall performance of all competing methods. SMG (g) denotes that SMG uses the gold templates for partial generation, and SMG (p) denotes that SMG uses the predicted templates for partial generation.
Table 4: Performance of answer-related words selection.

                          | Random | Seq2Seq | SMG
BLEU (predicted template) |  21.5  |  59.5   | 89.1
Answer F1                 |  0.14  |  0.55   | 0.68
Footnotes:
1. To annotate a large parallel dataset, we need to prepare a document, a document-related question, and its expected answer. Then, the data grader should provide an adjusted version of the document that satisfies the expected answer, which requires the data grader to have a high education level.
2. In the Wikipedia infobox, "field" represents the type of information (such as Name, BirthDate, and Known for), while "content" represents the value of the "field".
3. https://sites.google.com/view/control-text-edition/home
4. Since D is the first paragraph of the Wikipedia page, it usually does not contain everything mentioned in the infobox, such as death cause and high school.
5. The sampling process is implemented with Gumbel-Softmax (Jang et al., 2016), which is differentiable.
6. α is set to 0.9, which is consistent with previous works (Liu et al., 2020).
Acknowledgments

This work was supported by the EPSRC grant "Unlocking the Potential of AI for English Law". We also acknowledge the use of Oxford's Advanced Research Computing (ARC) facility, of the EPSRC-funded Tier 2 facility JADE (EP/P020275/1), and of GPU computing support by Scan Computers International Ltd.

Appendices

A Human Evaluation Questions

Our annotators were asked the following questions in order to assess the correctness and the fluency of the modified documents provided by our model.

A.1 Correctness of the modified document

Q: Do you think the modification of the document is correct, so that it makes the question-answer pair (Q, A′) true? (For partially correct cases: partially correct means that some places are changed to the new answer while some places keep the old answer. Such a case is taken as correct only if all places that need to be changed have been changed.) Please choose "Yes" or "No".

After all human annotators finished their work, the correctness score is calculated by dividing the number of "Yes" answers by the total number of examples.

A.2 Fluency

Q: How fluent do you think the modified document is? Please choose a score according to the following description. Note that the score is not necessarily an integer; you can give scores like 3.2 or 4.9 if you deem it appropriate.

5: Very fluent.
4: Highly fluent.
3: Partially fluent.
2: Very unfluent.
1: Nonsense.

B Experiment Details

The word embedding size is 300. The BiLSTM in the selector model has hidden size 200. The hidden size of the decoder's LSTM cell is 200. The remaining hyperparameters have the following values: λ_r = 1.0, λ_eoa = 10. The hyperparameters were obtained by grid search; the search scopes are λ_r ∈ [0.0, 2.0] with step size 0.2 and λ_eoa ∈ [1, 20] with step size 1, and the hidden sizes were searched in [100, 500] with step size 50. The best hyperparameters are selected when the model achieves the highest answer F1 on the development set. The total parameter size is 72M. Each training epoch costs about 1.5 hours on a V100 GPU.
References

Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, Rerank and Rewrite: Soft Template Based Neural Summarization. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 152-161.
Tian Qi Chen, Xuechen Li, Roger B. Grosse, and David K. Duvenaud. 2018. Isolating Sources of Disentanglement in Variational Autoencoders. In Advances in Neural Information Processing Systems, pages 2610-2620.
Xi Chen, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. 2016. InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. In Advances in Neural Information Processing Systems, pages 2172-2180.
Chris Donahue, Mina Lee, and Percy Liang. 2020. Enabling Language Models to Fill in the Blanks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2492-2501.
Manaal Faruqui, Ellie Pavlick, Ian Tenney, and Dipanjan Das. 2018. WikiAtomicEdits: A Multilingual Corpus of Wikipedia Edits for Modeling Language and Discourse. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 305-315.
William Fedus, Ian Goodfellow, and Andrew M. Dai. 2018. MaskGAN: Better Text Generation via Filling in the ______. In International Conference on Learning Representations.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O.K. Li. 2016. Incorporating Copying Mechanism in Sequence-to-Sequence Learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1631-1640.
Kelvin Guu, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. 2018. Generating Sentences by Editing Prototypes. Transactions of the Association for Computational Linguistics, 6:437-450.
Tatsunori B. Hashimoto, Kelvin Guu, Yonatan Oren, and Percy S. Liang. 2018. A Retrieve-and-Edit Framework for Predicting Structured Outputs. In Advances in Neural Information Processing Systems, pages 10052-10062.
Eva Hasler, Adrià de Gispert, Gonzalo Iglesias, and Bill Byrne. 2018. Neural Machine Translation Decoding with Terminology Constraints. In Proceedings of NAACL-HLT 2018, Volume 2 (Short Papers), pages 506-512.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching Machines to Read and Comprehend. In Advances in Neural Information Processing Systems, pages 1693-1701.
Irina Higgins, Loic Matthey, Arka Pal, Christopher Burgess, Xavier Glorot, Matthew Botvinick, Shakir Mohamed, and Alexander Lerchner. 2017. beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework. In International Conference on Learning Representations.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long Short-Term Memory. Neural Computation, 9(8):1735-1780.
Chris Hokamp and Qun Liu. 2017. Lexically Constrained Decoding for Sequence Generation Using Grid Beam Search. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1535-1546.
J. Edward Hu, Huda Khayrallah, Ryan Culkin, Patrick Xia, Tongfei Chen, Matt Post, and Benjamin Van Durme. 2019. Improved Lexically Constrained Decoding for Translation and Monolingual Rewriting. In Proceedings of NAACL-HLT 2019, Volume 1 (Long and Short Papers), pages 839-850.
Eric Jang, Shixiang Gu, and Ben Poole. 2016. Categorical Reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144.
Vineet John, Lili Mou, Hareesh Bahuleyan, and Olga Vechtomova. 2019. Disentangled Representation Learning for Non-Parallel Text Style Transfer. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 424-434.
Hyunjik Kim and Andriy Mnih. 2018. Disentangling by Factorising. arXiv preprint arXiv:1802.05983.
Reinhard Kneser and Hermann Ney. 1995. Improved Backing-Off for M-gram Language Modeling. In 1995 International Conference on Acoustics, Speech, and Signal Processing, volume 1, pages 181-184. IEEE.
Klaus Krippendorff. 2004. Content Analysis: An Introduction to Its Methodology. Thousand Oaks, CA: Sage.
Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-Scale ReAding Comprehension Dataset From Examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 785-794.
Rémi Lebret, David Grangier, and Michael Auli. 2016. Neural Text Generation from Structured Data with Application to the Biography Domain. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1203-1213.
Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, and Zhifang Sui. 2018. Table-to-Text Generation by Structure-Aware Seq2Seq Learning. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
Xianggen Liu, Lili Mou, Fandong Meng, Hao Zhou, Jie Zhou, and Sen Song. 2020. Unsupervised Paraphrasing by Simulated Annealing. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 302-312.
Emile Mathieu, Tom Rainforth, Siddharth Narayanaswamy, and Yee Whye Teh. 2018. Disentangling Disentanglement in Variational Autoencoders. arXiv preprint arXiv:1812.02833.
Ning Miao, Hao Zhou, Lili Mou, Rui Yan, and Lei Li. 2019. CGMH: Constrained Sentence Generation by Metropolis-Hastings Sampling. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 6834-6842.
Daniel Moyer, Shuyang Gao, Rob Brekelmans, Aram Galstyan, and Greg Ver Steeg. 2018. Invariant Representations without Adversarial Training. In Advances in Neural Information Processing Systems, pages 9084-9093.
Gaurav Pandey, Danish Contractor, Vineet Kumar, and Sachindra Joshi. 2018. Exemplar Encoder-Decoder for Neural Conversation Generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1329-1338.
Sheena Panthaplackel, Miltiadis Allamanis, and Marc Brockschmidt. 2020. Copy That! Editing Sequences by Copying Spans. arXiv preprint arXiv:2006.04771.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Matt Post and David Vilar. 2018. Fast Lexically Constrained Decoding with Dynamic Beam Allocation for Neural Machine Translation. In Proceedings of NAACL-HLT 2018, Volume 1 (Long Papers), pages 1314-1324.
Pranav Rajpurkar, Robin Jia, and Percy Liang. 2018. Know What You Don't Know: Unanswerable Questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 784-789.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ Questions for Machine Comprehension of Text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
Matthew Richardson, Christopher J.C. Burges, and Erin Renshaw. 2013. MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193-203.
Alexey Romanov, Anna Rumshisky, Anna Rogers, and David Donahue. 2019. Adversarial Decomposition of Text Representation. In Proceedings of NAACL-HLT 2019, Volume 1 (Long and Short Papers), pages 815-825.
Lei Sha. 2020. Gradient-Guided Unsupervised Lexically Constrained Text Generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 8692-8703.
Lei Sha, Oana-Maria Camburu, and Thomas Lukasiewicz. 2021. Learning from the Best: Rationalizing Predictions by Adversarial Information Calibration. In Proceedings of the 35th AAAI Conference on Artificial Intelligence.
Lei Sha and Thomas Lukasiewicz. 2021. Multi-type Disentanglement Without Adversarial Training. In Proceedings of the 35th AAAI Conference on Artificial Intelligence.
Lei Sha, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. 2018. Order-Planning Neural Text Generation From Structured Data. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence.
Jan Stühmer, Richard E. Turner, and Sebastian Nowozin. 2019. Independent Subspace Analysis for Unsupervised Learning of Disentangled Representations. arXiv preprint arXiv:1909.05063.
Hong Sun and Ming Zhou. 2012. Joint Learning of a Dual SMT System for Paraphrase Generation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 38-42.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. Advances in Neural Information Processing Systems, 30:5998-6008.
Yu Wu, Furu Wei, Shaohan Huang, Yunli Wang, Zhoujun Li, and Ming Zhou. 2019. Response Generation by Context-Aware Prototype Editing. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, pages 7281-7288.
Pengcheng Yin, Graham Neubig, Miltiadis Allamanis, Marc Brockschmidt, and Alexander L. Gaunt. 2018. Learning to Represent Edits. arXiv preprint arXiv:1810.13337.
Changchang Zeng, Shaobo Li, Qin Li, Jie Hu, and Jianjun Hu. 2020. A Survey on Machine Reading Comprehension: Tasks, Evaluation Metrics and Benchmark Datasets. Applied Sciences, 10(21):7640.
Wanrong Zhu, Zhiting Hu, and Eric Xing. 2019. Text Infilling. arXiv preprint arXiv:1901.00158.
IMPROVING CROSS-MODAL UNDERSTANDING IN VISUAL DIALOG VIA CONTRASTIVE LEARNING

Feilong Chen (chenfeilong2018@ia.ac.cn), Xiuyi Chen (chenxiuyi2017@ia.ac.cn), Shuang Xu (shuang.xu@ia.ac.cn), Bo Xu (xubo@ia.ac.cn)
Institute of Automation, Chinese Academy of Sciences, Beijing, China
School of Future Technology, University of Chinese Academy of Sciences, Beijing, China

arXiv:2204.07302 | DOI: 10.1109/icassp43922.2022.9747769 | https://arxiv.org/pdf/2204.07302v1.pdf
Index Terms: Visual Dialog, Cross-modal Understanding, Contrastive Learning
Visual Dialog is a challenging vision-language task, since the visual dialog agent needs to answer a series of questions after reasoning over both the image content and the dialog history. Though existing methods try to deal with cross-modal understanding in visual dialog, they are still not sufficient in ranking candidate answers based on their understanding of visual and textual contexts. In this paper, we analyze cross-modal understanding in visual dialog based on the vision-language pre-training model VD-BERT and propose a novel approach to improve cross-modal understanding for visual dialog, named ICMU. ICMU enhances cross-modal understanding by distinguishing different pulled inputs (i.e., pulled images, questions, or answers) based on four-way contrastive learning. In addition, ICMU exploits single-turn visual question answering to enhance the visual dialog model's cross-modal understanding to handle a multi-turn visually-grounded conversation. Experiments show that the proposed approach improves the visual dialog model's cross-modal understanding and brings satisfactory gains on the VisDial dataset.
INTRODUCTION
Recently, with the rise of pre-trained models [2], researchers have begun to explore vision-and-language tasks [3, 4, 5] with pre-trained models [1]. Specifically, visual dialog [6, 7, 8, 9], which aims to hold a meaningful conversation with a human about a given image, is a challenging task that requires models to have sufficient cross-modal understanding, based on both visual and textual context, to answer the current question.
One way to gain sufficient cross-modal understanding is to utilize various kinds of attention mechanisms [10, 11, 12]. ReDAN [13] and DMAM [14] use multi-step reasoning based on dual attention to learn cross-modal understanding. DAN [15], MCAN [7], and LTMI [16] utilize multi-head attention mechanisms to manage multimodal interaction. Moreover, some approaches [17, 18, 19, 20, 21] use graph-based structures to learn cross-modal understanding.
However, the approaches mentioned above do not utilize pre-trained models, which have a strong capacity for vision-and-language tasks. Visdial-BERT [22] and VD-BERT [1] take advantage of pre-trained models to greatly improve the performance of the visual dialog task. However, as shown in Fig. 1, the SOTA model VD-BERT often makes mistakes and usually ranks wrong answers first: it does not have sufficient cross-modal understanding capabilities, so it often scores unrelated wrong answers very high, such as the top-1 candidate answer "no" to the question Q4 "is the food in his mouth?" shown in Fig. 1.

Fig. 1: The candidate ranking results of VD-BERT based on its cross-modal understanding. Among the first 8 candidates, wrong answers account for most of them, and the ranking results of the correct answers are not good.
In this paper, we propose a novel approach to improve the cross-modal understanding for visual dialog, named ICMU. ICMU enhances cross-modal understanding by distinguishing different polluted inputs (i.e. polluted images, questions or answers) based on four-way contrastive learning. What's more, ICMU exploits single-turn visual question answering to enhance the visual dialog model's cross-modal understanding to handle a multi-turn visually-grounded conversation. Experiments show that the proposed approach improves the visual dialog model's cross-modal understanding and brings satisfactory gains on the VisDial dataset [5]. The contributions of this work are summarized as follows:
• We propose a novel approach, ICMU, including 4-way contrastive learning and enhancement by utilizing VQA, to improve the cross-modal understanding of vision-and-language pre-trained models for visual dialog.
• We conduct extensive experiments and ablation studies on the large-scale dataset VisDial v1.0. Experimental results show that our approach improves the visual dialog model's cross-modal understanding and brings satisfactory gains.
METHODOLOGY
In this section, we first formally describe the visual dialog task. Given a current question $Q_t$ with an image $I$ at the $t$-th turn, as well as its dialog history $H_t = \{C, (Q_1, A_1), \ldots, (Q_{t-1}, A_{t-1})\}$ (where $C$ denotes the image caption), the dialog model is required to predict its answer $A_t$ by ranking a list of 100 answer candidates $\{\hat{A}_t^1, \hat{A}_t^2, \ldots, \hat{A}_t^{100}\}$. Figure 2 shows the overview of our approach. First, we employ a unified vision-dialog Transformer to encode both the image and the dialog history, where we append an answer candidate $\hat{A}_t$ to the input to model their interactions in an early fusion manner. Next, we adopt a cross-modal masked token loss and a cross-modal contrastive loss to train the model for effective cross-modal understanding in visual dialog. In addition, we exploit single-turn visual question answering to enhance the visual dialog model's cross-modal understanding to handle a multi-turn visually-grounded conversation.
Vision-Dialog Transformer
Visual Features.
Given an image $I$, we employ Faster R-CNN [23] pre-trained on Visual Genome [24] to extract the object-level vision features $R_I = \{o_1, \ldots, o_k\}$, where each object feature $o_i$ is a 2048-d Region-of-Interest (RoI) feature. $k$ is fixed to 36 in our setting. In addition, we adopt normalized bounding box coordinates as the spatial location due to the disorder of visual objects. Specifically, we define the location information by constructing a 5-d vector:

$$p_i = \left(\frac{x_1}{W}, \frac{y_1}{H}, \frac{x_2}{W}, \frac{y_2}{H}, \frac{(x_2 - x_1)(y_2 - y_1)}{WH}\right),$$

where $(x_1, y_1)$ and $(x_2, y_2)$ are the coordinates of the bottom-left and top-right corners of the $i$-th object, $W$ and $H$ respectively denote the width and height of the input image, and the last element is the relative area of the object. We also extend $p_i$ with its class id and confidence score to a 7-d vector for a richer representation.
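As a minimal illustration of this spatial encoding, the sketch below builds the 7-d position vector from one detected bounding box; the helper name and inputs are our own assumptions, not code from the paper.

```python
import numpy as np

def position_feature(box, W, H, class_id, confidence):
    """Build the 7-d spatial feature for one detected object.

    box:  (x1, y1, x2, y2) corner coordinates from the detector.
    W, H: width and height of the input image.
    """
    x1, y1, x2, y2 = box
    p = [
        x1 / W, y1 / H,                    # normalised first corner
        x2 / W, y2 / H,                    # normalised opposite corner
        (x2 - x1) * (y2 - y1) / (W * H),   # relative area of the object
        class_id,                          # detector class id (extension)
        confidence,                        # detector confidence (extension)
    ]
    return np.asarray(p, dtype=np.float32)

print(position_feature((10, 20, 110, 220), W=640, H=480,
                       class_id=7, confidence=0.93))
```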
Textual Features.
For the textual features, we pack all the textual elements (the history, question and answer candidate) into a long sequence and employ the WordPiece tokenizer [25] to split it into a word sequence $w$, where each word is embedded with an absolute positional code following [26].
Cross-Modality Encoding.
Like most vision-and-language Transformers, we integrate the image objects with the language elements into a whole input sequence. As shown in Figure 2, we use some special tokens to segment the different elements in the input sequence. We use [CLS] to denote the beginning of the sequence, and [SEP] to separate the two modalities. Moreover, we utilize a special token [HIS] to denote the end of a turn [27], which informs the model when the dialog turn ends. And we use [Ques] and [Ans] to segment the current question and the answer candidate. As such, we prepare the input sequence in the format $x = ([\text{CLS}], o_1, \ldots, o_k, [\text{SEP}], C, [\text{His}], Q_1 A_1, [\text{His}], \ldots, [\text{Ques}], Q_t, [\text{Ans}], \hat{A}_t, [\text{SEP}])$. Finally, we combine each input token embedding with its position embedding and segment embedding (0 or 1, indicating whether it is image or text) and then perform layer normalization [28].
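The sketch below shows one plausible way to assemble this input sequence; the special token strings mirror the format above, while the function name, the segment-id convention for [CLS], and the string-level tokenisation are illustrative assumptions.

```python
def build_input_sequence(objects, caption, history, question, answer_cand):
    """Pack image objects and dialog text into one token sequence.

    objects: list of k object placeholders standing in for RoI features.
    history: list of (question, answer) string pairs, oldest first.
    """
    seq = ["[CLS]", *objects, "[SEP]", *caption.split()]
    for q, a in history:                 # each turn ends with a [His] marker
        seq += ["[His]", *q.split(), *a.split()]
    seq += ["[Ques]", *question.split(),
            "[Ans]", *answer_cand.split(), "[SEP]"]
    n_img = 2 + len(objects)             # [CLS] + objects + [SEP] = image side
    segments = [0] * n_img + [1] * (len(seq) - n_img)  # 0: image, 1: text
    positions = list(range(len(seq)))    # absolute position ids
    return seq, segments, positions

seq, seg, pos = build_input_sequence(
    objects=[f"<obj{i}>" for i in range(3)],
    caption="a man eating a sandwich",
    history=[("is he sitting ?", "yes")],
    question="is the food in his mouth ?",
    answer_cand="yes it is",
)
print(seq)
```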
Transformer Backbone.
We utilize a Transformer encoder as the backbone to handle cross-modal understanding. Formally, we denote the embedded vision-language inputs as $H^0 = [e_1, \ldots, e_{|x|}]$ and then encode them into multiple levels of cross-modal representations $H^l = [h^l_1, \ldots, h^l_{|x|}]$ using $L$ stacked Transformer blocks, where the $l$-th Transformer block is denoted as $H^l = \mathrm{Transformer}(H^{l-1})$, $l \in [1, L]$. Specifically, the cross-modal representation $H^l$ is calculated using multi-head self-attention [29] as follows:

$$Q = H^{l-1}W^Q_l,\quad K = H^{l-1}W^K_l,\quad V = H^{l-1}W^V_l, \tag{1}$$

$$A^l = \mathrm{softmax}\left(\frac{QK^T}{\sqrt{d_k}} + M\right)V, \tag{2}$$

where $W^Q_l, W^K_l, W^V_l \in \mathbb{R}^{d_h \times d_k}$ are learnable weights for computing the queries, keys, and values respectively, and $M \in \mathbb{R}^{|x| \times |x|}$ is the self-attention mask that determines whether tokens from the two modalities can attend to each other:

$$M_{ij} = \begin{cases} 0, & \text{allow to attend},\\ -\infty, & \text{prevent from attending}. \end{cases} \tag{3}$$

The block output is then obtained by applying a feed-forward network (FFN):

$$H^l = \mathrm{FFN}(A^l) \tag{4}$$
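As a minimal single-head NumPy sketch of Eqns. (1)-(3), the code below applies the additive mask inside the softmax; the shapes, random weights and the helper names are our own assumptions, not the model's actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention_block(H, Wq, Wk, Wv, mask):
    """One single-head self-attention step with an additive mask.

    H:    (n, d_h) token representations from the previous layer.
    mask: (n, n) matrix with 0 (attend) or -inf (do not attend).
    """
    Q, K, V = H @ Wq, H @ Wk, H @ Wv                  # Eqn. (1)
    d_k = Q.shape[-1]
    return softmax(Q @ K.T / np.sqrt(d_k) + mask) @ V # Eqns. (2)-(3)

n, d_h, d_k = 6, 8, 8
rng = np.random.default_rng(0)
H = rng.normal(size=(n, d_h))
Wq, Wk, Wv = (rng.normal(size=(d_h, d_k)) for _ in range(3))
mask = np.zeros((n, n))   # bidirectional: every token may attend everywhere
print(self_attention_block(H, Wq, Wk, Wv, mask).shape)
# A full block would additionally apply the FFN of Eqn. (4).
```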
Cross-Modal Training Objectives
To make the model learn cross-modal understanding, we use two cross-modal training losses, the cross-modal masked token loss and the cross-modal contrastive loss:

$$\mathcal{L} = \mathcal{L}_{CMTL} + \mathcal{L}_{CCL4}, \tag{5}$$

where $\mathcal{L}_{CMTL}$ is the cross-modal masked token loss and $\mathcal{L}_{CCL4}$ is a novel 4-way contrastive loss.
Cross-modal Masked Token Loss
At each iteration, we randomly mask each input token with probability 15% and replace the masked one with a special token [MASK].
The model is then required to recover them based not only on the surrounding tokens $w_{\backslash m}$ but also on the image $I$ by minimizing the negative log-likelihood:

$$\mathcal{L}_{CMTL} = -\mathbb{E}_{(I,w)\sim D} \log P(w_m \,|\, w_{\backslash m}, I), \tag{6}$$

where $w_m$ refers to the masked token and $D$ denotes the training set.
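The following sketch illustrates the 15% random masking step on a token sequence; keeping special tokens unmasked and the mask-token string are assumptions made for the example.

```python
import random

MASK_TOKEN = "[MASK]"

def mask_tokens(tokens, p=0.15, seed=None):
    """Randomly replace tokens with [MASK]; return masked seq and targets."""
    rng = random.Random(seed)
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if rng.random() < p and not tok.startswith("["):  # skip special tokens
            masked.append(MASK_TOKEN)
            targets[i] = tok   # positions the model must recover, Eqn. (6)
        else:
            masked.append(tok)
    return masked, targets

tokens = "is the food in his mouth".split()
print(mask_tokens(tokens, seed=1))
```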
Cross-modal Contrastive Loss
As shown in Figure 2, to compute the contrastive losses, for each input quartette $X = (I, H, Q, A)$ we construct three types of negative (unmatched) quartettes, where $I$ denotes the image, $H$ the history, $Q$ the question, and $A$ the answer. The first one is the polluted image $(I^*, H, Q, A)$, the second is the polluted question $(I, H, Q^*, A)$ and the final one is the polluted answer $(I, H, Q, A^*)$, where $*$ denotes the polluted input. Since the encoding of [CLS] can be viewed as a representation of the quartette $X = (I, H, Q, A)$, we apply a fully-connected (FC) layer on top of it as a 4-way classifier $f(\cdot)$ to predict whether the quartette is matched ($c = 0$), contains a polluted $I^*$ ($c = 1$), contains a polluted $Q^*$ ($c = 2$), or contains a polluted $A^*$ ($c = 3$). The 4-way contrastive loss is defined as

$$\mathcal{L}_{CCL4} = -\mathbb{E}_{(I,H,Q,A;c)\sim D} \log P(c \,|\, f(I, H, Q, A)), \tag{7}$$

where the dataset $D$ contains 50% matched quartettes $(I, H, Q, A)$, and the three types of negatives evenly divide the remaining 50% of the training set.

Table 3. Ablation study on the VisDial v1.0 val dataset. "VQA" denotes enhancement by utilizing VQA. "CL" denotes the 4-way contrastive learning.
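A compact sketch of how the polluted quartettes and the 4-way labels described above could be generated; the pollution strategy of swapping in elements from another training sample is our illustrative assumption.

```python
import random

def make_quartette(sample, pool, rng=random):
    """Build one training quartette with a 4-way label.

    sample: matched (image, history, question, answer) tuple.
    pool:   other samples to steal polluted elements from.
    Returns ((I, H, Q, A), c) with c = 0 matched, 1/2/3 polluted I/Q/A.
    """
    I, H, Q, A = sample
    if rng.random() < 0.5:
        return (I, H, Q, A), 0        # 50% matched quartettes
    c = rng.choice([1, 2, 3])         # negatives evenly split the rest
    other = rng.choice(pool)
    if c == 1:
        I = other[0]                  # polluted image I*
    elif c == 2:
        Q = other[2]                  # polluted question Q*
    else:
        A = other[3]                  # polluted answer A*
    return (I, H, Q, A), c
```

Training then minimises the cross-entropy of the 4-way classifier over the label c, as in Eqn. (7).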
Using VQA to Enhance Visual Dialog
Although VQA is single-turn, VQA models and visual dialog models require similar cross-modal understanding capabilities. We therefore use VQA to enhance visual dialog. We exploit the training and val splits of the VQA v2.0 dataset, which contains the same images as the VisDial v1.0 train split. As there are no captions for the images in VQA v2.0, we use VisDial v1.0 to construct a caption for each image in VQA v2.0. Thus each input from VQA v2.0 can be defined as (I, C, Q, A), where I denotes the image, C the constructed caption, Q the question, and A the answer. We let the history H be null.
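A sketch of this data construction, assuming the VisDial captions can be looked up by a shared image id; the dictionary layout and field names are illustrative.

```python
def vqa_to_dialog_inputs(vqa_samples, visdial_captions):
    """Convert single-turn VQA samples into visual-dialog-style inputs.

    vqa_samples:       iterable of dicts with image_id, question, answer.
    visdial_captions:  image_id -> caption taken from VisDial v1.0.
    """
    inputs = []
    for s in vqa_samples:
        inputs.append({
            "image": s["image_id"],
            "caption": visdial_captions[s["image_id"]],  # constructed caption C
            "history": [],          # H is null: VQA is single-turn
            "question": s["question"],
            "answer": s["answer"],
        })
    return inputs
```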
EXPERIMENTS
Experiment Setup
Datasets and Implementation Details.
We evaluate our model on the VisDial v1.0 dataset [30]. Specifically, v1.0 contains a training set of 123,287 images, a validation set of 2,048 images and a testing set (hosted blindly on the task organizers' server) of 8,000 images. Each image is associated with one caption and 10 question-answer pairs. Each question is paired with a list of 100 answer candidates, one of which is regarded as the correct answer. VQA v2.0 contains the same 123,287 images as VisDial v1.0 but different question-answer pairs. We use BERT_BASE as the backbone, which consists of 12 Transformer blocks, each with 12 attention heads and a hidden state dimension of 768. We use Adam [31] with an initial learning rate of 3e-5 and a batch size of 80 to train our model. A linear learning rate decay schedule with a warmup ratio of 0.1 is employed. We first train our model for 20 epochs on a cluster of 4 A100 GPUs with 40G memory using the CMTL and CCL4 losses (with equal coefficients). Here we only utilize one previous dialog turn for training efficiency. After that, we train for another 15 epochs using only the CCL4 loss. During inference, we rank the answer candidates according to the score of class c = 0 from the CCL4 classifier.
Automatic Evaluation
We use a retrieval setting to evaluate individual responses at each round of a dialog, following [5]. Specifically, at test time, apart from the image, the ground truth dialog history and the question, a list of 100 candidate answers is also given. The model is evaluated on the following retrieval metrics: (1) Mean Rank of the human response (Mean ↓), (2) existence of the human response in the top-$k$ ranked responses, i.e., R@$k$ ↑, (3) Mean Reciprocal Rank (MRR ↑) of the human response, and (4) Normalized Discounted Cumulative Gain (NDCG ↑) for VisDial v1.0.
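A small sketch of these retrieval metrics given the 1-based rank of the human response among the 100 candidates; NDCG is omitted here since it requires per-candidate relevance annotations.

```python
def retrieval_metrics(ranks, ks=(1, 5, 10)):
    """Compute Mean rank, MRR and R@k from 1-based ranks of human responses."""
    n = len(ranks)
    mean_rank = sum(ranks) / n                        # Mean (lower is better)
    mrr = sum(1.0 / r for r in ranks) / n             # MRR (higher is better)
    recall = {k: sum(r <= k for r in ranks) / n for k in ks}  # R@k
    return mean_rank, mrr, recall

print(retrieval_metrics([1, 3, 20, 2]))
```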
Main Results
Baseline Methods
We compare our method with the following baseline methods: (1) Attention-based models: HCIAE [10], CoAtt [11], ReDAN [13], and LG [32]. (2) Pre-training models: VD-BERT [1] and VisDial-BERT [22]. (3) Graph-based models: GNN-EM [17], DualVD [19], FGA [18], GoG [6], and KBGN [21].
Results
Performance on the VisDial benchmarks is shown in Table 1 and Table 2. From the results on the VisDial v1.0 test split shown in Table 1, we can observe that: (1) ICMU outperforms previous works on all metrics and obtains R@1 of 53.50%, beating the previous method VD-BERT by 1.47%, which shows that ICMU can select the ground-truth answer more accurately. (2) Comparing the performance of ICMU and VD-BERT on NDCG, ICMU beats the pre-trained model VD-BERT by 1.34%. This shows the superiority of our proposed method in understanding cross-modal information at a fine-grained level. Note that NDCG is invariant to the order of options with identical relevance and to the order of options outside of the top K, where K is the number of answers marked as correct by at least one annotator.
(3) Our approach is not only more accurate (R@1, Mean), but also better than previous models on multi-modal semantic understanding (NDCG).
From the results on the VisDial v1.0 val split shown in Table 2, we can get the same observations. From the ablation study on VisDial v1.0 val shown in Table 3, we can observe that: (1) Both cross-modal contrastive learning and enhancement by VQA bring satisfactory improvements. (2) Cross-modal contrastive learning and enhancement by VQA complement each other and further improve the performance of the model.
Case Study
As shown in Figure 3, we provide two samples to analyze the cross-modal understanding of VD-BERT and ICMU. As shown in the left half of Figure 3, for Q4 "Does he have food in his mouth?", there are many reasonable answers to this question. VD-BERT ranks the opposite answer "no" first, and many reasonable answers such as "yes, it is, it is" are ranked lower. As shown in the right half of Figure 3, for Q4 "are there people on the bus?", ICMU outperforms VD-BERT. This shows that ICMU learns better cross-modal understanding than VD-BERT due to CCL4 and the enhancement by VQA.
CONCLUSION
In this paper, we propose a novel approach to improve the cross-modal understanding for visual dialog, named ICMU. ICMU enhances the cross-modal understanding in visual dialog by distinguishing different polluted inputs based on 4-way contrastive learning. In addition, ICMU exploits single-turn visual question answering to enhance the visual dialog model's cross-modal understanding. Experiments show that the proposed approach improves the visual dialog model's cross-modal understanding and brings satisfactory gains on the VisDial dataset.
ACKNOWLEDGEMENT
This work was supported by the National Key R&D Program of China under Grant No. 2018YFB1005104, the Key Research Program of the Chinese Academy of Sciences under Grant No. ZDBS-SSW-JSC006, and the Strategic Priority Research Program of the Chinese Academy of Sciences under Grant No. XDA27030300.

Fig. 1. A motivating example of the cross-modal understanding of VD-BERT [1]. We show the candidate ranking results of VD-BERT based on its cross-modal understanding. It can be seen that among the first 8 candidates, wrong answers account for most of them, and the ranking results of the correct answers are not good.

Fig. 2. The framework of our ICMU. * indicates the polluted inputs.

Fig. 3. Case study.
REFERENCES

[1] Yue Wang, Shafiq Joty, et al., "VD-BERT: A unified vision and dialog transformer with BERT," arXiv preprint arXiv:2004.13278, 2020.
[2] Xiaoqi Jiao, Yichun Yin, et al., "TinyBERT: Distilling BERT for natural language understanding," arXiv preprint arXiv:1909.10351, 2019.
[3] Mengye Ren, Ryan Kiros, and Richard Zemel, "Exploring models and data for image question answering," in Advances in Neural Information Processing Systems, 2015, pp. 2953-2961.
[4] Kelvin Xu, Jimmy Ba, et al., "Show, attend and tell: Neural image caption generation with visual attention," in ICML, 2015, pp. 2048-2057.
[5] Abhishek Das, Satwik Kottur, et al., "Visual dialog," in CVPR, 2017, pp. 326-335.
[6] Feilong Chen, Xiuyi Chen, et al., "GoG: Relation-aware graph-over-graph network for visual dialog," in Findings of ACL, 2021.
[7] Shubham Agarwal, Trung Bui, Joon-Young Lee, Ioannis Konstas, and Verena Rieser, "History for visual dialog: Do we really need it?," arXiv preprint arXiv:2005.07493, 2020.
[8] Feilong Chen, Fandong Meng, Xiuyi Chen, Peng Li, and Jie Zhou, "Multimodal incremental transformer with visual grounding for visual dialogue generation," in Findings of ACL, 2021.
[9] Jiaxin Qi, Yulei Niu, Jianqiang Huang, and Hanwang Zhang, "Two causal principles for improving visual dialog," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2020.
[10] Jiasen Lu, Anitha Kannan, Jianwei Yang, Devi Parikh, and Dhruv Batra, "Best of both worlds: Transferring knowledge from discriminative learning to a generative visual dialog model," in Advances in Neural Information Processing Systems, 2017, pp. 314-324.
[11] Qi Wu, Peng Wang, Chunhua Shen, Ian Reid, and Anton van den Hengel, "Are you talking to me? Reasoned visual dialog generation through adversarial learning," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6106-6115.
[12] Satwik Kottur, José M. F. Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach, "Visual coreference resolution in visual dialog using neural module networks," ArXiv, vol. abs/1809.01816, 2018.
[13] Zhe Gan, Yu Cheng, Ahmed EI Kholy, Linjie Li, Jingjing Liu, and Jianfeng Gao, "Multi-step reasoning via recurrent dual attention for visual dialog," in ACL, 2019, pp. 6463-6474.
[14] Feilong Chen, Fandong Meng, Jiaming Xu, Peng Li, Bo Xu, and Jie Zhou, "DMRM: A dual-channel multi-hop reasoning model for visual dialog," in Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.
[15] Dan Guo, Hui Wang, and Meng Wang, "Dual visual attention network for visual dialog," pp. 4989-4995, 2019.
[16] Van-Quang Nguyen, Masanori Suganuma, and Takayuki Okatani, "Efficient attention mechanism for visual dialog that can handle all the interactions between multiple inputs," in Proceedings of the European Conference on Computer Vision, 2020.
[17] Zilong Zheng, Wenguan Wang, Siyuan Qi, and Song-Chun Zhu, "Reasoning visual dialogs with structural and partial observations," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 6669-6678.
[18] Idan Schwartz, Seunghak Yu, Tamir Hazan, and Alexander G. Schwing, "Factor graph attention," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 2039-2048.
[19] Xiaoze Jiang, Jing Yu, Zengchang Qin, Yingying Zhuang, Xingxing Zhang, Yue Hu, and Qi Wu, "DualVD: An adaptive dual encoding model for deep visual understanding in visual dialogue," in AAAI, 2020, vol. 1, p. 5.
[20] Dan Guo, Hui Wang, Hanwang Zhang, Zheng-Jun Zha, and Meng Wang, "Iterative context-aware graph inference for visual dialog," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 10055-10064.
[21] Xiaoze Jiang, Siyi Du, Zengchang Qin, Yajing Sun, and Jing Yu, "KBGN: Knowledge-bridge graph network for adaptive vision-text reasoning in visual dialogue," in Proceedings of the 28th ACM International Conference on Multimedia, 2020.
[22] Vishvak Murahari, Dhruv Batra, Devi Parikh, and Abhishek Das, "Large-scale pretraining for visual dialog: A simple state-of-the-art baseline," in Proceedings of the European Conference on Computer Vision, 2020.
[23] Shaoqing Ren, Kaiming He, et al., "Faster R-CNN: Towards real-time object detection with region proposal networks," in NeurIPS, 2015, pp. 91-99.
[24] Ranjay Krishna, Yuke Zhu, et al., "Visual genome: Connecting language and vision using crowdsourced dense image annotations," IJCV, vol. 123, no. 1, pp. 32-73, 2017.
[25] Yonghui Wu, Mike Schuster, et al., "Google's neural machine translation system: Bridging the gap between human and machine translation," CoRR, vol. abs/1609.08144, 2016.
[26] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL-HLT, 2019, pp. 4171-4186.
[27] Taesun Whang, Dongyub Lee, Chanhee Lee, Kisu Yang, Dongsuk Oh, and Heuiseok Lim, "Domain adaptive training BERT for response selection," CoRR, vol. abs/1908.04812, 2019.
[28] Lei Jimmy Ba, Jamie Ryan Kiros, and Geoffrey E. Hinton, "Layer normalization," CoRR, vol. abs/1607.06450, 2016.
[29] Ashish Vaswani, Noam Shazeer, et al., "Attention is all you need," in NeurIPS, 2017, pp. 5998-6008.
[30] Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, and Dhruv Batra, "Visual dialog," in 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21-26, 2017, pp. 1080-1089.
[31] Diederik P. Kingma and Jimmy Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
[32] Feilong Chen, Xiuyi Chen, Can Xu, and Daxin Jiang, "Learning to ground visual objects for visual dialog," arXiv preprint arXiv:2109.06013, 2021.
| [] |
[
"HIGH ORDER RECURRENT NEURAL NETWORKS FOR ACOUSTIC MODELLING",
"HIGH ORDER RECURRENT NEURAL NETWORKS FOR ACOUSTIC MODELLING"
] | [
"C Zhang \nCambridge University Engineering Dept\nTrumpington StCB2 1PZCambridgeU.K\n",
"P C Woodland \nCambridge University Engineering Dept\nTrumpington StCB2 1PZCambridgeU.K\n"
] | [
"Cambridge University Engineering Dept\nTrumpington StCB2 1PZCambridgeU.K",
"Cambridge University Engineering Dept\nTrumpington StCB2 1PZCambridgeU.K"
] | [] | Vanishing long-term gradients are a major issue in training standard recurrent neural networks (RNNs), which can be alleviated by long short-term memory (LSTM) models with memory cells. However, the extra parameters associated with the memory cells mean an LSTM layer has four times as many parameters as an RNN with the same hidden vector size. This paper addresses the vanishing gradient problem using a high order RNN (HORNN) which has additional connections from multiple previous time steps. Speech recognition experiments using British English multi-genre broadcast (MGB3) data showed that the proposed HORNN architectures for rectified linear unit and sigmoid activation functions reduced word error rates (WER) by 4.2% and 6.3% over the corresponding RNNs, and gave similar WERs to a (projected) LSTM while using only 20%-50% of the recurrent layer parameters and computation. | 10.1109/icassp.2018.8461608 | [
"https://arxiv.org/pdf/1802.08314v1.pdf"
] | 3,508,892 | 1802.08314 | 4eb4f21f8f1b07fa7f372f82632fb98b8434536e |
HIGH ORDER RECURRENT NEURAL NETWORKS FOR ACOUSTIC MODELLING
C Zhang
Cambridge University Engineering Dept
Trumpington StCB2 1PZCambridgeU.K
P C Woodland
Cambridge University Engineering Dept
Trumpington StCB2 1PZCambridgeU.K
HIGH ORDER RECURRENT NEURAL NETWORKS FOR ACOUSTIC MODELLING
Vanishing long-term gradients are a major issue in training standard recurrent neural networks (RNNs), which can be alleviated by long short-term memory (LSTM) models with memory cells. However, the extra parameters associated with the memory cells mean an LSTM layer has four times as many parameters as an RNN with the same hidden vector size. This paper addresses the vanishing gradient problem using a high order RNN (HORNN) which has additional connections from multiple previous time steps. Speech recognition experiments using British English multi-genre broadcast (MGB3) data showed that the proposed HORNN architectures for rectified linear unit and sigmoid activation functions reduced word error rates (WER) by 4.2% and 6.3% over the corresponding RNNs, and gave similar WERs to a (projected) LSTM while using only 20%-50% of the recurrent layer parameters and computation.
INTRODUCTION
A recurrent neural network (RNN) is an artificial neural network layer where the hidden layer outputs from the previous time step form part of the input used to process the current time step [1,2]. This allows information to be preserved through time and is well suited to sequence processing problems, such as acoustic and language modelling for automatic speech recognition [3,4]. However, training RNNs with sigmoid activation functions by gradient descent can be difficult. The key issues are exploding and vanishing gradients [5], i.e., the long-term gradients, which are back-propagated through time, can either continually increase (explode) or decrease to zero (vanish). This causes RNN training to either fail to capture long-term temporal relations or for standard update steps to put parameters out of range.

Many methods have been proposed to solve the gradient exploding and vanishing problems. While simple gradient clipping has been found to work well in practice to prevent gradients exploding [4], circumventing vanishing gradients normally requires more sophisticated strategies [6]. For instance, [7] uses Hessian-Free training, which makes use of second-order derivative information. Modifying the recurrent layer structure is another approach. The use of both rectified linear unit (ReLU) and sigmoid activation functions with trainable amplitudes was proposed to maintain the magnitude of RNN long-term gradients [8][9][10]. A gating technique is used in the long short-term memory (LSTM) model, where additional parameters implement a memory circuit which can remember long-term information from the recurrent layer [11]. A model similar to the LSTM is the gated recurrent unit [12]. More recently, additional residual [13] and highway connections [14] were proposed to train very deep feed-forward models, which allows gradients to pass more easily through many layers. Various similar ideas have been applied to recurrent models [15][16][17][18][19][20]. Among these approaches, the LSTM has recently become the dominant type of recurrent architecture. However LSTMs, due to the extra parameters associated with gating, use four times as many parameters as standard RNNs with the same hidden vector size, which significantly increases storage and computation in both training and testing.

(Thanks to Mark Gales and the MGB3 team for the MGB3 setup used.)
In this paper, we propose another RNN modification, the high order RNN (HORNN), as an alternative to the LSTM. It handles vanishing gradients by adding connections from hidden state values at multiple previous time steps to the RNN input. By interpreting the RNN layer hidden vector as a continuous valued hidden state, the connections are termed high order since they introduce dependencies on multiple previous hidden states. Acoustic modelling using HORNNs is investigated for both sigmoid and ReLU activation functions. In the sigmoid case, it is found that additional high order connections are beneficial. Furthermore, analogous to the projected LSTM (LSTMP) [22], a linear recurrent projection layer can be used by HORNNs to reduce the number of parameters, which results in the projected HORNN (HORNNP). Experimental results show that the HORNN/HORNNP (both sigmoid and ReLU) have similar word error rates (WERs) to LSTM/LSTMP models with the same hidden vector size, while using fewer than half the parameters and computation. Furthermore, HORNNs were also found to outperform RNNs with residual connections in terms of both speed and WER. This paper is organised as follows. Section 2 reviews RNN and LSTM models. The (conditional) Markov property of RNNs is described in Sec. 3, which leads to HORNNs and architectures for both sigmoid and ReLU activation functions. The experimental setup and results are given in Sec. 4 and Sec. 5, followed by conclusions.
RNN AND LSTM MODELS
In this paper, an RNN refers to an Elman network [2] that produces its output hidden vector at step t, ht, based on the previous output ht−1 and the current input xt by
$$h_t = f(a_t) = f(W x_t + U h_{t-1} + b), \tag{1}$$
where $W$ and $U$ are the weights, $b$ is the bias, and $f(\cdot)$ and $a_t$ are the activation function and its input activation value. In general, $h_t$ is processed by a number of further layers to obtain the final network output. It is well known that when $f(\cdot)$ is the sigmoid, denoted $\sigma(\cdot)$, RNNs suffer from the vanishing gradient issue, since

$$\frac{\partial \sigma(a_t)}{\partial a_t} = \sigma(a_t)\,(1 - \sigma(a_t)) \leq \frac{1}{4},$$

which enforces gradient magnitude reductions in backpropagation [3]. Note that ReLU RNNs suffer less from this issue. In contrast to a standard RNN, the LSTM model resolves gradient vanishing by using an additional linear state $c_t$ at each step of the sequence, which can be viewed as a memory cell. At each step, a new cell candidate $\tilde{c}_t$ is created to encode the information from the current step. $c_t$ is first updated by interpolating $c_{t-1}$ with $\tilde{c}_t$ based on the forget gate $f_t$ and input gate $i_t$, and then converted to the LSTM hidden state by transforming with the hyperbolic tangent (tanh) and scaling by the output gate $o_t$. This procedure simulates a memory circuit where $f_t$, $i_t$, and $o_t$ are analogous to its logic gates [11]. More specifically, an LSTM layer at step $t$ is evaluated as
$$\begin{aligned}
i_t &= \sigma(W_i x_t + U_i h_{t-1} + V_i c_{t-1} + b_i),\\
f_t &= \sigma(W_f x_t + U_f h_{t-1} + V_f c_{t-1} + b_f),\\
\tilde{c}_t &= \tanh(W_c x_t + U_c h_{t-1} + b_c),\\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t,\\
o_t &= \sigma(W_o x_t + U_o h_{t-1} + V_o c_t + b_o),\\
h_t &= o_t \odot \tanh(c_t),
\end{aligned}$$
where $\odot$ represents the element-wise product, and the $V$ matrices are diagonal, serving as "peepholes". Although LSTMs work very well on a large variety of tasks, they are computationally very expensive. The representation for each temporal step, $\tilde{c}_t$, is extracted in the same way as the RNN $h_t$. However, the additional cost of computing $c_t$ on top of $\tilde{c}_t$ requires three times the computation and parameter storage, since $i_t$, $f_t$, and $o_t$ all need to be calculated.
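As a minimal illustration of these equations, the sketch below implements one peephole LSTM step in NumPy; the parameter dictionary layout and the random initialisation are our own assumptions, not code from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One peephole LSTM step; p holds W*, U*, V* (diagonal) and b*."""
    i = sigmoid(p["Wi"] @ x + p["Ui"] @ h_prev + p["Vi"] * c_prev + p["bi"])
    f = sigmoid(p["Wf"] @ x + p["Uf"] @ h_prev + p["Vf"] * c_prev + p["bf"])
    c_tilde = np.tanh(p["Wc"] @ x + p["Uc"] @ h_prev + p["bc"])
    c = f * c_prev + i * c_tilde       # interpolate old cell and new candidate
    o = sigmoid(p["Wo"] @ x + p["Uo"] @ h_prev + p["Vo"] * c + p["bo"])
    return o * np.tanh(c), c           # hidden state h_t, cell state c_t

D_x, D_h = 4, 5
rng = np.random.default_rng(0)
p = {k: rng.normal(scale=0.1, size=(D_h, D_x)) for k in ("Wi", "Wf", "Wc", "Wo")}
p.update({k: rng.normal(scale=0.1, size=(D_h, D_h)) for k in ("Ui", "Uf", "Uc", "Uo")})
p.update({k: rng.normal(scale=0.1, size=D_h)
          for k in ("Vi", "Vf", "Vo", "bi", "bf", "bc", "bo")})
h, c = lstm_step(rng.normal(size=D_x), np.zeros(D_h), np.zeros(D_h), p)
```

Note how the four W and four U matrices make the gated step roughly four times the size of the plain RNN step in Eqn. (1).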
HIGH ORDER RNN ACOUSTIC MODELS
In this section, HORNNs are proposed by relaxing the first-order Markov conditional independence constraint.
Markov Conditional Independence
The posterior probability of the $T$ frame label sequence $y_{1:T}$ given the $T$ frame input sequence $x_{1:T}$ can be found by integrating over all possible continuous hidden state sequences $\tilde{h}_{1:T}$:

$$P(y_{1:T}|x_{1:T}) = \int P(y_{1:T}|\tilde{h}_{1:T}, x_{1:T})\, p(\tilde{h}_{1:T}|x_{1:T})\, \mathrm{d}\tilde{h}_{1:T} = \int \prod_{t=1}^{T} P(y_t|y_{1:t-1}, \tilde{h}_{1:T}, x_{1:T})\, p(\tilde{h}_t|\tilde{h}_{1:t-1}, x_{1:T})\, \mathrm{d}\tilde{h}_{1:T}.$$

When implemented using an RNN, $P(y_t|y_{1:t-1}, \tilde{h}_{1:T}, x_{1:T}) = P(y_t|\tilde{h}_t)$, which is produced by the layers after the RNN layer. From Eqn. (1), $\tilde{h}_t$ depends only on $\tilde{h}_{t-1}$ and $x_t$, i.e.,

$$p(\tilde{h}_t|\tilde{h}_{1:t-1}, x_{1:T}) = p(\tilde{h}_t|\tilde{h}_{t-1}, x_t). \tag{2}$$

Since the initial hidden state is given (often set to $h_0 = 0$), all subsequent states $h_{1:T}$ are determined by Eqn. (1), which means $p(\tilde{h}_t|\tilde{h}_{t-1}, x_t)$ is a Kronecker delta function

$$p(\tilde{h}_t|\tilde{h}_{t-1}, x_t) = \begin{cases} 1 & \text{if } \tilde{h}_t = h_t \\ 0 & \text{otherwise.} \end{cases}$$

Hence

$$P(y_{1:T}|x_{1:T}) = \prod_{t=1}^{T} P\big(y_t \,\big|\, h_t = f(W x_t + U h_{t-1} + b)\big).$$

Eqn. (2) is the 1st-order Markov conditional independence property [23]. It means that the current state $h_t$ depends only on its immediately preceding state $h_{t-1}$ and the current input $x_t$. This property differs from the 1st-order Markov property by also conditioning on $x_t$ (for language modelling, $h_t$ has the standard Markov property, as the RNN models $P(y_{1:T})$ without conditioning on $x_{1:T}$). Note that this property also applies to bidirectional RNNs [24], which is easy to show by defining a new hidden state $h^{\mathrm{bid}}_t = \{h^{\mathrm{fwd}}_t, h^{\mathrm{bwd}}_t\}$, where $h^{\mathrm{fwd}}_t$ and $h^{\mathrm{bwd}}_t$ are the forward and backward RNN hidden states.
HORNNs for Sigmoid and ReLU Activation Functions
In this paper, the gradient vanishing issue is tackled by relaxing the first-order Markov conditional independence constraint. Hence, not only the directly preceding state $h_{t-1}$ but also previous states $h_{t-n}$ ($n > 1$) are used when calculating $h_t$. This adds additional high order connections to the RNN architecture and results in a HORNN. From a training perspective, including high order states creates shortcuts for backpropagation to allow additional long-term information to flow more easily. Specifically, the gradients w.r.t. $h_{t-1}$ of a general $n$-order RNN can be obtained by

$$\frac{\partial F}{\partial h_{t-1}} = \sum_{i=1}^{n} \frac{\partial F}{\partial h_{t+i-1}} \frac{\partial h_{t+i-1}}{\partial h_{t-1}}, \tag{3}$$

where $F$ is the training criterion. For $n > 1$, Eqn. (3) sums multiple terms, which prevents the gradient vanishing. From an inference (testing) perspective, an RNN assumes that sufficient past temporal information has been embedded in the representation $h_{t-1}$, but using a fixed sized $h_{t-1}$ means that information from distant long-term steps may not be properly integrated with new short-term information. The HORNN architecture allows more direct access to the past long-term information.
There are many alternative ways of using $h_{t-n}$ in the calculation of $h_t$ in the HORNN framework. This paper assumes that the high order connections are linked to the input at step $t$. It was found to be sufficient to use only one high order connection at the input, i.e.

$$h_t = f(W x_t + U_1 h_{t-1} + U_n h_{t-n} + b). \tag{4}$$

Here $h_{t-n}$ can be viewed as a kind of "memory" whose temporal resolution is modified by $U_n$. In our experiments the structure in Eqn. (4) allowed ReLU HORNNs to give similar WERs to LSTMs. However, when using sigmoid HORNNs, a slightly different structure is needed to reach a similar WER. This has an extra high order connection from $h_{t-m}$ to the sigmoid function input, i.e.

$$h_t = f(W x_t + U_1 h_{t-1} + U_n h_{t-n} + h_{t-m} + b). \tag{5}$$

Here, $h_{t-m}$ is directly added to the sigmoid input without impacting the temporal resolution at $t$, since $h_{t-m}$ is a previous sigmoid output. Eqns. (4) and (5) are used for ReLU and sigmoid HORNNs respectively throughout the paper.
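A sketch of the two HORNN recursions in Eqns. (4) and (5), keeping a small buffer of past hidden states; the function names, shapes and initialisation are illustrative assumptions.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hornn_step(x, past, W, U1, Un, b, n, m=None):
    """One HORNN step. past[-1] is h_{t-1}, past[-n] is h_{t-n}.

    With m=None, this is the ReLU form of Eqn. (4); otherwise the extra
    sigmoid connection h_{t-m} of Eqn. (5) is added (requires m <= n).
    """
    a = W @ x + U1 @ past[-1] + Un @ past[-n] + b
    if m is None:
        return relu(a)
    return sigmoid(a + past[-m])

D_x, D_h, n = 4, 5, 4
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(D_h, D_x))
U1, Un = rng.normal(scale=0.1, size=(2, D_h, D_h))
b = np.zeros(D_h)
past = [np.zeros(D_h)] * n          # zero history so early steps are defined
for x in rng.normal(size=(10, D_x)):
    h = hornn_step(x, past, W, U1, Un, b, n)
    past = (past + [h])[-n:]        # keep only the most recent n states
```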
Parameter Control using Matrix Factorisation
Comparing Eqns. (4) and (5) to Eqn. (1), a HORNN increases the number of RNN layer parameters from $(D_x + D_h)D_h + D_h$ to $(D_x + 2D_h)D_h + D_h$, where $D_x$ and $D_h$ are the sizes of $x_t$ and $h_t$. One method to reduce the increase in parameters is to project the hidden state vectors to some dimension $D_p$ with a recurrent linear projection $P$ [22]. This factorises $U_1$ and $U_n$ in Eqns. (4) and (5) to $U_{p1}P$ and $U_{pn}P$ with a low-rank approximation. The projected HORNNs (denoted HORNNP) for ReLU and sigmoid activations are hence defined as
$$h_t = f(W x_t + U_{p1} P h_{t-1} + U_{pn} P h_{t-n} + b) \tag{6}$$
and
$$h_t = f(W x_t + U_{p1} P h_{t-1} + U_{pn} P h_{t-n} + h_{t-m} + b), \tag{7}$$
and the number of parameters used is $D_h D_p + (D_x + 2D_p)D_h + D_h$. The resulting parameter reduction ratio is approximately $2D_h / 3D_p$ (given $D_h > D_p \gg D_x$). Note that the same idea was used by the projected LSTM (LSTMP) to factorise $U_i$, $U_f$, $U_c$, and $U_o$ [22], which reduces the number of LSTM parameters from $4(D_x + D_h)D_h + 7D_h$ to $D_h D_p + 4(D_x + D_p)D_h + 7D_h$.
Next we compare the computational complexity of LSTMs and HORNNs. Given that multiplying an $l \times m$ matrix by an $m \times n$ matrix requires $lmn$ multiply-adds, and ignoring all element-wise operations, the testing complexity of a HORNNP layer is $O(T(D_x + 3D_p)D_h)$, whereas for an LSTMP it is $O(T D_h D_p + 4T(D_x + D_p)D_h)$. This shows that HORNNPs use less than $3/5$ of the calculations of LSTMPs. It has been found that HORNNPs often result in a 50% speed up over LSTMPs in our current HTK implementation [25][26][27].
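These closed-form parameter counts can be checked numerically; the sketch below reproduces the per-layer figures quoted later in Sec. 5 (e.g. 1.16M for the 500d LSTM), assuming the 80d acoustic input described in Sec. 4 (40d log-Mel plus deltas).

```python
def rnn_params(dx, dh):
    return (dx + dh) * dh + dh                      # Eqn. (1)

def hornn_params(dx, dh):
    return (dx + 2 * dh) * dh + dh                  # Eqns. (4)/(5)

def hornnp_params(dx, dh, dp):
    return dh * dp + (dx + 2 * dp) * dh + dh        # Eqns. (6)/(7)

def lstm_params(dx, dh):
    return 4 * (dx + dh) * dh + 7 * dh              # gated LSTM with peepholes

def lstmp_params(dx, dh, dp):
    return dh * dp + 4 * (dx + dp) * dh + 7 * dh    # projected LSTM

dx = 80  # assumed input feature size: 40d log-Mel expanded with deltas
for name, count in [("RNN",    rnn_params(dx, 500)),
                    ("HORNN",  hornn_params(dx, 500)),
                    ("LSTM",   lstm_params(dx, 500)),
                    ("HORNNP", hornnp_params(dx, 500, 250)),
                    ("LSTMP",  lstmp_params(dx, 500, 250))]:
    print(f"{name}: {count / 1e6:.2f}M")
# RNN 0.29M, HORNN 0.54M, LSTM 1.16M, HORNNP 0.42M, LSTMP 0.79M
```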
Related Work
After independently developing the HORNN for acoustic modelling, we found that similar ideas had previously been applied to rather different tasks [28][29][30][31][32]. However, both the research focus and the model architectures were different to this paper. In particular, the model proposed in [28,31] is equivalent to Eqn. (4) without subsampling the high order hidden vectors, and [32] applied that model to TIMIT phone recognition. Furthermore, previous studies did not discuss the high order connections within the Markov property framework.
Adding $h_{t-m}$ to the input of the sigmoid function in Eqn. (5) is similar to the residual connection in residual networks [13]. A residual RNN (ResRNN) with a recurrent kernel depth of two ($d = 2$) can be written as

$$h_t = f(U_{d2}\, f(W x_t + U_{d1} h_{t-1} + b) + h_{t-m}), \tag{8}$$

where $m = 1$ [17]. Another related model is the recent residual memory network [21], which can be viewed as an unfolded HORNN defined in Eqn. (4) with $U_1$ and $b$ being zero, $W$ being distinct untied parameters in each unfolded layer, and $n \geq 1$ being any positive integer. In addition, since highway networks [14] can be viewed as a generalised form of residual networks, highway RNNs and LSTMs are also related to this work [15,19]. Note that it is also possible to combine the residual and highway ideas with HORNNs by increasing the recurrent depth.
EXPERIMENTAL SETUP
The proposed HORNN models were evaluated by training systems on multi-genre broadcast (MGB) data from the MGB3 speech recognition challenge task [33,34]. The audio is from BBC TV programmes covering a range of genres. A 275 hour (275h) full training set was selected from 750 episodes where the sub-titles have a phone matched error rate < 40% compared to the lightly supervised output [35] which was used as training supervision. A 55 hour (55h) subset was sampled at the utterance level from the 275h set. A 63k word vocabulary [36] was used with a trigram word level language model (LM) estimated from both the acoustic transcripts and a separate 640 million word MGB subtitle archive. The test set, dev17b, contains 5.55 hours of audio data and 5,201 manually segmented utterances from 14 episodes of 13 shows. This is a subset of the official full development set (dev17a) with data that overlaps training and test sets excluded. System outputs were evaluated with confusion network decoding (CN) [37] as well as 1-best Viterbi decoding. All experiments were conducted with an extended version of HTK 3.5 [25,26]. The LSTM was implemented following [22]. A 40d log-Mel filter bank analysis was used and expanded to an 80d vector with its ∆ coefficients. The data was normalised at the utterance level for mean and at the show-segment level for variance [38].
The inputs at each recurrent model time step were single frames delayed for 5 steps [22,39]. All models were trained using the cross-entropy criterion and frame-level shuffling was used. All recurrent models were unfolded for 20 time steps, and the gradients of the shared parameters were normalised by dividing by the sharing counts [26]. The maximum parameter changes were constrained by update value clipping with a threshold of 0.32 for a minibatch with 800 samples.
About 6k/9k decision tree clustered triphone tied-states along with GMM-HMM/DNN-HMM system training alignments were used for the 55h/275h training sets. One hidden layer with the same dimension as $h_t$ was added between the recurrent and output layers of all models. The NewBob+ learning rate scheduler [26,27] was used to train all models with the setup from our previous MGB systems [38]. An initial learning rate of $5 \times 10^{-4}$ was used for all ReLU models, while an initial rate of $2 \times 10^{-3}$ was used to train all the other models. Since regularisation plays an important role in RNN/LSTM training, weight decay factors were carefully tuned to maximise the performance of each system.
EXPERIMENTAL RESULTS
55 Hour Single Layer HORNN Experiments
Initial experiments studied various HORNN architectures in order to investigate suitable values of $n$ for the ReLU model in Eqn. (4), and of both $m$ and $n$ for the sigmoid model in Eqn. (5). To save computation, the 55h subset was used for training. All models had one recurrent layer with the $h_t$ size fixed to 500. An LSTM and a standard RNN were created as baselines, which had 1.16M and 0.29M parameters in the recurrent layers respectively. A ResRNN, defined by Eqn. (8), was also tested as an additional baseline using both ReLU and sigmoid functions (this is also the first time such ResRNNs have been applied to acoustic modelling). The ResRNNs had the same number of parameters (0.54M) as the HORNNs. Note that rather than only the standard case with $m = 1$ [17], $m \in [1, 4]$ was examined, which falls into the high order framework when $m > 1$. For HORNNs, $n \in [2, 6]$ was tested; $m$ was fixed to 1 for all sigmoid HORNNs. From the results shown in Figure 1, the LSTM gives lower WERs than a standard RNN, but the ReLU ResRNN with $m$ set to 1 or 2 had a similar WER to the LSTM.
ReLU HORNNs gave WERs at least as low as the LSTM and the best ReLU ResRNN systems. Sigmoid HORNNs gave better WERs than sigmoid ResRNNs and similar WERs to those from the LSTM. The performance can be further improved by using the p-sigmoid [40] as the HORNN activation function, which associates a linear scaling factor with each recurrent layer output unit and makes it more similar to a ReLU. In addition, HORNNs were faster than both the LSTM and the ResRNNs. ResRNNs were slightly slower than HORNNs since the second matrix multiplication depends on the first one at each recurrent step. For the rest of the experiments, all ReLU HORNNs used $n = 4$, and all sigmoid HORNNs used $m = 1$ and $n = 2$.
Projected and Multi-Layered HORNN Results
Next, projected LSTMs and projected HORNNs were compared. First, $D_h$ (the size of $h_t$) and $D_p$ (the projected vector size) were fixed to 500 and 250 respectively for the single recurrent layer (1L) LSTMP and HORNNP models. The LSTMP baseline L^55h_1 had 0.79M parameters and the HORNNP systems S^55h_1 and R^55h_1 had 0.42M parameters. From Table 1, the HORNNPs have similar WERs to the LSTMP. By further reducing $D_p$ to 125, the HORNNP systems S^55h_2 and R^55h_2 reduced the number of parameters to 0.23M and gave similar WERs to the LSTM and LSTMP (L^55h_1) with only 20% and 29% of the recurrent layer parameters.
The values of $D_h$ and $D_p$ for the HORNNs were then increased to 800 and 400 respectively, to make the overall number of recurrent layer parameters (1.02M) closer to that of the 500d LSTM (1.16M). This produced systems S^55h_3 and R^55h_3. The LSTMP was also modified to $D_h = 600$ and $D_p = 300$ to have 1.10M parameters. From the results in Table 1, S^55h_3 and R^55h_3 both outperformed L^55h_2 by a margin, since the 800d representations embed more accurate temporal information than the 600d ones. The p-sigmoid function was not used for HORNNPs since the linear projection layer also scales $h_t$.
Finally, the LSTMP and HORNNP were compared by stacking another recurrent layer. With two recurrent layers (2L) of $D_h = 500$ and $D_p = 250$, the 2L HORNNP systems S^55h_4 and R^55h_4 had 0.92M parameters and still produced similar WERs to the 2L LSTMP system L^55h_3 (with 1.91M parameters). These results indicate that rather than spending most of the computation on maintaining the LSTM memory cell, it is more effective to use HORNNs and spend the computational budget on extracting better temporal representations using wider and deeper recurrent layers.
Experiments on 275 Hour Data Set
To ensure that the previous results scale to a significantly larger training set, some selected LSTMP and HORNNP systems were built on the full 275h set. Here $D_h$ and $D_p$ were set to 1000 and 500, which increased the number of recurrent layer parameters to better model the full training set. From Table 2, for both single recurrent layer and two recurrent layer architectures, HORNNs still produced similar WERs to the corresponding LSTMPs. This validates, on a larger data set, our previous finding that the proposed HORNN structures can work as well as the widely used LSTMs for acoustic modelling while using far fewer parameters. In addition, along with the multi-layered structure, HORNNs can also be applied to other kinds of recurrent models by replacing RNNs and LSTMs, such as the bidirectional [24] and grid [39,41,42] structures etc. Finally, a 7 layer (7L) sigmoid DNN system, D^275h_1, was built following [38] as a reference.

Table 2. %WERs for a selection of 275h systems on dev17b. Systems use a trigram LM with Viterbi decoding (tg) or CN decoding (cn).

ID       | System            | D_h  | D_p | tg   | cn
---------|-------------------|------|-----|------|-----
L^275h_1 | 1L LSTMP          | 1000 | 500 | 26.5 | 26.0
S^275h_1 | 1L sigmoid HORNNP | 1000 | 500 | 26.4 | 25.8
R^275h_1 | 1L ReLU HORNNP    | 1000 | 500 | 26.4 | 25.9
L^275h_3 | 2L LSTMP          | 1000 | 500 | 25.7 | 25.2
S^275h_4 | 2L sigmoid HORNNP | 1000 | 500 | 25.6 | 25.2
R^275h_4 | 2L ReLU HORNNP    | 1000 | 500 | 25.3 | 25.0
D^275h_1 | 7L sigmoid DNN    | 1000 |  -  | 28.4 | 27.5
CONCLUSIONS
This paper proposed the use of HORNNs for acoustic modelling to address the vanishing gradient problem in training recurrent neural networks. Two different architectures were proposed to cover both ReLU and sigmoid activation functions. These yielded 4%-6% WER reductions over the standard RNNs with the same activation function. Furthermore, additional structures were investigated: reducing the number of HORNN parameters with a linear recurrent projection layer; and adding another recurrent layer. In all cases, compared to the projected LSTMs and the residual RNNs, it was shown that HORNNs gave similar WER performance while being significantly more efficient in computation and storage. When the savings in parameter numbers and computation are used to implement wider or deeper recurrent layers, (projected) HORNNs gave a 4% relative reduction in WER over the comparable (projected) LSTMs.
Fig. 1. %WERs of 55h systems on dev17b. Systems use a trigram LM with Viterbi decoding (tg) or CN decoding (cn). The left panel shows the baseline system %WERs (LSTM, ReLU and sigmoid RNNs, and ReLU and sigmoid ResRNNs with m = 1 to 4); the right panel shows the HORNN system %WERs (ReLU HORNNs with n = 2 to 6, and sigmoid and p-sigmoid HORNNs with m = 1 and n = 2 to 6).
Table 1. %WERs for various 55h systems on dev17b. Systems use a trigram LM with Viterbi decoding (tg) or CN decoding (cn).

ID      | System            | D_h | D_p | tg   | cn
--------|-------------------|-----|-----|------|-----
L^55h_1 | 1L LSTMP          | 500 | 250 | 32.9 | 32.1
L^55h_2 | 1L LSTMP          | 600 | 300 | 32.7 | 32.0
L^55h_3 | 2L LSTMP          | 500 | 250 | 31.3 | 30.6
S^55h_1 | 1L sigmoid HORNNP | 500 | 250 | 32.8 | 31.9
S^55h_2 | 1L sigmoid HORNNP | 500 | 125 | 33.0 | 32.1
S^55h_3 | 1L sigmoid HORNNP | 800 | 400 | 31.6 | 30.9
S^55h_4 | 2L sigmoid HORNNP | 500 | 250 | 31.4 | 30.7
R^55h_1 | 1L ReLU HORNNP    | 500 | 250 | 32.0 | 31.4
R^55h_2 | 1L ReLU HORNNP    | 500 | 125 | 32.5 | 31.8
R^55h_3 | 1L ReLU HORNNP    | 800 | 400 | 31.4 | 30.7
R^55h_4 | 2L ReLU HORNNP    | 500 | 250 | 31.4 | 30.7
REFERENCES

[1] D.E. Rumelhart, J.L. McClelland, & the PDP Research Group, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations, MIT Press, 1986.
[2] J.L. Elman, "Finding structure in time," Cognitive Science, vol. 14, pp. 179-211, 1990.
[3] T. Robinson, M. Hochberg, & S. Renals, "The use of recurrent neural networks in continuous speech recognition," in Automatic Speech and Speaker Recognition, pp. 233-258, Springer, 1996.
[4] T. Mikolov, Statistical Language Models based on Neural Networks, Ph.D. thesis, Brno University of Technology, Brno, Czech Republic, 2012.
[5] Y. Bengio, P. Simard, & P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Transactions on Neural Networks, vol. 5, pp. 157-166, 1994.
[6] R. Pascanu, T. Mikolov, & Y. Bengio, "On the difficulty of training recurrent neural networks," Proc. ICML, Atlanta, 2013.
[7] I. Sutskever, J. Martens, & G. Hinton, "Generating text with recurrent neural networks," Proc. ICML, New York, 2011.
[8] E. Salinas & L.F. Abbott, "A model of multiplicative neural responses in parietal cortex," Proc. National Academy of Sciences U.S.A., vol. 93, pp. 11956-11961, 1996.
[9] R.L.T. Hahnloser, "On the piecewise analysis of networks of linear threshold neurons," Neural Networks, vol. 11, pp. 691-697, 1998.
[10] S.L. Goh & D.P. Mandic, "Recurrent neural networks with trainable amplitude of activation functions," Neural Networks, vol. 16, pp. 1095-1100, 2003.
[11] S. Hochreiter & J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, pp. 1735-1780, 1997.
[12] J. Chung, C. Gulcehre, K.H. Cho, & Y. Bengio, "Empirical evaluation of gated recurrent neural networks on sequence modeling," arXiv.org, 1412.3555, 2014.
[13] K. He, X. Zhang, S. Ren, & J. Sun, "Deep residual learning for image recognition," Proc. CVPR, Las Vegas, 2016.
[14] R.K. Srivastava, K. Greff, & J. Schmidhuber, "Highway networks," arXiv.org, 1505.00387, 2015.
[15] J.G. Zilly, R.K. Srivastava, J. Koutník, & J. Schmidhuber, "Recurrent highway networks," arXiv.org, 1607.03474, 2016.
[16] Y. Zhang, G. Chen, D. Yu, K. Yao, S. Khudanpur, & J. Glass, "Highway long short-term memory RNNs for distant speech recognition," Proc. ICASSP, Shanghai, 2016.
[17] Y. Wang & F. Tian, "Recurrent residual learning for sequence classification," Proc. EMNLP, Austin, 2016.
[18] A. van den Oord, N. Kalchbrenner, & K. Kavukcuoglu, "Pixel recurrent neural networks," Proc. ICML, New York, 2016.
[19] G. Pundak & T.N. Sainath, "Highway-LSTM and recurrent highway networks for speech recognition," Proc. Interspeech, Stockholm, 2017.
[20] J. Kim, M. El-Khamy, & J. Lee, "Residual LSTM: Design of a deep recurrent architecture for distant speech recognition," Proc. Interspeech, Stockholm, 2017.
[21] M.K. Baskar, M. Karafiát, L. Burget, K. Veselý, F. Grézl, & J.H. Černocký, "Residual memory networks: Feed-forward approach to learn long-term temporal dependencies," Proc. ICASSP, New Orleans, 2017.
[22] H. Sak, A. Senior, & F. Beaufays, "Long short-term memory recurrent neural network architectures for large scale acoustic modeling," Proc. Interspeech, Singapore, 2014.
[23] Y. Bengio & P. Frasconi, "Credit assignment through time: Alternatives to backpropagation," Advances in NIPS 6, 1993.
[24] M. Schuster & K.K. Paliwal, "Bidirectional recurrent neural networks," IEEE Transactions on Signal Processing, vol. 45, pp. 2673-2681, 1997.
[25] S. Young, G. Evermann, M. Gales, T. Hain, D. Kershaw, X. Liu, G. Moore, J. Odell, D. Ollason, D. Povey, A. Ragni, V. Valtchev, P. Woodland, & C. Zhang, The HTK Book (for HTK version 3.5), Cambridge University Engineering Department, 2015.
[26] C. Zhang & P.C. Woodland, "A general artificial neural network extension for HTK," Proc. Interspeech, Dresden, 2015.
[27] C. Zhang, Joint Training Methods for Tandem and Hybrid Speech Recognition Systems using Deep Neural Networks, Ph.D. thesis, University of Cambridge, Cambridge, UK, 2017.
[28] T. Lin, B.G. Horne, P. Tiňo, & C. Lee Giles, "Learning long-term dependencies in NARX recurrent neural networks," IEEE Transactions on Neural Networks, vol. 7, pp. 1329-1338, 1996.
[29] P. Tiňo, M. Čerňanský, & L. Beňušková, "Markovian architectural bias of recurrent neural networks," IEEE Transactions on Neural Networks, vol. 15, pp. 6-15, 2004.
[30] I. Sutskever & G. Hinton, "Temporal-kernel recurrent neural networks," Neural Networks, vol. 23, pp. 239-243, 2010.
[31] R. Soltani & H. Jiang, "Higher order recurrent neural networks," arXiv.org, 1605.00064, 2016.
[32] H. Huang & B. Mak, "To improve the robustness of LSTM-RNN acoustic models using higher-order feedback from multiple histories," Proc. Interspeech, Stockholm, 2017.
[33] P. Bell, M.J.F. Gales, T. Hain, J. Kilgour, P. Lanchantin, X. Liu, A. McParland, S. Renals, O. Saz, M. Wester, & P.C. Woodland, "The MGB challenge: Evaluating multi-genre broadcast media transcription," Proc. ASRU, Scottsdale, 2015.
[35] P. Lanchantin, M.J.F. Gales, P. Karanasou, X. Liu, Y. Qian, L. Wang, P.C. Woodland, & C. Zhang, "Selection of Multi-Genre Broadcast data for the training of automatic speech recognition systems," Proc. Interspeech, San Francisco, 2016.
[36] K. Richmond, R. Clark, & S. Fitt, "On generating Combilex pronunciations via morphological analysis," Proc. Interspeech, Makuhari, 2010.
[37] G. Evermann & P. Woodland, "Large vocabulary decoding and confidence estimation using word posterior probabilities," Proc. ICASSP, Istanbul, 2000.
[38] P.C. Woodland, X. Liu, Y. Qian, C. Zhang, M.J.F. Gales, P. Karanasou, P. Lanchantin, & L. Wang, "Cambridge University transcription systems for the Multi-Genre Broadcast challenge," Proc. ASRU, Scottsdale, 2015.
[39] B. Li & T.N. Sainath, "Reducing the computational complexity of two-dimensional LSTMs," Proc. Interspeech, Stockholm, 2017.
[40] C. Zhang & P.C. Woodland, "Parameterised sigmoid and ReLU hidden activation functions for DNN acoustic modelling," Proc. Interspeech, Dresden, 2015.
[41] N. Kalchbrenner, I. Danihelka, & A. Graves, "Grid long short-term memory," Proc. ICLR, San Juan, 2016.
[42] F.L. Kreyssig, C. Zhang, & P.C. Woodland, "Improved TDNNs using deep kernels and frequency dependent Grid-RNNs," Proc. ICASSP, Calgary, 2018.
| [] |
[
"Quote Erat Demonstrandum: A Web Interface for Exploring the Quotebank Corpus",
"Quote Erat Demonstrandum: A Web Interface for Exploring the Quotebank Corpus"
] | [
"EPFLVuk Vuković vuk.vukovic@epfl.ch \nUniversity of Konstanz\n\n",
"Akhil Arora akhil.arora@epfl.ch \nUniversity of Konstanz\n\n",
"Epfl \nUniversity of Konstanz\n\n",
"EPFLHuan-Cheng Chang huan-cheng.chang@epfl.ch \nUniversity of Konstanz\n\n",
"Andreas Spitz andreas.spitz@uni-konstanz.de \nUniversity of Konstanz\n\n",
"Robert West robert.west@epfl.ch \nUniversity of Konstanz\n\n"
] | [
"University of Konstanz\n",
"University of Konstanz\n",
"University of Konstanz\n",
"University of Konstanz\n",
"University of Konstanz\n",
"University of Konstanz\n"
] | [] | The use of attributed quotes is the most direct and least filtered pathway of information propagation in news. Consequently, quotes play a central role in the conception, reception, and analysis of news stories. Since quotes provide a more direct window into a speaker's mind than regular reporting, they are a valuable resource for journalists and researchers alike. While substantial research efforts have been devoted to methods for the automated extraction of quotes from news and their attribution to speakers, few comprehensive corpora of attributed quotes from contemporary sources are available to the public. Here, we present an adaptive web interface for searching Quotebank, a massive collection of quotes from the news, which we make available at https://quotebank.dlab.tools.CCS CONCEPTS• Information systems → Web interfaces; Web searching and information discovery; Web mining; Users and interactive retrieval. | 10.1145/3477495.3531696 | [
"https://arxiv.org/pdf/2207.03592v1.pdf"
] | 250,340,316 | 2207.03592 | f3cddf31e869d3acd11443751597598d4464663c |
Quote Erat Demonstrandum: A Web Interface for Exploring the Quotebank Corpus
Vuk Vuković vuk.vukovic@epfl.ch
EPFL
University of Konstanz
Akhil Arora akhil.arora@epfl.ch
University of Konstanz
Epfl
University of Konstanz
Huan-Cheng Chang huan-cheng.chang@epfl.ch
EPFL
University of Konstanz
Andreas Spitz andreas.spitz@uni-konstanz.de
University of Konstanz
Robert West robert.west@epfl.ch
University of Konstanz
Quote Erat Demonstrandum: A Web Interface for Exploring the Quotebank Corpus
KEYWORDS
Quote, Quote corpus, News, Search, Adaptive Interface
The use of attributed quotes is the most direct and least filtered pathway of information propagation in news. Consequently, quotes play a central role in the conception, reception, and analysis of news stories. Since quotes provide a more direct window into a speaker's mind than regular reporting, they are a valuable resource for journalists and researchers alike. While substantial research efforts have been devoted to methods for the automated extraction of quotes from news and their attribution to speakers, few comprehensive corpora of attributed quotes from contemporary sources are available to the public. Here, we present an adaptive web interface for searching Quotebank, a massive collection of quotes from the news, which we make available at https://quotebank.dlab.tools.CCS CONCEPTS• Information systems → Web interfaces; Web searching and information discovery; Web mining; Users and interactive retrieval.
INTRODUCTION
Quotes of sources, politicians, athletes, or scientists play an important role in lending credibility to news articles [3]. As news stories evolve, quotes are useful data for analyzing the spread of information through the news [10], for determining the source of news information [17], or in fact checking and credibility assessment [2,16]. Outside of journalistic applications, extracted quotes from the news are also valuable in social studies, for example through opinion mining from quotes [1]. The substance of quotes lies not just in what is being said, but by whom the quote is uttered, since the attribution to a speaker provides context to the words. As a result, the automated extraction and attribution of quotes from document corpora has been the subject of ongoing research over the years [8,11,14,15]. However, while methods for the extraction and attribution of quotes are developed continuously and benefit from recent advances in natural language processing, few of the resulting resources are available to end-users. To address this need, we report on the development of a user interface for searching Quotebank [18], a massive corpus of quotes that we extracted from a decade of English news.
Contributions. We provide an interface for (faceted) search in Quotebank, a Web-scale database of quotes from a decade of English news articles. Our tool is accessible as a website that is geared towards end-users who would otherwise be unable to use this resource, and it provides near-realtime query performance that enables an interactive exploration of the Quotebank corpus.
RELATED WORK
Prior work that is related to Quotebank is quite sparse and can be grouped into two categories: corpora of quotes and system demonstrations that make use of or recommend quotes.
Quote corpora. Available corpora include the PolNeAR corpus [9], the Penn Attribution Relation Corpus [13], the Speech, Thought, and Writing Presentation corpus [5], the Rich Quotation Annotations corpus [12], and the DirectQuote dataset [20]. In contrast to Quotebank, all of the above corpora are available only as NLP resources, not as searchable repositories that can be used by laypeople. Furthermore, they are substantially smaller in comparison to Quotebank, which exceeds the number of quotes contained in these corpora by several orders of magnitude.
System demonstrations. To the best of our knowledge, there are no other search interfaces for large-scale corpora of quotes from the news domain. Repositories of quotes that can be accessed and searched by laypeople remain limited to manually curated repositories of mostly historical quotes, such as WikiQuote. 1 System demonstrations in the information retrieval community tend to only consider quotes indirectly and often in the wider contexts of credibility assessment, such as CredEye [16], or fact checking for news such as BeLink [2] and the work of Miranda et al. [7].
The likely most closely related approach to ours is by MacLaughlin et al., who propose a system for quote recommendation based on context by using a BERT architecture [6]. However, the employed QUOTUS data set [10] is relatively small and the tool is neither available nor suitable for the exploration of a Web-scale news corpus that would be useful to an end-user.

Figure 1: Schematic overview of the Quotebank system architecture. During pre-processing, speaker candidates and quotes are annotated in the news article corpus, and quotes are attributed to speakers with Quobert [18]. The attributed quote data is augmented by linking speakers to Wikidata and subsequently loaded into Elasticsearch, which is used for indexing and retrieval. Kibana and Logstash are used internally for monitoring and managing indices. The web server exposes the query API to handle user queries and directly interfaces with the cache to reduce the impact of complex queries on system load. The JavaScript UI translates user queries to requests to the web server (for details on the UI, see Figure 2).
QUOTE CORPUS
Quotebank is a dataset of 235 million unique, speaker-attributed quotes that were extracted from 196 million English news articles (127 million of them containing quotes) published between September 2008 and April 2020 [18]. The data was extracted with the BERT-based architecture Quobert, which utilizes Quootstrap [15] for the extraction of training data. We make two stages of the data available in our search interface: article-level and quote-level.
Article-level data. In the article-level version of the data, articles are the central unit. The data contains individual quote annotations for all articles in the news data set, each attributed with a ranked list of the most likely speaker candidates in the article (or no speaker if no suitable candidate could be attributed by Quobert). Additionally, the data contains context windows surrounding each of the quote mentions in the news article text.
Quote-level data. The quote-level data contains an aggregated view of the quotes across all articles. To generate this stage of the data, all individual occurrences of quotes are first canonicalized and quotes with matching canonical form are then aggregated into a single data point. Speaker candidates for these quotes are merged by weighted consensus, i.e., by summing over the local candidate probabilities in all individual occurrences to derive the most likely global speaker for a given quote.
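As a concrete illustration of the weighted-consensus merge, consider the following minimal Python sketch; it is not Quobert's actual code, and the record layout and function name are hypothetical.

from collections import defaultdict

def merge_speaker_candidates(occurrences):
    # `occurrences` is a list of dicts, one per mention of the (canonicalized)
    # quote, each mapping a speaker candidate to its local probability.
    scores = defaultdict(float)
    for candidate_probs in occurrences:
        for speaker, prob in candidate_probs.items():
            scores[speaker] += prob  # weighted consensus: sum local probabilities
    # The globally most likely speaker is the top-ranked candidate.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: three mentions of the same canonical quote.
ranked = merge_speaker_candidates([
    {"Q76": 0.7, "Q6279": 0.3},
    {"Q76": 0.6, "Q6279": 0.4},
    {"Q76": 0.8, "Q6279": 0.2},
])
print(ranked[0])  # ('Q76', 2.1): the consensus speaker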
Speaker data. To improve the querying capability of Quotebank and support faceted search, we further enrich the quote corpus with speaker information extracted from Wikidata [19], which is one of the largest publicly available knowledge graphs (about 97M entities). Specifically, we extract and add data concerning the occupation, nationality, and gender of the speakers in Quotebank.
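One plausible way to pull these attributes is via the Wikidata Query Service (WDQS); the SPARQL sketch below uses the standard property IDs P106 (occupation), P27 (country of citizenship), and P21 (sex or gender). It illustrates the enrichment step but is not the pipeline actually used for Quotebank.

import requests

WDQS = "https://query.wikidata.org/sparql"

def speaker_attributes(qid):
    # Fetch occupation, nationality, and gender labels for one speaker QID.
    query = f"""
    SELECT ?occupationLabel ?nationalityLabel ?genderLabel WHERE {{
      OPTIONAL {{ wd:{qid} wdt:P106 ?occupation. }}
      OPTIONAL {{ wd:{qid} wdt:P27  ?nationality. }}
      OPTIONAL {{ wd:{qid} wdt:P21  ?gender. }}
      SERVICE wikibase:label {{ bd:serviceParam wikibase:language "en". }}
    }}"""
    r = requests.get(WDQS, params={"query": query, "format": "json"})
    r.raise_for_status()
    return r.json()["results"]["bindings"]

print(speaker_attributes("Q80"))  # attributes for Tim Berners-Lee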
A full JSON dump of the raw data prior to the addition of speaker data from Wikidata is also available for download. 2 For details on the generation, we refer the reader to Vaucher et al. [18].
THE QUOTEBANK SYSTEM
The complete architecture of the Quotebank system is shown in Figure 1. It consists of three core components: (1) a database system housing the core storage and querying capabilities; (2) a web server providing a layer of abstraction and an API on top of the database system; and (3) a user interface responsible for delivering the results to the user through an interactive and flexible visualization. In the following, we provide a description of these components and highlight their key functionality.
Database system
The Quotebank database is built using Elasticsearch [4], which is one of the most popular distributed, scalable, and open-source search and analytics engines supporting full-text indexing and querying, thereby serving our primary goal of efficiently searching through terabytes of quotes and news content.
Our database consists of three indices: (1) article, (2) quote, and (3) speaker, which store and index the article-level, quote-level, and speaker data, respectively. Naïvely indexing the Quotebank corpus using Elasticsearch would require more than 2TB of disk space, while the storage footprint of our optimized database is 4 times smaller and consumes only about 500GB. We employ the following fundamental tried-and-true design decisions and optimizations to reduce the storage footprint and improve the query efficiency.
• Database normalization. We follow standard principles such as removing data redundancy, unless redundancy entails significant query speed-ups.
• Choice of data types. We use data types with minimal storage requirements, such as the integer type for fields supporting range queries (e.g., number of occurrences of a quote), or keyword type for fields used for creating filters (e.g., speaker nationality).
• Querying-indexing trade-off. We push the complexity to the indexing phase by aggregating all text type fields into a single field, which results in faster querying (at the cost of slower indexing) in comparison to searching multiple fields at query time.
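To make the data-type choices concrete, a mapping along these lines could be created with the official Elasticsearch Python client (8.x-style keyword arguments; older clients take a single body argument instead). The field names are hypothetical stand-ins, not Quotebank's real schema.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

es.indices.create(
    index="quote",
    mappings={
        "properties": {
            # all text fields aggregated into one at indexing time,
            # trading slower indexing for faster querying
            "all_text": {"type": "text"},
            # integer for range filters, e.g. minimum number of occurrences
            "numOccurrences": {"type": "integer"},
            # keyword for exact-match facets such as speaker nationality
            "speakerNationality": {"type": "keyword"},
            "date": {"type": "date"},
        }
    },
)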
The database system also houses Kibana and Logstash, which are primarily used for internal monitoring purposes. While Kibana is used for monitoring the status of indices and analyzing database performance statistics, Logstash handles storage management of the logs produced by Elasticsearch.
Web server
The web server provides a layer of abstraction on top of the database system and exposes the API endpoints that enable the communication between the user-interface and the database. It is responsible for composing, validating, and submitting user queries to the database as well as parsing and returning the retrieved result from the database to the user. In a nutshell, the web server prevents the users from communicating directly and in an unrestrained manner with the database, thereby providing an added level of security.
User interface
The user interface (cf. Figure 2) is an adaptive web-based search platform built with React.js, 3 which allows users to interactively explore the Quotebank corpus. It consists of two main views: (1) the search panel, and (2) the search engine result page.
Search panel. Figure 2a portrays the interface that is exposed to the users for querying the Quotebank corpus. In the following, we describe the key features of the search panel.
Corpus type. While the system supports queries on the quote-level data (called quotation-centric in Figure 2a) by default, it also provides the user with the option to query the article-level data.
Query type. The most straight-forward way to search the Quotebank corpus is via text query, which supports fuzzy matching by default. However, the user may also choose to utilize exact matching of the query text by enabling the checkbox option 'Enable exact match'. For the article-level data, which supports only text queries, users are able to query both the quotes and the context in which they occur in the original articles ('Search with context' checkbox). To enable search faceting, the interface supports a multitude of search filters for querying quote-level data (cf. Figure 2a), such as speaker name or nationality, minimum number of quote occurrences, etc., thereby providing additional refinements over and above the text query. By default, the time window is set to the full date range of the Quotebank corpus, corresponding to September 1, 2008 and April 17, 2020, but the user may adapt this window by using the From and To fields, respectively. All other filter-related fields are not initialized with default values.
Auto-complete. To assist users in applying search filters and increase search precision, auto-complete is implemented for each speaker-related field. Similar to most auto-complete implementations, users are only required to input the initial characters of a speaker's name in the corresponding field and may choose from the provided suggestions. The auto-complete function also provides a short description of the speaker (extracted from Wikidata), which is displayed alongside each suggestion to help users disambiguate and choose from the list of potential suggestions.
Search engine result page (SERP). Figure 2b portrays the SERP, which consists of three primary components: a summary section, a histogram of quote occurrences, and the main result blocks.
Summary section. This section displays basic information about the retrieved result, such as the number of quotes or articles that were deemed relevant to the query, and the query time in seconds. To reduce the load on the client, the total number of results returned from the web server is capped at 1000, corresponding to the 1000 most-relevant results based on their content matching score with the user query. Additionally, this section provides a 'Share' button, which generates a shareable permalink to the user's query and copies it to the clipboard. A 'Save results' button enables the user to download the results either as a JSON or text file.
Histogram. The histogram, implemented in D3.js, 4 displays the distribution of quotes or articles that match the user query in the specified time window. The histogram bin boundaries are automatically adjusted to improve its aesthetics while simultaneously keeping the resource utilization on the client side low. The main purpose of the histogram is to allow the user to effectively visualize the trend portrayed by results relevant to her query. While the returned results are capped at 1000, note that the histogram is based on all quotes or articles that match the user's query. Thus, the counts displayed in the histogram may exceed the number of returned documents.
Result block. The returned results are displayed in blocks, where each block contains either a single relevant quote or a relevant article that contains at least one matching quote. The font and layout of the result block is designed to simulate a real quote as one might encounter it in a newspaper. The URLs in each block are clickable links to the news articles in which the quote was originally published. For quote blocks, all possible speakers of a quote are annotated with their unique Wikidata identifiers, which are also clickable links pointing to the Wikidata page of the speaker. Long result blocks are collapsed to a fixed size, allowing the user to effectively browse the result summary and only expand specific blocks if desired. In the same vein, we also implement pagination by breaking down the result blocks into pages of at most 10 blocks each. The buttons Prev and Next can be used to navigate the pages. Lastly, we also enable the user to re-rank the returned results based on several pre-defined sorting criteria.
DEMONSTRATION
As part of the demonstration, we encourage the reader to engage with three visual scenarios crafted to highlight key features of the interface, namely the exploration of (1) quote-level and (2) articlelevel data, and (3) a free-roam exploration on a device of the user's choice (e.g., tablet, phone, or laptop). These scenarios were designed with support from an EPFL journalist, who also helped in improving our system's usability.
Quote-level exploration
In this scenario, the users may explore the Quotebank corpus to identify all the quotes from Donald Trump containing the text "great again" and that appeared in at least 500 distinct articles. They are further encouraged to re-rank the results in reverse chronological order and share the results with their colleagues by sending the permalink via e-mail or messenger service.
User interaction. This scenario provides a demonstration of the 'Quotation-centric' (cf. Figure 2a) search panel. In addition to simply posing the text query "great again", the user has the option of applying multiple filters. Specifically, the user should set the 'Minimum number of occurrences' to 500, and leverage the auto-complete functionality to set the input for 'Name of speaker' to "Donald Trump". Lastly, the user may select the pre-defined filter 'Date (descending)' to re-rank the results in reverse chronological order. A permalink of the query is obtained by clicking on the 'Share' button on the results page. The retrieved results for this scenario can be accessed at this quote-level permalink.
Article-level exploration
In this scenario, the user is encouraged to explore the Quotebank corpus to identify all articles published on May 19, 2018 that contain the text "gdpr" in either the quotes or in their surrounding context. The user should also download the results as a text file and view its contents in a text editor.
User interaction. This scenario provides a demonstration of the 'Article-centric' (cf. Figure 2a) search panel. In addition to posing the text query "gdpr", the user may vary the time window by restricting it to a particular day, i.e., May 19, 2018. Moreover, the user analyzes the differences in the retrieved results by toggling the 'Search with context' checkbox. Lastly, to facilitate later exploration and analyses, the user may download the results corresponding to her query as a text file by clicking on the 'Save results' button on the results page. The retrieved results for this scenario can also be accessed at this article-level permalink.
Free-roam exploration
While we provide a few example use-cases (described below) to bootstrap the exploration in this scenario, the reader is encouraged to conduct a free-roam exploration of Quotebank.
• UC 1: Using Quotebank on their phone, users search quotes from female tennis players of Switzerland that appear in at least 100 different articles and share the results via messenger.
• UC 2: Using Quotebank on their tablet, users search quotes related to "science" by female journalists that appear in at least 100 different articles and share the results via e-mail.
• UC 3: Using 'Enable exact match' to search for a quote that the user remembers. For instance, the famous quote "You have to dream before your dreams can come true", to recall the name of the speaker or identify its popularity over the past decade.
CONCLUSION
In this paper, we introduced a search interface that makes the Quotebank dataset more accessible to end users who lack the computational background to work directly with the raw data. To support faceted search based on speaker attributes, we further enriched the quote corpus with information from Wikidata. Our intention is to enable the general public to explore the Quotebank data, and we are looking forward to seeing the findings and insights that are gained in the exploration of the data by journalists, social scientists, and laypeople.
Future work. In the future, we plan on analyzing the search logs and investigating users' search patterns to determine exploration strategies and use-cases that can aid us in further refining and augmenting the Quotebank corpus. We are also working on disambiguation techniques for speaker candidates during quote attribution and are preparing a stand-alone end-to-end pipeline for attributed quote extraction.
Figure 2: The Quotebank user interface. (a) The search panel supports quote-level and article-level searches, which can be faceted by speaker attributes. (b) The search engine result page (SERP) displays retrieved quotes in the selected time window alongside speaker candidates and the URLs of articles from which the quotes are sourced. A histogram shows the distribution of quotes in the result over time.
https://www.wikiquote.org/
https://doi.org/10.5281/zenodo.4277311
https://reactjs.org/
https://d3js.org
Acknowledgments. We would like to thank Tanya Petersen for testing the interface and sharing her expert-user insights. We are also grateful to the members of EPFL DLAB and the students enrolled in the 2021 edition of the Applied Data Analysis (ADA) course for their feedback. This project was partly funded by the Swiss National Science Foundation (grant 200021_185043), the European Union (TAILOR, grant 952215), and the Microsoft Swiss Joint Research Center. We also gratefully acknowledge generous gifts from Facebook and Google.
REFERENCES
[1] Alexandra Balahur, Ralf Steinberger, Erik Van der Goot, Bruno Pouliquen, and Mijail A. Kabadjov. 2009. Opinion Mining on Newspaper Quotations. In WI-IAT'09 Workshops. https://doi.org/10.1109/WI-IAT.2009.340
[2] Tien Duc Cao, Ludivine Duroyon, François Goasdoué, Ioana Manolescu, and Xavier Tannier. 2019. BeLink: Querying Networks of Facts, Statements and Beliefs. In CIKM'19. https://doi.org/10.1145/3357384.3357851
[3] Megan Duncan, Kathleen Bartzen Culver, Douglas McLeod, and Christopher Kremmer. 2019. Don't Quote Me: Effects of Named, Quoted, and Partisan News Sources. Journalism Practice 13, 9 (2019). https://doi.org/10.1080/17512786.2019.1588148
[4] Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide (1st ed.). O'Reilly Media, Inc.
[5] Donald E. Hardy. 2007. Corpus Stylistics: Speech, Writing, and Thought Presentation in a Corpus of English Writing. Lit. Linguistic Comput. 22, 4 (2007). https://doi.org/10.1093/llc/fqm030
[6] Ansel MacLaughlin, Tao Chen, Burcu Karagol Ayan, and Dan Roth. 2021. Context-Based Quotation Recommendation. In ICWSM'21. https://ojs.aaai.org/index.php/ICWSM/article/view/18070
[7] Sebastião Miranda, David Nogueira, Afonso Mendes, Andreas Vlachos, Andrew Secker, Rebecca Garrett, Jeff Mitchell, and Zita Marinho. 2019. Automated Fact Checking in the News Room. In WWW'19 Companion. https://doi.org/10.1145/3308558.3314135
[8] Grace Muzny, Michael Fang, Angel X. Chang, and Dan Jurafsky. 2017. A Two-stage Sieve Approach for Quote Attribution. In EACL'17. https://doi.org/10.18653/v1/e17-1044
[9] Edward Newell, Drew Margolin, and Derek Ruths. 2018. An Attribution Relations Corpus for Political News. In LREC'18. http://www.lrec-conf.org/proceedings/lrec2018/pdf/1051.pdf
[10] Vlad Niculae, Caroline Suen, Justine Zhang, Cristian Danescu-Niculescu-Mizil, and Jure Leskovec. 2015. QUOTUS: The Structure of Political Media Coverage as Revealed by Quoting Patterns. In WWW'15. https://doi.org/10.1145/2736277.2741688
[11] Sean Papay and Sebastian Padó. 2019. Quotation Detection and Classification with a Corpus-Agnostic Model. In RANLP'19. https://doi.org/10.26615/978-954-452-056-4_103
[12] Sean Papay and Sebastian Padó. 2020. RiQuA: A Corpus of Rich Quotation Annotation for English Literary Text. In LREC'20. https://aclanthology.org/2020.lrec-1.104/
[13] Silvia Pareti. 2015. Attribution: A Computational Approach. Ph.D. Dissertation. University of Edinburgh, UK. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.679448
[14] Silvia Pareti, Timothy O'Keefe, Ioannis Konstas, James R. Curran, and Irena Koprinska. 2013. Automatically Detecting and Attributing Indirect Quotations. In EMNLP'13. https://aclanthology.org/D13-1101/
[15] Dario Pavllo, Tiziano Piccardi, and Robert West. 2018. Quootstrap: Scalable Unsupervised Extraction of Quotation-Speaker Pairs from Large News Corpora via Bootstrapping. In ICWSM'18. https://aaai.org/ocs/index.php/ICWSM/ICWSM18/paper/view/17827
[16] Kashyap Popat, Subhabrata Mukherjee, Jannik Strötgen, and Gerhard Weikum. 2018. CredEye: A Credibility Lens for Analyzing and Explaining Misinformation. In WWW'18 Companion. https://doi.org/10.1145/3184558.3186967
[17] Alexander Spangher, Nanyun Peng, Jonathan May, and Emilio Ferrara. 2021. "Don't quote me on that": Finding Mixtures of Sources in News Articles. CoRR abs/2104.09656 (2021). https://arxiv.org/abs/2104.09656
[18] Timoté Vaucher, Andreas Spitz, Michele Catasta, and Robert West. 2021. Quotebank: A Corpus of Quotations from a Decade of News. In WSDM'21. https://doi.org/10.1145/3437963.3441760
[19] Denny Vrandecic and Markus Krötzsch. 2014. Wikidata: A Free Collaborative Knowledgebase. Commun. ACM 57, 10 (2014). https://doi.org/10.1145/2629489
[20] Yuanchi Zhang and Yang Liu. 2021. DirectQuote: A Dataset for Direct Quotation Extraction and Attribution in News Articles. CoRR abs/2110.07827 (2021). https://arxiv.org/abs/2110.07827
| [] |
[
"Wembedder: Wikidata entity embedding web service",
"Wembedder: Wikidata entity embedding web service"
] | [
"Årup Finn \nCognitive Systems\nDTU Compute Technical University of Denmark Kongens Lyngby\nDenmark\n",
"Nielsen \nCognitive Systems\nDTU Compute Technical University of Denmark Kongens Lyngby\nDenmark\n"
] | [
"Cognitive Systems\nDTU Compute Technical University of Denmark Kongens Lyngby\nDenmark",
"Cognitive Systems\nDTU Compute Technical University of Denmark Kongens Lyngby\nDenmark"
] | [] | I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim's Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600'000 Wikidata items and properties. | 10.5281/zenodo.1009127 | [
"https://arxiv.org/pdf/1710.04099v1.pdf"
] | 28,308,521 | 1710.04099 | 6e4f9515292f06861b8a85456ca0822d719b1f94 |
Wembedder: Wikidata entity embedding web service
Finn Årup Nielsen
Cognitive Systems
DTU Compute, Technical University of Denmark, Kongens Lyngby
Denmark
Wembedder: Wikidata entity embedding web service
WikidataembeddingRDFweb service
I present a web service for querying an embedding of entities in the Wikidata knowledge graph. The embedding is trained on the Wikidata dump using Gensim's Word2Vec implementation and a simple graph walk. A REST API is implemented. Together with the Wikidata API the web service exposes a multilingual resource for over 600'000 Wikidata items and properties.
INTRODUCTION
The Word2Vec model [7] spawned an interest in dense word representation in a low-dimensional space, and there are now a considerable number of "2vec" models beyond the word level. 1 One recent avenue of research in the "2vec" domain uses knowledge graphs [13]. Such systems can take advantage of the large knowledge graphs, e.g., DBpedia or Freebase, for graph embedding. Graph embedding in the simplest case would map individual nodes of the network to a continuous low-dimensional space, while embedding with knowledge graphs would typically handle the typed links between knowledge items/nodes. Wikidata https://www.wikidata.org/ [16] is a relatively new knowledge graph resource. It is run by the Wikimedia Foundation that is also behind Wikipedia, thus Wikidata can be regarded as a sister site to Wikipedia. While Wikipedia has been extensively used as a data and text mining resource [6], Wikidata has so far seen less use in machine learning contexts. There are several advantages with Wikidata. Wikidata is not tied to a single language, but can include labels for hundreds of languages for each item in the knowledge graph. As such, an embedding that works from Wikidata items is in principle multilingual (however, there is no guarantee that the item label for a specific language is set). Another advantage with Wikidata is that each item can provide extra contextual data from the Wikidata statements. Search in Wikidata is enabled by string-based search engines in the Wikidata API as well as the SPARQL-based Wikidata Query Service (WDQS). General functions for finding related items or generating fixed-length features for machine learning are to my knowledge not available.

1 https://github.com/MaxwellRebo/awesome-2vec
There is some research that combines machine learning and Wikidata [9,14], e.g., Mousselly-Sergieh and Gurevych have presented a method for aligning Wikidata items with FrameNet based on Wikidata labels and aliases [9].
Property suggestion is running live in the Wikidata editing interface, where it helps editors recall appropriate properties for items during manual editing, and as such a form of recommender system. Researchers have investigated various methods for this process [17].
Scholia at https://tools.wmflabs.org/scholia/ is our web service that presents scholarly profiles based on data in Wikidata extracted with WDQS [11]. Counting co-occurrence patterns with SPARQL queries, Scholia can list related items based on a query item. For instance, Scholia lists related diseases based on overlapping associated genes. 2 Other than these count-and SPARQL-based methods Scholia has limited means to show related items to a Scholia user.
Several research groups provide word embedding web services: GPL-licensed WebVectors uses Flask and Gensim [5,4], and instances for English and Russian run at http://rusvectores.org/ and for English and Norwegian at http://vectors.nlpl.eu/explore/embeddings/. A Turku BioNLP group provides a Flask-based word embedding web service at http://bionlp-www.utu.fi/wv_demo/ based on the English Google News and Finnish Corpora. A web service for handling multilingual word embeddings has also been announced [1]. Wembedder is distinguished from these services by using the Wikidata entities (items and properties) as the "words" in the embedding (rather than natural language words) and by using the live Wikidata web service to provide multilingual labels for the entities.
WEMBEDDER
2.1 Model setup
The Wikimedia Foundation provides the Wikidata RDF dumps for download at https://dumps.wikimedia.org/wikidatawiki/entities/. For the setup of the initial model, I downloaded the so-called truthy dumps available in Notation3 format. The specific file was the 5.2 GB large compressed file wikidata-20170613-truthy-BETA.nt.bz2. The truthy dumps only have a limited set of all the triples in Wikidata: those that are associated with the wdt prefix. From this dump, I extracted the triples where both subject and object were Wikidata items, i.e., leaving out triples where the object is a value such as an identifier, a date, a name, etc. The generated file contains 88'941'173 lines, each with a triple. The http://www.wikidata.org/entity/ and http://www.wikidata.org/prop/direct/ prefixes were stripped, so the first few lines of the generated file have the following content in a format similar to Magnus Manske's QuickStatements format:
Q22 P1546 Q2016568
Q22 P610 Q104674
Q22 P1151 Q8143311
Q22 P31 Q3336843
Q22 P36 Q23436
Q22 P47 Q21
...
Each line can be regarded as a very simple graph walk consisting of a single step from one Wikidata item through a typed property to the next Wikidata item. These triple data I now regard as a sentence of three "words" which can be treated by standard word embedding implementations. I use the Word2Vec model in the Gensim program [18]. The initial model trained used the CBOW training algorithm, an embedding dimension of 100, a window of 1 and a minimum count of 20, i.e., any "word" must appear 20 times or more to be included in the model. The rest of the parameters in the Word2Vec model were kept at the Gensim defaults. With this setup, the model ends up with a vocabulary of 609'471. This number includes 738 properties and 608'733 Wikidata items. Gensim can store its model parameters in files with a combined size of 518 megabytes. A permanent version of the model parameters is available in Zenodo under DOI 10.5281/zenodo.823195.
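Under those stated hyperparameters, the training step reduces to a few lines of Gensim. This is a sketch rather than the exact script; note that older Gensim releases name the dimensionality parameter size rather than vector_size, and the input filename is hypothetical.

from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

# Each line of the triple file ("Q22 P1546 Q2016568") is treated as a
# three-"word" sentence; LineSentence tokenizes on whitespace.
corpus = LineSentence("wikidata_triples.txt")

model = Word2Vec(
    corpus,
    sg=0,             # CBOW training algorithm
    vector_size=100,  # embedding dimension ("size" in older Gensim)
    window=1,
    min_count=20,     # "words" must occur at least 20 times
)
model.save("wembedder.model")
print(model.wv.most_similar("Q80"))  # nearest entities to Tim Berners-Lee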
Web service
The web service was set up with the Python Flask web framework [3] with the Apache-licensed code available at a GitHub repository: https://github.com/fnielsen/wembedder. Figure 1 shows the interface. A version of Wembedder runs from https://tools.wmflabs.org/wembedder/, i.e., from the cloud service provided by the Wikimedia Foundation.
The Wembedder web service relies on the Wikidata API at https://www.wikidata.org/w/api.php and its wbsearchentities action for searching for items in multiple languages in an implementation based on the search facility on the Wikidata homepage. Labels for searching and labels for the results are generated via ajax calls to the Wikidata API.
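For reference, a wbsearchentities lookup of the kind issued by the interface can be reproduced directly against the public MediaWiki API; this small sketch assumes nothing beyond that endpoint.

import requests

def search_entities(term, language="en"):
    # Search Wikidata items by label/alias, as the Wembedder search box does.
    params = {
        "action": "wbsearchentities",
        "search": term,
        "language": language,
        "format": "json",
    }
    r = requests.get("https://www.wikidata.org/w/api.php", params=params)
    r.raise_for_status()
    return [(hit["id"], hit.get("label", "")) for hit in r.json()["search"]]

print(search_entities("Chile"))  # e.g. [('Q298', 'Chile'), ...]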
A REST API is implemented as part of Wembedder and returns JSON-formatted results, e.g., /api/most-similar/Q80 will return the most similar entities for a query on Tim Berners-Lee (Q80), see also Figure 2. Similarity computations are implemented with a URL such as /api/similarity/Q2/Q313. Here the Earth and Venus are compared. The human interface of Wembedder uses the REST API in an Ajax fashion, returning an HTML page with an empty result list and with JavaScript for the actual fetching of the results.
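Calling the REST API from a script is then straightforward. The sketch below targets the public instance; the exact shape of the returned JSON (a list of entity/similarity pairs) is assumed from Figure 2 and may differ in detail.

import requests

BASE = "https://tools.wmflabs.org/wembedder/api"

most_similar = requests.get(BASE + "/most-similar/Q80").json()
similarity = requests.get(BASE + "/similarity/Q2/Q313").json()

print(most_similar)  # entities related to Tim Berners-Lee (Q80)
print(similarity)    # similarity between Earth (Q2) and Venus (Q313)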
EVALUATION
The embedding in the current version of Wembedder is fairly simple compared to state-of-the-art embeddings, which use complex/holographic knowledge graph embeddings [10] or multiple knowledge graphs and pre-trained corpus-based resources for building the embedding [15]. One should not expect Wembedder to perform at the state-of-the-art level, 3 and a comparison with the Wordsim-353 dataset for semantic relatedness evaluation [2] shows poor performance with Pearson and Spearman correlations of just 0.13.
When used to evaluate the Wikidata graph embedding, a matching is needed between English Wordsim-353 words and the Wikidata items. It is not straightforward as there usually is a semantic difference between the words and the items. It is often possible to find the word as the English label in a Wikidata item, but for instance, for the Wordsim-353 word "Japanese" one must decide whether it should be linked to Japanese as a language (Q5287), Japanese as a people (Q161652), another item (e.g., the disambiguation page, Q346080) or an average or sum over the items. I attempted to match the words with items, but left several unmatched so only 278 of the word pairs of the 353 were possible in the analysis. The correlations were computed from these 278 word pairs. A skipgram trained model yielded even lower performance with correlations of just 0.11 and 0.10 for Pearson and Spearman correlations, respectively. A CBOW model trained with a higher number of iterations (DOI 10.5281/zenodo.827339) performed somewhat better with correlations of 0.21.
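The reported correlations can be computed with SciPy once Wordsim-353 pairs are matched to QIDs; the matching itself is the delicate step discussed above and is assumed to be given here.

from scipy.stats import pearsonr, spearmanr

def evaluate(model, matched_pairs):
    # matched_pairs: list of (qid1, qid2, human_score) for the 278 matched pairs.
    human, predicted = [], []
    for q1, q2, score in matched_pairs:
        human.append(score)
        predicted.append(model.wv.similarity(q1, q2))  # cosine similarity
    return pearsonr(human, predicted)[0], spearmanr(human, predicted)[0]

# pearson, spearman = evaluate(model, matched_pairs)  # roughly 0.13 for CBOW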
DISCUSSION AND FUTURE WORK
Wembedder, with its 100-dimensional Gensim model query, will usually be able to return results in around one second, while the API call is considerably faster. This means that it could be used for interactive "related items" search. The SPARQL-based related-items queries in Scholia usually take several seconds.
Wikidata in its current state is mostly an encyclopedic source with little lexical information. The state-of-the-art relational model ConceptNet is set up from both encyclopedic and lexical knowledge graphs as well as corpus-based embeddings [15]. Embeddings based on Wikidata could presumably perform better by using the link to Wikipedia, with the different language versions of Wikipedia acting as corpora. There exist several works describing joint models of words and entities from knowledge bases/graphs; see, e.g., [8] and references therein. There is work underway to enable Wikidata to represent lexical information [12]. A Wikidata-based embedding may benefit from such data.
ACKNOWLEDGMENT
The Danish Innovation Foundation supported this work through Danish Center for Big Data Analytics driven Innovation (DABAI).
Figure 1: Wembedder's output after a query on Q298 (the country Chile) with the interface set to Swedish.
Figure 2: JSON output from Wembedder's REST API with a query on Q2 (Earth) rendered in a web browser. The first entity in the result list is Q313 (Venus).
See, e.g., the page for schizophrenia at https://tools. wmflabs.org/scholia/disease/Q41112.
An overview of the state-of-the-art performance in the semantic relatedness task, including for the Wordsim-353 task, is available at https://aclweb.org/aclwiki/index.php?title=Similarity_(State_of_the_art)

REFERENCES
[1] W. Ammar, G. Mulcaire, Y. Tsvetkov, G. Lample, C. Dyer, and N. A. Smith. Massively Multilingual Word Embeddings.
[2] L. Finkelstein, E. Gabrilovich, Y. Matias, E. Rivlin, Z. Solan, G. Wolfman, and E. Ruppin. Placing search in context: the concept revisited. ACM Transactions on Information Systems 20 (January 2002), 116-131.
[3] M. Grinberg. Flask Web Development.
[4] A. Kutuzov and E. Kuzmenko. Building Web-Interfaces for Vector Semantic Models with the WebVectors Toolkit. Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics (April 2017), 99-103.
[5] A. Kutuzov and E. Kuzmenko. WebVectors: A Toolkit for Building Web Interfaces for Vector Semantic Models. Analysis of Images, Social Networks and Texts: 5th International Conference, AIST 2016 (December 2017), 155-161.
[6] M. Mehdi, C. Okoli, M. Mesgari, F. Å. Nielsen, and A. Lanamäki. Excavating the mother lode of human-generated text: A systematic review of research that uses the Wikipedia corpus. Information Processing & Management 53 (March 2017), 505-529.
[7] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient Estimation of Word Representations in Vector Space.
[8] J. G. Moreno, R. Besançon, R. Beaumont, E. D'hondt, A.-L. Ligozat, S. Rosset, X. Tannier, and B. Grau. Combining Word and Entity Embeddings for Entity Linking. The Semantic Web (May 2017), 337-352.
[9] H. Mousselly-Sergieh and I. Gurevych. Enriching Wikidata with Frame Semantics. Proceedings of the 5th Workshop on Automated Knowledge Base Construction (June 2016), 29-34.
[10] M. Nickel, L. Rosasco, and T. Poggio. Holographic Embeddings of Knowledge Graphs.
[11] F. Å. Nielsen, D. Mietchen, and E. Willighagen. Scholia and scientometrics with Wikidata.
[12] L. Pintscher. Let's move forward with support for Wiktionary. Wikidata mailing list (September 2016).
[13] P. Ristoski and H. Paulheim. RDF2Vec: RDF Graph Embeddings for Data Mining. The Semantic Web - ISWC 2016 (December 2016), 498-514.
[14] D. Sorokin and I. Gurevych. End-to-end Representation Learning for Question Answering with Weak Supervision.
[15] R. Speer, J. Chin, and C. Havasi. ConceptNet 5.5: An Open Multilingual Graph of General Knowledge. Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (December 2016), 4444-4451.
[16] D. Vrandečić and M. Krötzsch. Wikidata: a free collaborative knowledgebase. Communications of the ACM 57 (October 2014), 78-85.
[17] E. Zangerle, W. Gassler, M. Pichl, S. Steinhauser, and G. Specht. An Empirical Evaluation of Property Recommender Systems for Wikidata and Collaborative Knowledge Bases. Proceedings of the 12th International Symposium on Open Collaboration (December 2016).
[18] R. Řehůřek and P. Sojka. Software framework for topic modelling with large corpora. New Challenges For NLP Frameworks Programme (May 2010), 45-50.
"https://github.com/MaxwellRebo/awesome-2vec",
"https://github.com/fnielsen/wembedder."
] |
[
"NOISE ROBUST IOA/CAS SPEECH SEPARATION AND RECOGNITION SYSTEM FOR THE THIRD 'CHIME' CHALLENGE",
"NOISE ROBUST IOA/CAS SPEECH SEPARATION AND RECOGNITION SYSTEM FOR THE THIRD 'CHIME' CHALLENGE",
"NOISE ROBUST IOA/CAS SPEECH SEPARATION AND RECOGNITION SYSTEM FOR THE THIRD 'CHIME' CHALLENGE",
"NOISE ROBUST IOA/CAS SPEECH SEPARATION AND RECOGNITION SYSTEM FOR THE THIRD 'CHIME' CHALLENGE"
] | [
"Xiaofei Wang \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Chao Wu \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Pengyuan Zhang \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Ziteng Wang \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Yong Liu \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Xu Li \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Qiang Fu \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Yonghong Yan \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Xiaofei Wang \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Chao Wu \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Pengyuan Zhang \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Ziteng Wang \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Yong Liu \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Xu Li \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Qiang Fu \nInstitute of Acoustics\nChinese Academy of Sciences\n\n",
"Yonghong Yan \nInstitute of Acoustics\nChinese Academy of Sciences\n\n"
] | [
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n",
"Institute of Acoustics\nChinese Academy of Sciences\n"
] | [] | This paper presents the contribution to the third 'CHiME' speech separation and recognition challenge including both front-end signal processing and back-end speech recognition. In the front-end, a Multi-channel Wiener filter (MWF) is designed to achieve background noise reduction. Different from the traditional MWF, an optimized parameter for the tradeoff between noise reduction and target signal distortion is derived according to the desired noise reduction level. In the back-end, several techniques are leveraged to improve noisy Automatic Speech Recognition (ASR) performance, including Deep Neural Network (DNN), Convolutional Neural Network (CNN) and Long short-term memory (LSTM) acoustic models using a medium vocabulary, lattice rescoring with a big-vocabulary language model finite state transducer, and a ROVER scheme. Experimental results show that the proposed system combining the front-end and back-end is effective in improving ASR performance. | null | [
"https://arxiv.org/pdf/1509.06103v1.pdf"
] | 4,067,389 | 1509.06103 | 5863048400e841efbc8f2c19891ba4d02710cece |
NOISE ROBUST IOA/CAS SPEECH SEPARATION AND RECOGNITION SYSTEM FOR THE THIRD 'CHIME' CHALLENGE
Xiaofei Wang
Institute of Acoustics
Chinese Academy of Sciences
Chao Wu
Institute of Acoustics
Chinese Academy of Sciences
Pengyuan Zhang
Institute of Acoustics
Chinese Academy of Sciences
Ziteng Wang
Institute of Acoustics
Chinese Academy of Sciences
Yong Liu
Institute of Acoustics
Chinese Academy of Sciences
Xu Li
Institute of Acoustics
Chinese Academy of Sciences
Qiang Fu
Institute of Acoustics
Chinese Academy of Sciences
Yonghong Yan
Institute of Acoustics
Chinese Academy of Sciences
NOISE ROBUST IOA/CAS SPEECH SEPARATION AND RECOGNITION SYSTEM FOR THE THIRD 'CHIME' CHALLENGE
Index Terms: CHiME challenge, Multi-channel Wiener filter, Deep Neural Network, Noise Robust, Automatic Speech Recognition
This paper presents the contribution to the third 'CHiME' speech separation and recognition challenge including both front-end signal processing and back-end speech recognition. In the front-end, a Multi-channel Wiener filter (MWF) is designed to achieve background noise reduction. Different from the traditional MWF, an optimized parameter for the tradeoff between noise reduction and target signal distortion is derived according to the desired noise reduction level. In the back-end, several techniques are leveraged to improve noisy Automatic Speech Recognition (ASR) performance, including Deep Neural Network (DNN), Convolutional Neural Network (CNN) and Long short-term memory (LSTM) acoustic models using a medium vocabulary, lattice rescoring with a big-vocabulary language model finite state transducer, and a ROVER scheme. Experimental results show that the proposed system combining the front-end and back-end is effective in improving ASR performance.
INTRODUCTION
Automatic Speech Recognition (ASR) has been applied to many human-computer interaction systems, such as tablet computers, smartphones, personal computers and televisions. Meanwhile, robust ASR in noisy environments has received growing attention due to its practical value. The 3rd 'CHiME' speech separation and recognition challenge is such a platform for testing the recognition rate of noisy speech in complex environments [1]. Our contributions to CHiME are separated into two parts: front-end techniques and back-end techniques.
Many front-end techniques aim at extracting the clean desired speech signal. Among them, multi-channel systems have proved effective at improving front-end performance in noisy and reverberant environments, and they attract attention because they offer a better balance between noise reduction and speech distortion. More noise reduction does not necessarily yield cleaner desired speech: the distortion introduced by processing artifacts can severely hurt ASR performance. Therefore, taking speech distortion into account in the multi-channel optimization criterion, the multi-channel Wiener filter (MWF) has been proposed to estimate the desired speech component in noisy environments [2]. The technique is generalized as the speech distortion weighted MWF (SDW-MWF), in which the tradeoff between noise reduction and speech distortion is made explicit. In principle, it is desirable to apply less noise reduction in speech-dominant segments and more noise reduction otherwise. Motivated by this, we improve the SDW-MWF by optimizing the tradeoff parameter so as to control the desired noise reduction level.
Recently, acoustic modeling based on Deep Neural Networks (DNNs) has gained popularity thanks to consistent improvements in recognition performance over earlier neural-network-based front-ends (e.g. [3]). DNNs are either deployed as the front-end for standard Hidden Markov Models based on Gaussian Mixture Models (HMM-GMMs), or in a hybrid form to directly estimate state-level posteriors. As noted in several publications [4, 5, 6, 7], DNNs deliver word error rate (WER) improvements on the order of 10-30% relative across a variety of small and large vocabulary tasks when compared with HMM-GMMs built on classic features. A DNN is a conventional Multi-Layer Perceptron (MLP) with many internal or hidden layers. Convolutional Neural Networks (CNNs) are an alternative type of neural network that can reduce spectral variation and model the spectral correlations present in speech signals; CNNs have been shown to be a more effective model for speech than DNNs [8]. Long Short-Term Memory (LSTM) networks are a specific recurrent neural network (RNN) architecture designed to model temporal sequences and their long-range dependencies more accurately than conventional RNNs, and they have also proved more effective than DNNs and conventional RNNs for acoustic modeling [9, 10]. In this paper, we use all of these techniques for acoustic modeling and combine them to achieve better ASR performance [11].
This paper is organized as follows. Sections 2 and 3 describe the front-end and back-end of the proposed system. Section 4 presents the ASR experiments and analyzes the results. Section 5 concludes.
SPEECH ENHANCEMENT FRONT-END
In order to suppress background noise, the multi-channel Wiener filter (MWF) is applied to the multi-microphone set-up [2]. Since the MWF does not require the transfer functions between the target speaker and the microphones, it is well suited to the CHiME-3 task. Taking speech distortion into account in its optimization criterion, the MWF is generalized as the speech distortion weighted multi-channel Wiener filter (SDW-MWF), which provides a tradeoff between speech distortion and noise reduction [12, 13, 14]. In this work, a method that optimizes the SDW-MWF tradeoff parameter is used.
Consider an array of M microphones, and let $Y_m(k,l)$, $m = 1, \dots, M$, denote the short-time Fourier transform (STFT) of the m-th microphone signal at frequency index k and frame index l. The received signals are given as

$Y_m(k,l) = S(k,l)\,G_m(k,l) + N_m(k,l) = X_m(k,l) + N_m(k,l)$   (1)

where $S(k,l)$, $G_m(k,l)$, $X_m(k,l)$ and $N_m(k,l)$ are, respectively, the STFT-domain expressions of the source signal $s(t)$, the transfer function from the source to the m-th microphone $g_m(t)$, the target signal $x_m(t)$ and the noise signal $n_m(t)$ at microphone m.
To find an optimal estimate of the target signal, the SDW-MWF criterion is [13, 15]

$\mathbf{w}_{\mathrm{SDW\text{-}MWF}} = \arg\min_{\mathbf{w}} E\{\, |\mathbf{w}^H \mathbf{y} - X_1|^2 + \mu\, |\mathbf{w}^H \mathbf{n}|^2 \,\}$   (2)

where $X_1$ is the target signal at the first microphone, $\mathbf{y}(k,l) = [Y_1(k,l), \dots, Y_M(k,l)]^T$ is the received signal vector, and $\mathbf{x}(k,l)$, $\mathbf{n}(k,l)$ and $\mathbf{g}(k,l)$ are defined analogously. The linear filter is $\mathbf{w}(k,l) = [W_1(k,l), \dots, W_M(k,l)]^T$, and the operators $(\cdot)^T$ and $(\cdot)^H$ denote transposition and Hermitian transposition, respectively. A larger value of $\mu$ puts more emphasis on noise reduction. The variables k and l are omitted below for simplicity. The solution to the SDW-MWF is
$\mathbf{w}_{\mathrm{SDW\text{-}MWF}} = [\Phi_{xx} + \mu\,\Phi_{nn}]^{-1}\, \Phi_{xx}\, \mathbf{u}_1$   (3)

where $\mathbf{u}_1 = [1, 0, \dots, 0]^T$ is an M-dimensional selector vector corresponding to the first microphone (channel 1 of the 6-microphone array), and $\Phi_{xx}$ and $\Phi_{nn}$ are the correlation matrices of the clean speech signal and the noise signal, respectively. With a fixed parameter $\mu$, a lower residual noise level is generally achieved at the expense of increased speech distortion. In our work, we instead compute the parameter according to the desired noise reduction level:
$\mu = \min(s,\; s/\mathrm{SNR}_i)$   (4)

where $\mathrm{SNR}_i$ denotes the input signal-to-noise ratio (SNR) at the first microphone, and $s = \phi_{n_1 n_1} / \phi_0$ is a noise reduction control factor, with $\phi_{n_1 n_1}$ the noise power at the first microphone and $\phi_0$ the desired residual noise level. When the background noise level is relatively high, or the input SNR is relatively low, the optimized parameter emphasizes noise reduction more strongly, which is the desired behavior. In this work, the noise power and the noise covariance matrix for each frequency bin are computed from the initial and final 10 frames of each utterance.

The DNN baseline provides state-of-the-art ASR performance. It is based on the Kaldi recipe for Track 2 of the 2nd CHiME Challenge [16]. The DNN is trained using the standard procedure (pre-training with restricted Boltzmann machines, cross-entropy training, and sequence-discriminative training). This baseline requires relatively large computational resources (GPUs for DNN training and many CPUs for lattice generation).
BACK-END DESCRIPTION
Acoustic modeling with neural networks
We start DNN training from the scripts of the baseline system. We use 7 hidden layers with 2048 nodes per hidden layer. The features for DNN training are 40-dimensional filterbanks with their delta and delta-delta features. A context window of 11 frames (5+1+5) is used, so the dimension of the DNN input layer is 40 * 3 * 11. Cepstral Mean and Variance Normalization (CMVN) is applied and proves useful. The DNN output layer size is the same as for the GMM-HMM, namely 2024. The DNN is trained using the same standard procedure as the baseline system.
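As an illustration of the input pipeline, the following sketch shows the 11-frame context splicing that yields the 40 * 3 * 11-dimensional input described above; the function name and the edge-padding choice are our own, not taken from the Kaldi recipe.

```python
import numpy as np

def splice_frames(feats, left=5, right=5):
    """Stack each frame with its +/-5 neighbours (an 11-frame window).

    feats : (T, D) array of filterbank + delta + delta-delta features,
            with D = 40 * 3; returns a (T, D * 11) array.
    """
    T = feats.shape[0]
    padded = np.pad(feats, ((left, right), (0, 0)), mode="edge")
    windows = [padded[i:i + T] for i in range(left + right + 1)]
    return np.concatenate(windows, axis=1)
```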
The CNN uses fbank+pitch features and contains two convolutional hidden layers and a max-pooling layer. The input feature vector (not including pitch) is divided into 40 bands; the corresponding dimensions of the 11 consecutive feature frames are arranged in each band, together with their derivatives, so the CNN input dimension is 43 * 3 * 11. The first set of convolutional filters spans 8 consecutive bands and generates 128 feature maps. We then apply max-pooling across 3 bands, yielding 11 bands. The second set of convolutional filters spans 4 consecutive bands and generates 256 feature maps. Four fully connected hidden layers of 1024 nodes follow the convolutional layers. The total number of parameters of the CNN is 7.7M.
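The layer sizes above can be translated into a rough PyTorch sketch as follows. Treating the 11 context frames and 3 feature streams as input channels, and the choice of nonlinearities, are our assumptions; only the band, filter and feature-map counts come from the description above.

```python
import torch
import torch.nn as nn

class CnnAcousticModel(nn.Module):
    """Sketch of the described CNN: two convolutions over the frequency
    axis, one max-pooling layer, four 1024-unit fully connected layers."""

    def __init__(self, n_bands=40, in_ch=3 * 11, n_targets=2024):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, 128, kernel_size=8)  # 8 bands -> 128 maps
        self.pool = nn.MaxPool1d(3)                        # pool across 3 bands
        self.conv2 = nn.Conv1d(128, 256, kernel_size=4)    # 4 bands -> 256 maps
        n_flat = (((n_bands - 8 + 1) // 3) - 4 + 1) * 256  # 8 positions * 256
        self.fc = nn.Sequential(
            nn.Linear(n_flat, 1024), nn.Sigmoid(),
            nn.Linear(1024, 1024), nn.Sigmoid(),
            nn.Linear(1024, 1024), nn.Sigmoid(),
            nn.Linear(1024, 1024), nn.Sigmoid(),
            nn.Linear(1024, n_targets),                    # tied HMM states
        )

    def forward(self, x):            # x: (batch, 3 * 11, n_bands)
        h = torch.relu(self.conv1(x))
        h = self.pool(h)
        h = torch.relu(self.conv2(h))
        return self.fc(h.flatten(1))
```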
Fig. 1. Back-end description
The LSTM network used in this paper is a two-layer LSTM RNN, where each LSTM layer has 1024 memory cells and a dimensionality-reducing recurrent projection layer of 200 linear units [9, 10].
In our experiments, we use the official trigram language model (LM) in the initial decoding pass and a 5-gram LM for lattice rescoring in a second pass. The official trigram LM has a 5k-word vocabulary. The 5-gram LM is trained on the official training data only, but its vocabulary grows to 12k words.
Combination of different systems
To combine the multiple speech recognition outputs into a single one, we employ ROVER at the decision level [11] in the final step. The fusion enables us to achieve a lower error rate than any of the individual systems alone. In this paper, the NIST scoring toolkit (SCTK, version 1.3) is used as the ROVER tool to combine the different results. It takes N input files and performs an N-way dynamic programming (DP) alignment on those files; the output is a voted result selected by maximum confidence score.
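For intuition, here is a toy confidence-based voting step in Python. The real SCTK rover first performs the N-way DP alignment; we simply assume it has already produced slot-aligned hypotheses, so this is a simplified sketch rather than the actual tool.

```python
from collections import defaultdict

def rover_vote(aligned_hyps):
    """Pick, per slot, the word with the highest summed confidence.

    aligned_hyps : list of hypotheses, each a list of (word, confidence)
                   pairs already aligned to common slots ('' = deletion).
    """
    n_slots = len(aligned_hyps[0])
    output = []
    for i in range(n_slots):
        scores = defaultdict(float)
        for hyp in aligned_hyps:
            word, conf = hyp[i]
            scores[word] += conf
        best = max(scores, key=scores.get)
        if best:                      # skip empty (deletion) winners
            output.append(best)
    return output

# e.g. three systems voting on two slots:
# rover_vote([[("the", .9), ("cat", .8)],
#             [("the", .7), ("cap", .6)],
#             [("a", .5), ("cat", .9)]])  ->  ["the", "cat"]
```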
EXPERIMENTS AND RESULTS
The experiments all follow the instructions of the CHiME challenge. In this section, we report the ASR improvement step by step for each technique used, resulting in the final WER on the test set provided by the CHiME challenge. Table 1 gives the GMM and DNN baselines provided by 'CHiME', Table 2 shows the ASR results of the proposed system, and Table 3 shows the ASR results of the best system after ROVER under each scenario: bus (BUS), cafe (CAF), pedestrian area (PED), and street junction (STR).
ASR performance of front-end speech enhancement
As mentioned above, front-end speech enhancement benefits ASR performance. Table 2 shows that the WER on real test data decreases from 37.36% to 23.19% when changing the speech enhancement method from MVDR (supplied by the CHiME organizers [17]) to the proposed SDW-MWF under the GMM acoustic model. If we randomize the SNR of the simulated training data between -6 dB and 6 dB (denoted Random SNR in Table 2) instead of using the SNR estimated from real recordings, the WER decreases further to 22.07%. Under the DNN+sMBR acoustic model, the WER decreases from 33.76% to 18.4% on the test data using the SDW-MWF and random-SNR schemes. It is worth mentioning that all the training data is enhanced to compensate for the mismatch between training and test data.
Back-end ASR performance
The results of the DNN model on the development and evaluation sets are also given in Table 2. From Table 2 we can see that the DNN achieves a 16.63% relative WER reduction compared with the GMM system on the real data of the test set. Since this improvement alone is not sufficient, we tried several other NN topologies.
As shown in Table 2, the CNN acoustic model exhibits superior performance over the conventional DNN: the WER decreases from 18.4% to 17.87%. Table 2 also shows that the LSTM improves on the GMM, achieving a 14.09% relative reduction. After lattice rescoring, all of the systems improve significantly.
Finally, the best ASR result was obtained by combining all the lattice-rescored systems. We achieve a final WER of 13.2% on the real data of the test set, a 60.9% relative reduction in WER compared to the 33.23% of the best GMM baseline.
CONCLUSION
A state-of-the-art ASR system is presented in this paper for the task of reducing the effects of noise in different real application scenarios using a 6-microphone array, addressing two aspects separately. Front-end speech enhancement using SDW-MWF achieves a considerable performance improvement. Back-end techniques including GMM, DNN, CNN and LSTM acoustic models are investigated. The combination of the four systems with lattice rescoring gives the best ASR performance on the development and test sets: we achieve a 60.9% relative WER reduction on the real data of the test set compared to the best baseline system.
This work is partially supported by the National Natural Science Foundation of China (Nos. 11161140319, 91120001, 61271426), the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant Nos. XDA06030100, XDA06030500), the National 863 Program (No. 2012AA012503) and the CAS Priority Deployment Project (No. KGZD-EW-103-2).
Fig. 1 shows the back-end structure of the proposed system, including the techniques we used.

The GMM baseline includes standard triphone-based acoustic models with various feature transformations, including linear discriminant analysis (LDA), maximum likelihood linear transformation (MLLT), and feature-space maximum likelihood linear regression (fMLLR) with speaker adaptive training (SAT).
Table 1. WER (%) baselines from the 3rd CHiME challenge.

Model      | Test Data | Training Data | Dev Real | Dev Sim | Test Real | Test Sim
GMM        | noisy     | clean         | 55.65    | 50.25   | 79.84     | 63.30
GMM        | noisy     | noisy         | 18.70    | 18.71   | 33.23     | 21.59
GMM        | MVDR      | clean         | 41.88    | 21.72   | 78.12     | 25.63
GMM        | MVDR      | MVDR          | 20.55    | 9.79    | 37.36     | 10.59
DNN+sMBR   | noisy     | noisy         | 16.13    | 14.30   | 33.43     | 21.51
DNN+sMBR   | MVDR      | MVDR          | 17.72    | 8.17    | 33.76     | 11.19
Table 2. WERs (%) of the proposed system.

Model            | Test Data | Training Data       | Dev Real | Dev Sim | Test Real | Test Sim
GMM              | SDW-MWF   | Clean               | 30.23    | 29.75   | 53.43     | 41.58
GMM              | SDW-MWF   | SDW-MWF             | 13.16    | 14.11   | 23.19     | 18.65
GMM              | SDW-MWF   | Random SNR+SDW-MWF  | 13.01    | 13.95   | 22.07     | 17.57
GMM+Rescore      | SDW-MWF   | Random SNR+SDW-MWF  | 11.61    | 12.37   | 20.35     | 15.7
DNN+sMBR         | SDW-MWF   | Random SNR+SDW-MWF  | 9.95     | 10.03   | 18.4      | 12.98
DNN+sMBR+Rescore | SDW-MWF   | Random SNR+SDW-MWF  | 8.48     | 9.01    | 15.3      | 11.29
CNN+sMBR         | SDW-MWF   | Random SNR+SDW-MWF  | 9.52     | 9.64    | 17.87     | 12.64
CNN+sMBR+Rescore | SDW-MWF   | Random SNR+SDW-MWF  | 8.51     | 8.77    | 16.37     | 11.55
LSTM             | SDW-MWF   | Random SNR+SDW-MWF  | 10.81    | 11.18   | 18.96     | 14.1
LSTM+Rescore     | SDW-MWF   | Random SNR+SDW-MWF  | 9.44     | 9.71    | 16.45     | 12.48
ROVER            | SDW-MWF   | Random SNR+SDW-MWF  | 7.29     | 7.68    | 13.2      | 9.71
Table 3 shows the detailed ASR results under different recording scenarios. The best single system is DNN+sMBR with lattice rescoring, as shown in Table 2.

Table 3. WERs (%) of the best system under different environments.

Environment | Dev Real | Dev Sim | Test Real | Test Sim
BUS         | 8.88     | 6.77    | 17.74     | 7.4
CAF         | 7.08     | 9.94    | 11.75     | 10.95
PED         | 5.78     | 6.14    | 13.34     | 9.19
STR         | 7.4      | 7.89    | 9.96      | 11.32
[1] Jon Barker, Ricard Marxer, Emmanuel Vincent, and Shinji Watanabe. 2015. The third 'CHiME' speech separation and recognition challenge: Dataset, task and baselines. In IEEE 2015 Automatic Speech Recognition and Understanding Workshop (ASRU). IEEE.
[2] Simon Doclo and Marc Moonen. 2001. GSVD-based optimal filtering for multi-microphone speech enhancement. In Microphone Arrays, pp. 111-132. Springer.
[3] F. Grezl, M. Karafiat, S. Kontar, and J. Cernocky. 2007. Probabilistic and bottle-neck features for LVCSR of meetings. In ICASSP 2007, vol. 4, pp. IV-757-IV-760.
[4] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. 2009. Exploring strategies for training deep neural networks. J. Mach. Learn. Res., 10:1-40.
[5] F. Seide, G. Li, and D. Yu. 2011. Conversational speech transcription using context-dependent deep neural networks. In INTERSPEECH, pp. 437-440.
[6] P. Swietojanski, A. Ghoshal, and S. Renals. 2013. Hybrid acoustic models for distant and multichannel large vocabulary speech recognition. In IEEE ASRU 2013, pp. 285-290.
[7] Yulan Liu, Pengyuan Zhang, and Thomas Hain. 2014. Using neural network front-ends on far field multiple microphones based speech recognition. In ICASSP 2014, Florence, Italy, pp. 5579-5583.
[8] T. N. Sainath, A. R. Mohamed, B. Kingsbury, and B. Ramabhadran. 2013. Deep convolutional neural networks for LVCSR. In ICASSP 2013, pp. 8614-8618.
[9] Hasim Sak, Andrew Senior, and Françoise Beaufays. 2014. Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. arXiv preprint arXiv:1402.1128.
[10] Hasim Sak, Andrew Senior, and Françoise Beaufays. 2014. Long short-term memory recurrent neural network architectures for large scale acoustic modeling. In Interspeech, pp. 338-342.
[11] J. G. Fiscus. 1997. A post-processing system to yield reduced word error rates: Recognizer Output Voting Error Reduction (ROVER). In IEEE Workshop on Automatic Speech Recognition and Understanding.
[12] Mehrez Souden, Jacob Benesty, and Sofiène Affes. 2010. On optimal frequency-domain multichannel linear filtering for noise reduction. IEEE Transactions on Audio, Speech, and Language Processing, 18(2):260-276.
[13] Ann Spriet, Marc Moonen, and Jan Wouters. 2004. Spatially pre-processed speech distortion weighted multi-channel Wiener filtering for noise reduction. Signal Processing, 84(12):2367-2387.
[14] Simon Doclo and Marc Moonen. 2002. GSVD-based optimal filtering for single and multimicrophone speech enhancement. IEEE Transactions on Signal Processing, 50(9):2230-2244.
[15] Simon Doclo, Ann Spriet, Jan Wouters, and Marc Moonen. 2007. Frequency-domain criterion for the speech distortion weighted multichannel Wiener filter for robust noise reduction. Speech Communication, 49(7):636-656.
[16] Chao Weng, Dong Yu, Shigetaka Watanabe, and Biing-Hwang Fred Juang. 2014. Recurrent deep neural networks for robust speech recognition. In ICASSP 2014, pp. 5532-5536.
[17] Xavier Mestre, Miguel Lagunas, et al. 2003. On diagonal loading for minimum variance beamformers. In ISSPIT 2003, pp. 459-462.
| [] |
[
"Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. Submission to PolEval task 2",
"Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. Submission to PolEval task 2"
] | [
"Tomasz Korbak tkorbak@ifispan.waw.pl \nIndependent researcher\n\n",
"\nInstitute of Philosophy and Sociology\nPolish Academy of Sciences\nNowyŚwiat 7200-330WarsawPoland\n",
"\nUniversity of Warsaw\nKrakowskie Przedmieście 26/2800-927WarsawPoland\n"
] | [
"Independent researcher\n",
"Institute of Philosophy and Sociology\nPolish Academy of Sciences\nNowyŚwiat 7200-330WarsawPoland",
"University of Warsaw\nKrakowskie Przedmieście 26/2800-927WarsawPoland"
] | [] | We describe a variant of Child-Sum Tree-LSTM deep neural network(Tai et al., 2015)fine-tuned for working with dependency trees and morphologically rich languages using the example of Polish. Fine-tuning included applying a custom regularization technique (zoneout, described by(Krueger et al., 2016), and further adapted for Tree-LSTMs) as well as using pre-trained word embeddings enhanced with sub-word information(Bojanowski et al., 2016). The system was implemented in PyTorch and evaluated on phrase-level sentiment labeling task as part of the PolEval competition. | 10.1007/978-3-030-66527-2_3 | [
"https://arxiv.org/pdf/1711.01985v1.pdf"
] | 9,953,257 | 1711.01985 | 8905d33d294c8e0d1146b2986683933313fc036e |
Fine-tuning Tree-LSTM for phrase-level sentiment classification on a Polish dependency treebank. Submission to PolEval task 2
Tomasz Korbak tkorbak@ifispan.waw.pl
Independent researcher
Institute of Philosophy and Sociology
Polish Academy of Sciences
Nowy Świat 72, 00-330 Warsaw, Poland
University of Warsaw
Krakowskie Przedmieście 26/28, 00-927 Warsaw, Poland
We describe a variant of the Child-Sum Tree-LSTM deep neural network (Tai et al., 2015) fine-tuned for working with dependency trees and morphologically rich languages, using the example of Polish. Fine-tuning included applying a custom regularization technique (zoneout, described by Krueger et al. (2016), and further adapted for Tree-LSTMs) as well as using pre-trained word embeddings enhanced with sub-word information (Bojanowski et al., 2016). The system was implemented in PyTorch and evaluated on a phrase-level sentiment labeling task as part of the PolEval competition.
Introduction
In this article, we describe a variant of the Tree-LSTM neural network (Tai et al., 2015) for phrase-level sentiment classification. The contribution of this paper is an evaluation of various strategies for fine-tuning this model for a morphologically rich language with relatively loose word order: Polish. We explored the effects of several variants of the regularization technique known as zoneout (Krueger et al., 2016) as well as the use of pre-trained word embeddings enhanced with sub-word information (Bojanowski et al., 2016).
The system was evaluated in the PolEval competition. PolEval is a SemEval-inspired evaluation campaign for natural language processing tools for Polish (http://poleval.pl). The task that we undertook was phrase-level sentiment classification, i.e. labeling the sentiment of each node in a given dependency tree. The dataset format was analogous to the seminal Stanford Sentiment Treebank for English (https://nlp.stanford.edu/sentiment/), as described in (Socher et al., 2013).
The source code of our system is publicly available under github.com/tomekkorbak/treehopper.
Phrase-level sentiment analysis
Sentiment analysis is the task of identifying and extracting subjective information (the attitude of the speaker or the emotion she expresses) in text. In a typical formulation, it boils down to classifying the sentiment of a piece of text, where sentiment is understood as either a binary (positive or negative) or multinomial label and where classification takes place at the document or sentence level. This approach, however, is of limited effectiveness for texts expressing multiple (possibly contradictory) opinions about multiple entities (or aspects thereof) (Thet et al., 2010). What is needed is a more fine-grained way of assigning sentiment labels, for instance to the phrases that build up a sentence.
Apart from aspect-specificity of sentiment labels, another important consideration is to account for the effect of syntactic and semantic composition on sentiment. Consider the role negation plays in the sentence "The movie was not terrible": it flips the sentiment label of the whole sentence around (Socher et al., 2013). In general, computing the sentiment of a complex phrase requires knowing the sentiment of its subphrases and a procedure of composing them. Applying this approach to full sentences requires a tree representation of a sentence.
The PolEval dataset represents sentences as dependency trees. Dependency grammar is a family of linguistic frameworks that model sentences in terms of tokens and (binary, directed) relations between them, with an additional constraint: there must be a single root node with no incoming edges, and each non-root node must have exactly one incoming arc and a unique path to the root node. This entails that each phrase has a single head that governs how its subphrases are composed (Jurafsky and Martin, 2000).
The PolEval dataset consisted of a 1200-sentence training set and a 350-sentence evaluation set. Each token in a sentence is annotated with its head (the token it depends on), relation type (i.e. coordination, conjunction, etc.) and sentiment label (positive, neutral, negative). For an example, consider Fig. 1.

Figure 1: An entry in the PolEval dataset consists of (1) an ordered list of tokens, (2) dependency relations between them, (3) types of these relations (not used by our model, hence not shown) and (4) sentiment labels for each head (-1, 0, 1).

3. LSTM and Tree-LSTM neural networks

3.1. Recurrent neural networks

A recurrent neural network (RNN) is a machine learning model designed to handle sequential data. It can be described as a dynamical system with transition function f:

$h_t = f(h_{t-1}, x_t; \theta)$   (1)
where $h_t$ denotes the hidden state at time-step t, $x_t$ denotes the t-th sample and $\theta$ denotes the model parameters (weight matrices). The output $\hat{y}_t$ is then a function of the current hidden state $h_t$, the current sample $x_t$ and the parameters $\theta$:

$\hat{y}_t = g(h_t, x_t; \theta)$   (2)
In the simplest case (known as the vanilla RNN, or Elman network, cf. (Elman, 1990)), both f and g can be defined as affine transformations of a concatenation of hidden state and input, $[h_{t-1}, x_t]$, that is:

$f(h_{t-1}, x_t; \theta) = W_h [h_{t-1}, x_t] + b_h$   (3)

$g(h_t, x_t; \theta) = W_y [h_t, x_t] + b_y$   (4)

for some $W_h, W_y, b_h, b_y \in \theta$. Importantly, none of these parameters depends on t; they are shared across time-steps.
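A minimal NumPy sketch of one step of Eqs. (1)-(4) follows; the function name is ours. Elman networks usually wrap f in a squashing nonlinearity, which the equations above omit, so we omit it here too.

```python
import numpy as np

def vanilla_rnn_step(h_prev, x_t, Wh, bh, Wy, by):
    """One vanilla-RNN step: affine maps over concatenated states."""
    h_t = Wh @ np.concatenate([h_prev, x_t]) + bh   # Eq. (3)
    y_t = Wy @ np.concatenate([h_t, x_t]) + by      # Eq. (4)
    return h_t, y_t
```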
LSTM cells and learning long-term dependencies
Thanks to recurrent connections, RNNs are capable of maintaining a working memory (or short-term memory, as opposed to long-term memory captured in weights of forward connections) for storing information about earlier time-steps and use it for classifying subsequent ones. One problem is that the distance between two time-steps has a huge effect on learnability of constraints they impose on each other. This particular problem with long-term dependencies is known as vanishing gradient problem (Bengio et al., 1994).
The long short-term memory (LSTM) architecture (Hochreiter and Schmidhuber, 1997) was designed to address the vanishing gradient problem by enforcing constant error flow across time-steps. This is done by introducing a structure called a memory cell; a memory cell has one self-recurrent connection with constant weight that carries short-term memory information through time-steps. Information stored in the memory cell is thus relatively stable despite noise, yet new information can be superimposed at each time-step. This is regulated by three gates mediating between the memory cell, the inputs and the hidden states: an input gate, a forget gate and an output gate.
For time-step t, let the input gate $i_t$, forget gate $f_t$ and output gate $o_t$ be defined by the following equations (5-7):

$i_t = \sigma(W^{(i)} x_t + U^{(i)} h_{t-1})$   (5)

$f_t = \sigma(W^{(f)} x_t + U^{(f)} h_{t-1})$   (6)

$o_t = \sigma(W^{(o)} x_t + U^{(o)} h_{t-1})$   (7)

where $W^{(i)}, W^{(f)}, W^{(o)}$ and $U^{(i)}, U^{(f)}, U^{(o)}$ denote the weight matrices for input-to-cell (the input being $x_t$) and hidden-to-cell (the hidden state being $h_{t-1}$) connections, respectively, for the input gate, forget gate and output gate. $\sigma$ denotes the sigmoid function.
The gates are then used to update the short-term memory. Let the new memory cell candidate $\tilde{c}_t$ at time-step t be defined as

$\tilde{c}_t = \tanh(W^{(c)} x_t + U^{(c)} h_{t-1})$   (8)

where $W^{(c)}, U^{(c)}$ are, analogously, the weight matrices for input-to-cell and hidden-to-cell connections, and tanh denotes the hyperbolic tangent function. Intuitively, $\tilde{c}_t$ can be thought of as summarizing the relevant information about the word-token $x_t$. Then $\tilde{c}_t$ is used to update $c_t$, as governed by the forget and input gates:

$c_t = f_t \circ c_{t-1} + i_t \circ \tilde{c}_t$   (9)
where $A \circ B$ denotes the Hadamard product of two matrices, i.e. element-wise multiplication. Finally, $c_t$ is used to compute the next hidden state $h_t$, again depending on the output gate (defined in equation 7), which takes into account the input and hidden states at the current time-step:

$h_t = o_t \circ \tanh(c_t)$   (10)

In a sequence labeling task, $h_t$ is then used to compute the label $\hat{y}_t$ as defined by eq. 4. Forward propagation in an LSTM network is done by recursively applying equations 5-10 while incrementing t.
Recursive neural networks and tree labeling
Recursive neural networks, or tree-structured neural networks, form a superset of recurrent neural networks, as their computational graphs generalize the computational graph of a recurrent neural network from a chain to a tree. Whereas a recurrent neural network's hidden state $h_t$ depends only on the previous hidden state $h_{t-1}$, a hidden state of a recursive neural network depends on a set of descendant hidden states $\{h_k : k \in C(j)\}$, where $C(j)$ denotes the set of children of node j.
Tree-structured neural networks have a clear linguistic advantage over chain-structured neural networks: trees are a very natural way of representing the syntax of natural languages, i.e. how more complex phrases are composed of simpler ones. 3 Specifically, in this paper we are concerned with a tree labeling task, which is an analogous generalization of sequence labeling to tree-structured inputs: each node of a tree is assigned a label, possibly dependent on all of its children.
Tree-LSTM neural networks

A Tree-LSTM (as described by Tai et al., 2015) is a natural combination of the approaches described in the two previous subsections. Here we focus on a particular variant of the Tree-LSTM known as the Child-Sum Tree-LSTM. This variant allows a node to have an unbounded number of children and assumes no order over those children. Thus, the Child-Sum Tree-LSTM is particularly well suited for dependency trees. 4 Let $C(j)$ again denote the set of children of node j. For a given node j, the Child-Sum Tree-LSTM takes as input a vector $x_j$ and the hidden states $h_k$ for every $k \in C(j)$. The hidden state $h_j$ and cell state $c_j$ are computed using the following equations:
$\tilde{h}_j = \sum_{k \in C(j)} h_k$   (11)

$i_j = \sigma(W^{(i)} x_j + U^{(i)} \tilde{h}_j + b^{(i)})$   (12)

$f_{jk} = \sigma(W^{(f)} x_j + U^{(f)} h_k + b^{(f)})$   (13)

$o_j = \sigma(W^{(o)} x_j + U^{(o)} \tilde{h}_j + b^{(o)})$   (14)

$u_j = \tanh(W^{(u)} x_j + U^{(u)} \tilde{h}_j + b^{(u)})$   (15)

$c_j = i_j \circ u_j + \sum_{k \in C(j)} f_{jk} \circ c_k$   (16)

$h_j = o_j \circ \tanh(c_j)$   (17)

Eqs. 12-17 are analogous to eqs. 5-10; they correspond to applying the input, forget, output and update gates, and to computing the cell and hidden states. Note that the forget gate $f_{jk}$ is computed once per child k, using that child's hidden state $h_k$.
In a tree labeling task, we additionally have an output function

$\hat{y}_j = W^{(y)} h_j + b_y$   (18)

for computing the label of each node.
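A direct NumPy transcription of Eqs. (11)-(18) for a single node might look as follows; the parameter dictionary and function signature are our own conventions, not the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def child_sum_node(x_j, children, p):
    """Child-Sum Tree-LSTM node, Eqs. (11)-(18).

    children : list of (h_k, c_k) pairs for the children of node j
    p        : dict of weight matrices W*/U*, biases b*, and the
               output layer Wy, by
    """
    if children:
        h_tilde = np.sum([h for h, _ in children], axis=0)      # Eq. (11)
    else:
        h_tilde = np.zeros_like(p["bi"])                        # leaf node
    i = sigmoid(p["Wi"] @ x_j + p["Ui"] @ h_tilde + p["bi"])    # Eq. (12)
    o = sigmoid(p["Wo"] @ x_j + p["Uo"] @ h_tilde + p["bo"])    # Eq. (14)
    u = np.tanh(p["Wu"] @ x_j + p["Uu"] @ h_tilde + p["bu"])    # Eq. (15)
    c = i * u                                                   # Eq. (16)
    for h_k, c_k in children:
        f_k = sigmoid(p["Wf"] @ x_j + p["Uf"] @ h_k + p["bf"])  # Eq. (13)
        c = c + f_k * c_k
    h = o * np.tanh(c)                                          # Eq. (17)
    y = p["Wy"] @ h + p["by"]                                   # Eq. (18)
    return h, c, y
```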
3 Although recursive neural networks are used primarily in natural language processing, they have also been applied in other domains, for instance scene parsing (Socher et al., 2011). 4 The other variant described by Tai et al. (2015), the N-ary Tree-LSTM, assumes that each node has at most N children and that the children are linearly ordered, making it natural for (binarized) constituency trees. The choice between the two variants really boils down to the syntactic theory we assume for representing sentences. As the PolEval dataset assumes dependency grammar, we went with the Child-Sum Tree-LSTM.
Experiments
We chose to implement our model in PyTorch (http://pytorch.org/) due to the convenience of its dynamic computation graphs. We evaluated our model on tree labeling as described in subsection 3.3., using the PolEval 2017 Task 2 dataset (for an example entry, see fig. 1).
Regularizing with zoneout
The zoneout regularization technique (Krueger et al., 2016) is a variant of dropout (Srivastava et al., 2014) designed specifically for regularizing the recurrent connections of LSTMs or GRUs. Dropout is known to be successful in preventing feature co-adaptation (a source of overfitting) by randomly applying a zero mask to the outputs of a given layer. More formally,

$h := d_t \circ h$   (19)

where $d_t$ is a random mask (a tensor with values sampled from a Bernoulli distribution). However, dropout usually cannot be applied to the recurrent hidden and cell states of LSTMs, since aggregating a zero mask over a sufficient number of time-steps effectively zeros them out. (This is reminiscent of the vanishing gradient problem.)
Zoneout addresses this problem by randomly swapping the current value of a hidden state with its value from the previous time-step, rather than zeroing it out. Therefore, contrary to dropout, gradient and state information are more readily propagated through time. Zoneout has yielded significant performance improvements on various NLP tasks when applied to the cell and hidden states of LSTMs. It can be understood as substituting eqs. 9 and 10 with the following ones:

$c_t := d^c_t \circ c_t + (1 - d^c_t) \circ c_{t-1}$   (20)

$h_t := d^h_t \circ h_t + (1 - d^h_t) \circ h_{t-1}$   (21)

where 1 denotes a unit tensor and $d^c_t$ and $d^h_t$ are random, Bernoulli-sampled masks for a given time-step.
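A sketch of a single zoneout update for one state vector follows. Using the expected mask at test time is our assumption, mirroring the usual dropout convention described by Krueger et al. (2016).

```python
import numpy as np

def zoneout(new, prev, rate, training=True):
    """Zoneout update, Eqs. (20)-(21): each unit keeps its previous
    value with probability `rate`, otherwise takes the new value."""
    if not training:
        return (1.0 - rate) * new + rate * prev   # expected value at test time
    d = np.random.binomial(1, 1.0 - rate, size=new.shape)  # mask d_t
    return d * new + (1 - d) * prev
```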
Notably, zoneout was originally designed with sequential LSTMs in mind. We explored several ways of adapting it to tree-structured LSTMs. We will consider only hidden state updates, since cell state updates are isomorphic.

As a Tree-LSTM's nodes are no longer linearly ordered, the notion of the previous hidden state must be replaced with the notion of the hidden states of children nodes. The most obvious approach, which we call "sum-child", is to randomly replace units of the hidden state of node j with the sum of its children's hidden states, i.e.

$h_j := d^h_j \circ h_j + (1 - d^h_j) \circ \sum_{k \in C(j)} h_k$   (22)
Another approach, called "choose-child" by us, is to randomly choose a single child to replace the node with.
h j := d h j • h j + (1 − d h j ) • h k(23)
where k is a random number sampled from indices of the members of C(j). Apart from that, we explored different values for d h and d c as well as keeping a mask fixed across time-steps, i.e. d t being constant for all t.
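The two tree adaptations can be sketched as below; the masks are sampled as in the sequential case, and the helper names are ours.

```python
import numpy as np

def zoneout_sum_child(h_j, child_hs, d):
    """'Sum-child', Eq. (22): zoned-out units take the children's sum."""
    return d * h_j + (1 - d) * np.sum(child_hs, axis=0)

def zoneout_choose_child(h_j, child_hs, d):
    """'Choose-child', Eq. (23): zoned-out units take one random child."""
    k = np.random.randint(len(child_hs))
    return d * h_j + (1 - d) * child_hs[k]
```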
Using pre-trained word embeddings
Standard deep learning approaches to distributional lexical semantics (e.g. word2vec, Mikolov et al. (2013)) were not designed with morphologically rich languages like Polish in mind and cannot take advantage of the compositional relations between words. Consider the example of "chodziłem" and "chodziłam" (the Polish masculine and feminine past continuous forms of "walk", respectively). The model has no sense of the morphological similarity between these words and has to infer it from distributional information alone. This poses a problem when the number of occurrences of a specific orthographic word form is small or zero, and some Polish words can have up to 30 orthographic forms (thus, the effective number of occurrences is up to 30 times smaller than when counting lemmas).

One approach we explore is to use word embeddings pre-trained on lemmatized data. The other, more promising approach is to take advantage of morphological information by enhancing word embeddings with sub-word information. We evaluate fastText word vectors as described by Bojanowski et al. (2016). Their work extends the model of Mikolov et al. (2013) with an additional representation of morphological structure as a bag of character-level n-grams (for 3 ≤ n ≤ 6). Each character n-gram has its own vector representation, and the resulting word embedding is the sum of the word vector and its character n-gram vectors. The authors report significant improvements on language modeling tasks, especially for Slavic languages (8% for Czech and 13% for Russian; Polish was not evaluated), compared to a pure word2vec baseline.
Results
We conducted a thorough grid search over a number of other hyperparameters (not reported here in detail due to space limitations). We found that the best results were obtained with a minibatch size of 25, a Tree-LSTM hidden and cell state size of 300, a learning rate of 0.05, a weight decay rate of 0.0001 and an L2 regularization rate of 0.0001. No significant difference was found between the Adam (Kingma and Ba, 2014) and Adagrad (Duchi et al., 2011) optimization algorithms. It takes between 10 and 20 epochs for the system to converge.
Here we focus on the two fine-tunings we introduced: fastText word embeddings and zoneout regularization.
The following word embeddings model were used:
• word2vec (Mikolov et al., 2013), 300 dimensions, pre-trained on Polish Wikipedia and the National Corpus of Polish (Przepiórkowski et al., 2008) using lemmatized word forms. Lemmatization was done using the Concraft morphosyntactic tagger (Waszczuk, 2012).
• word2vec (Mikolov et al., 2013), same as above, but using orthographic word forms.

• fastText (Bojanowski et al., 2016), 300 dimensions, pre-trained on Polish Wikipedia using orthographic word forms and sub-word information.
Our results for different parametrizations of the pre-trained word embeddings and of zoneout are shown in Tables 2 and 3, respectively. The effects of word embeddings and zoneout were analyzed separately, i.e. the results in Table 2 were obtained with no zoneout, and the results in Table 3 were obtained with the best word embeddings, i.e. fastText.

Note that these results differ from those reported in the official PolEval benchmark. Our results as evaluated by the organizing committee, reported in Table 1, left us behind the winner (0.795) by a huge margin. This was due to a bug in our implementation, which was hard to spot as it manifested only in inference mode: it broke the mapping between word tokens and weights in our embedding matrix. All results reported in Tables 2 and 3 were obtained after fixing the bug (the model was trained on the training dataset and evaluated on the evaluation dataset after the ground-truth labels were disclosed). Note that these results beat the best reported solution by a small margin.
Conclusions
As far as word2vec embeddings are concerned, both training on lemmatized word forms and further optimizing the embeddings yielded small improvements, and the two effects were cumulative. FastText vectors, however, beat all word2vec configurations by a significant margin. This result is interesting, as the fastText embeddings were originally trained on a smaller corpus (Wikipedia, as opposed to Wikipedia+NKJP in the case of word2vec).
When it comes to zoneout, it barely affected accuracy (an improvement of about 0.6 percentage points) and we did not find a hyperparameter configuration that stands out. More work is needed to determine whether zoneout can yield robust improvements for Tree-LSTMs. Unfortunately, our system did not manage to win the Task 2 competition, due to a simple bug. However, the results obtained after the evaluation indicate that the overall design was very promising and could in fact have beaten the other participants by a small margin, had it been implemented correctly. We intend to prepare an improved system for next year's competition, having learned some important lessons on fine-tuning and regularizing Tree-LSTMs for sentiment analysis.
Table 1: Results of our faulty solution as evaluated by the PolEval organizing committee. "Ensemble epochs" means the number of training epochs we averaged the weights over to obtain a snapshot-based ensemble model.

emb lr | ensemble epochs | accuracy
0.2    | 1               | 0.678
0.1    | 1               | 0.671
0.1    | 3               | 0.670
Table 2: A comparison of the effect of pre-trained word embeddings on the model's accuracy. "emb lr" means the learning rate of the embedding layer; 0.0 means the layer was kept fixed and not optimized during training. "time" means wall-clock training time on a CPU, in minutes.

word embeddings        | emb lr | accuracy | time
word2vec, orthographic | 0.0    | 0.7482   | 20:52
word2vec, orthographic | 0.1    | 0.7562   | 20:26
word2vec, lemmatized   | 0.0    | 0.7536   | 20:01
word2vec, lemmatized   | 0.1    | 0.7737   | 20:09
fastText, orthographic | 0.0    | 0.8011   | 20:04
fastText, orthographic | 0.1    | 0.7993   | 20:17
Acknowledgements

The work of Tomasz Korbak was supported by Polish Ministry of Science and Higher Education grant DI2015010945 within the "Diamentowy Grant" programme (2016-2020).
Bengio, Yoshua, Patrice Simard, and Paolo Frasconi, 1994. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.

Bojanowski, Piotr, Edouard Grave, Armand Joulin, and Tomas Mikolov, 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.

Duchi, John, Elad Hazan, and Yoram Singer, 2011. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res., 12:2121-2159.

Elman, Jeffrey L., 1990. Finding structure in time. Cognitive Science, 14(2):179-211.

Hochreiter, Sepp and Jürgen Schmidhuber, 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Jurafsky, Daniel and James H. Martin, 2000. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Upper Saddle River, NJ, USA: Prentice Hall PTR, 1st edition.

Kingma, Diederik P. and Jimmy Ba, 2014. Adam: A method for stochastic optimization. CoRR, abs/1412.6980.

Krueger, David, Tegan Maharaj, János Kramár, Mohammad Pezeshki, Nicolas Ballas, Nan Rosemary Ke, Anirudh Goyal, Yoshua Bengio, Hugo Larochelle, Aaron C. Courville, and Chris Pal, 2016. Zoneout: Regularizing RNNs by randomly preserving hidden activations. CoRR, abs/1606.01305.

Mikolov, Tomas, Kai Chen, Greg Corrado, and Jeffrey Dean, 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.

Przepiórkowski, Adam, Rafał L. Górski, Barbara Lewandowska-Tomaszczyk, and Marek Łaziński, 2008. Towards the National Corpus of Polish. In Proceedings of the Sixth International Conference on Language Resources and Evaluation, LREC 2008. Marrakech: ELRA.

Socher, Richard, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning, 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning, ICML'11. USA: Omnipress.

Socher, Richard, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts, 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP.

Srivastava, Nitish, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.

Tai, Kai Sheng, Richard Socher, and Christopher D. Manning, 2015. Improved semantic representations from tree-structured long short-term memory networks. CoRR, abs/1503.00075.

Thet, Tun Thura, Jin-Cheon Na, and Christopher S.G. Khoo, 2010. Aspect-based sentiment analysis of movie reviews on discussion boards. J. Inf. Sci., 36(6):823-848.

Waszczuk, Jakub, 2012. Harnessing the CRF complexity with domain-specific constraints: the case of morphosyntactic tagging of a highly inflected language. In Proceedings of COLING 2012. The COLING 2012 Organizing Committee.
| [] |
[
"Dim Wihl Gat Tun: The Case for Linguistic Expertise in NLP for Underdocumented Languages",
"Dim Wihl Gat Tun: The Case for Linguistic Expertise in NLP for Underdocumented Languages"
] | [
"Clarissa Forbes zforbesc@alumni.ubc.caqfirst.last@ubc.ca \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Z Farhan \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Samir Q Bruce \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Harold Oliver \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Changbing Yang \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Q Edith Coates \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Garrett Nicolai \nIndependent Researcher\nUniversity of British Columbia\n\n",
"Miikka Silfverberg \nIndependent Researcher\nUniversity of British Columbia\n\n"
] | [
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n",
"Independent Researcher\nUniversity of British Columbia\n"
] | [] | Recent progress in NLP is driven by pretrained models leveraging massive datasets and has predominantly benefited the world's political and economic superpowers. Technologically underserved languages are left behind because they lack such resources. Hundreds of underserved languages, nevertheless, have available data sources in the form of interlinear glossed text (IGT) from language documentation efforts. IGT remains underutilized in NLP work, perhaps because its annotations are only semistructured and often language-specific. With this paper, we make the case that IGT data can be leveraged successfully provided that target language expertise is available. We specifically advocate for collaboration with documentary linguists. Our paper provides a roadmap for successful projects utilizing IGT data: (1) It is essential to define which NLP tasks can be accomplished with the given IGT data and how these will benefit the speech community.(2) Great care and target language expertise is required when converting the data into structured formats commonly employed in NLP. (3) Task-specific and user-specific evaluation can help to ascertain that the tools which are created benefit the target language speech community. We illustrate each step through a case study on developing a morphological reinflection system for the Tsimchianic language Gitksan. . 2009. How well does active learning actually work? Time-based evaluation of cost-reduction strategies for language documentation. In | 10.18653/v1/2022.findings-acl.167 | [
"https://arxiv.org/pdf/2203.09632v1.pdf"
] | 247,593,992 | 2203.09632 | 424c932fe14cb40e95ed18f3184aa76331026942 |
Dim Wihl Gat Tun: The Case for Linguistic Expertise in NLP for Underdocumented Languages
Clarissa Forbes, Farhan Samir, Bruce Harold Oliver, Changbing Yang, Edith Coates, Garrett Nicolai, Miikka Silfverberg
Independent Researcher; University of British Columbia
Recent progress in NLP is driven by pretrained models leveraging massive datasets and has predominantly benefited the world's political and economic superpowers. Technologically underserved languages are left behind because they lack such resources. Hundreds of underserved languages, nevertheless, have available data sources in the form of interlinear glossed text (IGT) from language documentation efforts. IGT remains underutilized in NLP work, perhaps because its annotations are only semi-structured and often language-specific. With this paper, we make the case that IGT data can be leveraged successfully provided that target language expertise is available. We specifically advocate for collaboration with documentary linguists. Our paper provides a roadmap for successful projects utilizing IGT data: (1) It is essential to define which NLP tasks can be accomplished with the given IGT data and how these will benefit the speech community. (2) Great care and target language expertise is required when converting the data into structured formats commonly employed in NLP. (3) Task-specific and user-specific evaluation can help to ascertain that the tools which are created benefit the target language speech community. We illustrate each step through a case study on developing a morphological reinflection system for the Tsimshianic language Gitksan.
Introduction
Progress in NLP research has primarily manifested in tools for the world's political and economic superpowers (Blasi et al., 2021), and it is unclear how we can build more inclusive language technologies. Even multilingual pretraining methods (e.g., Artetxe et al., 2018), capable of producing effective models in the absence of large annotated training datasets, require unannotated corpora that are prohibitively large for 90% of the world's languages (Joshi et al., 2020).

1 Dim wihl gat tun - "This is what the people should do"
2 The first two authors contributed equally.
Nevertheless, many languages in this 90% have a body of resources. Language documentation and linguistic fieldwork are ongoing tasks worldwide, and many resources continue to be developed in these traditions (Bird, 2020). We have access to wordlists, bilingual dictionaries for over 1000 languages (Wu et al., 2020), aligned speech recordings for over 700 languages (Black, 2019), multi-parallel texts for 1600+ languages (McCarthy et al., 2020b), and knowledge of related languages (Haspelmath et al., 2005). Indeed, researchers have leveraged these resources to build impressive, useful computational systems: multilingual morphological analyzers (Nicolai and Yarowsky, 2019), pretrained language models adapted to over 1000 languages (Ebrahimi and Kann, 2021), and massively multilingual speech recognition systems (Adams et al., 2019), among others.
There are additional language documentation resources which have yet to be fully leveraged in the effort to produce more inclusive language technology. Interlinear glossed texts (IGTs), depicted in Figure 1, are semi-structured texts which comprise not only monolingual corpus data (e.g. al'algaltgathl) but also morpheme-level segmentations (e.g. CVC~algal-t=gat=hl), glosses for the component morphemes (e.g. PL~watch-3.II=REPORT=CN), word alignment information (Zhao et al., 2020), and free translations. IGTs remain a major annotated datatype produced in the course of linguistic fieldwork: examples are continuously digitized in large databases for hundreds of languages (Lewis and Xia, 2010), and entire corpora of IGT are periodically published in volume series such as Texts in Indigenous Languages of the Americas. They have the potential to serve as training data for a wide variety of computational systems including bilingual lexicons, morphological analyzers, dependency parsers, part-of-speech taggers, and word-aligners (Georgi, 2014). Yet while they are accessible, they remain severely underutilized for these purposes.
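As a simple illustration of how semi-structured IGT can be turned into (morpheme, gloss) pairs, consider the following sketch. It assumes the segmentation and gloss lines use matching Leipzig-style separators; as the rest of this paper argues, real IGT does not always guarantee this, which is precisely where target language expertise is needed.

```python
import re

def parse_igt_word(segmented, glossed):
    """Split one IGT token into (morpheme, gloss) pairs, using the
    Leipzig-style separators (- affix, = clitic, ~ reduplication)."""
    pattern = r"([~\-=])"
    morphs = re.split(pattern, segmented)
    glosses = re.split(pattern, glossed)
    # separators land at odd indices after the split; keep the rest
    return list(zip(morphs[::2], glosses[::2]))

# e.g. the Gitksan token discussed above:
# parse_igt_word("CVC~algal-t=gat=hl", "PL~watch-3.II=REPORT=CN")
# -> [('CVC', 'PL'), ('algal', 'watch'), ('t', '3.II'),
#     ('gat', 'REPORT'), ('hl', 'CN')]
```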
Part of the general hesitancy in adoption of IGT as training data may lie in the fact that the annotation format is only semi-structured and often language-specific. While the general IGT format is governed by the Leipzig glossing rules (Comrie et al., 2015), there remains significant flexibility for the annotator to customize tags and conventions for any given language. This makes IGT challenging as a format for training supervised NLP models.
With this paper, we make the case that IGT data can be leveraged in NLP research and language applications for speech communities, provided that target language expertise is available. Specifically, we argue that it is essential to collaborate with documentary linguists who are familiar with the language-specific annotations in the IGT data in order to leverage the data for NLP tasks. This may furthermore provide a foundation for co-designing language technologies with a given speech community (Bird, 2020).
Our paper provides a roadmap, portrayed in Fig. 2, for navigating three areas of significant uncertainty that arise when incorporating IGT data into inclusive language technology. First, we need to define what NLP tasks can be accomplished with a given set of IGT data, and whether they are of value to the speech community. Second, after selecting useful tasks, we will need to preprocess the data, potentially by converting it to a structured format commonly employed in NLP tasks. Finally, we need task-specific and user-specific evaluation procedures in order to be explicit about the failure modes of the technology, as it is ultimately being developed for end users like speakers and linguists rather than solely for comparison with other researchers.
We focus on the first two of these areas, forwarding our argument through a case study on developing a morphological reinflection system for the Gitksan language (Section 2.3) that has applications in language teaching.
Background
NLP for Underdocumented Languages
Computational work on underdocumented and low-resource languages has accelerated in recent years due to increasing recognition of both the role of NLP in language preservation as well as dedicated workshops like ComputEL (Arppe et al., 2021), AmericasNLP (Mager et al., 2021) and SIGTYP (Vylomova et al., 2021). Most of this work aims to assist in language documentation and revitalization, with machine translation being another important research area. Mager et al. (2018) and Littell et al. (2018) present surveys of existing NLP tools for the North American Indigenous languages, many of which are underdocumented, and discuss core challenges: morphological complexity, limited training data, and dialectal variation.
Several authors have trained NLP models on IGT to accelerate language documentation, with automatic glossing being a prominent research direction. The first approaches simply memorized earlier glossing decisions and enabled the annotator to re-use these later (Baines, 2009). Later approaches have relied on structured models like CRFs (McMillan-Major, 2020), RNN encoder-decoders (Moeller and Hulden, 2018) and transformers (Zhao et al., 2020) to generate glosses for unseen tokens. NLP techniques can also be used to generate inflection tables from IGT (Moeller et al., 2020). These find applications both in language documentation and language education, often to facilitate the production of more IGT data. A related approach is to generate morphological analyzers using IGT as a starting point (Zamaraeva, 2016; Wax, 2014).
Several papers discuss challenges related to IGT as a data type. One of the principal concerns is the noisiness of the annotations (Moeller et al., 2020). This problem is compounded by the fact that annotation schemas employed by linguists preparing IGT tend to be idiosyncratic 3 and often lack internal consistency (Baldridge and Palmer, 2009; Palmer et al., 2009). The design of annotation standards is important: Zhao et al. (2020) note that this can have an impact on the performance of glossing systems. McMillan-Major (2020) notes a further challenge: IGT often includes not only morphological information, but also syntactic, semantic, and pragmatic annotations, which can be much harder to learn in low-resource settings.
In addition to challenges in the IGT data type itself, there are other challenges in NLP applications for underdocumented languages. Ward and Genabith (2003) discuss many problems related to the development of computer-assisted language learning for endangered languages: lack of orthographic standards, limited resources, and limited documentation of the language. van Esch et al. (2019) also discuss NLP tools that can be helpful for the documentation of low-resource languages, but they note that restrictive licenses can often be problematic for engineering.
The Gitksan Language
The Gitxsan are one of the Indigenous peoples of the northern interior region of British Columbia, Canada. Their traditional territories consist of upwards of 50,000 square kilometers of land in the upriver Skeena River watershed area. Their traditional language, called Gitksan in the linguistic literature, is the easternmost member of the Tsimshianic family, which spans the entirety of the Skeena and Nass River watersheds to the Pacific Coast.
Today, Gitksan is the most vital Tsimshianic language, but is still critically endangered with an estimated 300-850 speakers (Dunlop et al., 2018). Community revitalization efforts are underway but are primarily undertaken by individuals on an ad-hoc basis. Initiatives include regular in-school language programming, a few adult language courses, a successful language immersion camp, and several Master-Apprentice pairs. Linguistic documentation on Gitksan and the Tsimshianic languages has been going on intermittently since the 1970s, including the drafting of a never-published grammar (Rigsby, 1986) and waves of formal phonological, syntactic, and semantic work over the past thirty years. There are several community-developed wordlists and workbooks, but no comprehensive dictionary, grammar, or pedagogical curriculum. There is an accepted orthography (Hindle and Rigsby, 1973), and a talking dictionary mobile app in active use by the community (Mother Tongues Dictionaries, formerly Waldayu; Littell et al. (2017)).
Other computational studies interact with the active documentation efforts surrounding Gitksan to produce new frameworks and resources. Dunham et al. (2014) present a database structure for hosting audio and transcribed data in language documentation contexts, adopted for Gitksan and eight other underdocumented languages. Littell et al. (2017) present a dictionary interface which is capable of fuzzy search. They mention this specifically as a way to increase accessibility in a setting where orthographies have not been standardized or where many users are language learners. Forbes et al. (2021) present a finite-state morphological analyzer for Gitksan; they test coverage across different dialects of Gitksan and use handcrafted rules to increase coverage for spelling variants.
Constructing a Gitksan Pedagogical Application from IGT Data
Our project generates language learning exercises for Gitksan grammar. The need for these exercises was identified in discussions with documentary linguists working on Gitksan (the task definition step in Figure 2). Specifically, our goal is to automatically generate exercises for noun and verb inflection. As source material, we use Gitksan IGT data collected by linguists at the University of British Columbia for language documentation purposes (the data step in Figure 2). Examples of this data are shown in Figure 1 and Appendix A. Due to extensive morphological annotation, IGT provides a valuable starting point for our work. However, the annotations are far too detailed for our purposes: many derivational affixes are annotated in the data (further discussed in Section 3.1). These are irrelevant and can be downright harmful for grammar exercises. To remedy this misalignment between the raw IGT data and our NLP task, we collaborate with Gitksan documentary linguists to identify a set of inflected forms with clearly defined grammatical function, while discarding derivational morphology. We then convert the IGT data into a set of inflectional paradigms (the data conversion step in Figure 2); a sketch of this step follows below. We further discuss this conversion process in Sections 3.2 and 3.3. Since the inflectional paradigms sourced from corpora are sparse, 4 we train models to fill in missing forms (Section 4). This is more widely known as the Paradigm Cell-Filling Problem (PCFP) (e.g., Silfverberg and Hulden, 2018). We then evaluate the system on its capacity to automatically generate inflections, and discuss limitations of our current evaluation procedure (the evaluation step in Figure 2).
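As a rough illustration of the data conversion step, the sketch below groups resegmented tokens into per-stem inflection tables; representing a paradigm cell as a tuple of inflectional gloss tags is our simplification, not the paper's actual schema, and the bare form shown is hypothetical.

```python
from collections import defaultdict

# tables maps a stem to its inflection table: {cell: surface form}, where a
# cell is a tuple of inflectional gloss tags. Cells unattested in the corpus
# simply stay absent, which is the sparsity the reinflection model must fill.
tables = defaultdict(dict)

def add_token(surface, stem, affixes):
    cell = tuple(gloss for _, gloss in affixes) or ("BARE",)
    tables[stem][cell] = surface

add_token("maaxwsxwa", "maaxwsxw", [("a", "ATTR")])
add_token("maaxwsxw", "maaxwsxw", [])  # hypothetical bare attestation
print(dict(tables["maaxwsxw"]))
# {('ATTR',): 'maaxwsxwa', ('BARE',): 'maaxwsxw'}
```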
Challenges in Incorporating IGT into NLP Research
Because tokens in IGT are already segmented and annotated, it forms an ostensibly convenient starting point for further processing and token-based grouping. In many ways, IGT is, however, a challenging data type for use in pedagogical and NLP applications. This section presents three specific challenges posed by IGT data when NLP techniques are applied. First, while IGT will contain a wealth of useful information for NLP models, it might also contain information which is far too fine-grained for automatic learning purposes, at least given the quantity of data which is available. Second, IGT often contains idiosyncratic or language-specific conventions which may not be easily converted to or represented in standardized frameworks. Third, because IGT is used as a device for language documentation, it will often contain dialectal variation, an important meta-characteristic which in aggregate cannot be easily distinguished from other types of variation or spelling errors. We argue that handling these issues for successful data preprocessing requires consultation with linguistic experts, and exemplify with instances from the Gitksan IGT and our use case.
Annotation Granularity
Documentary linguists' goal when annotating IGT is to present an accurate representation of the surface phonology and morphology of a given utterance, as well as the syntactic and semantic information contributed by its component morphemes, with fine attention to detail given the rarity and value of the data. This goal of providing fine-grained annotations and transcriptions, however, can be in conflict with the NLP research aim of building models that can generalize in the real world (i.e., to future elicited linguistic data). The fine-grained details are often extraneous for the purposes of building NLP models, and can counterproductively act as noise that makes learning systematic patterns more difficult.
As an example of this mismatch in disciplinary goals, consider the sample IGT token in (1).
(1) maaxwsxwa
    maaxws-xw-a
    fallen.snow-VAL-ATTR
    'white'
In this token, the productive stem is deconstructed into a historical root (maaxws) and a derivational suffix (-xw), along with an inflectional affix (-a). It is unclear from the input that the most readily recognizable lexical stem in this form is the larger unit maaxwsxw 'white, snow-colored', and that the internal boundaries within that stem reference etymological and derivational information not relevant to the typical NLP task. The derivational and inflectional affixes are not differentiated in IGT. 5 At first glance, it might seem reasonable to train an NLP model to automatically generate such a gloss for Gitksan input words in an effort to accelerate language documentation. While this remains one of the most common NLP tasks associated with IGT, it may be difficult for models to deliver high performance if the IGT input, like Gitksan's, contains a substantial proportion of derivational and etymological information, since this information is lexical and unpredictable.
Collaboration with documentary linguists, in addition to being important when a project aims to improve the documentary linguistic workflow, can be useful for identifying these aspects of the data which may be less valuable to learn. This information can be applied in data preprocessing to improve model performance given data scarcity. For the token in (1), an alternative segmentation maaxwsxw-a into a word stem and a productive inflectional affix white-ATTR is more amenable to both automated labeling and inflection tasks, particularly in low-resource conditions. Furthermore, reference to derivational information is unnecessary in our use case of performing automated inflection for use in a pedagogical application. We collaborated with documentary linguists familiar with Gitksan to manually filter morphology into derivational versus inflectional, to determine whether an affix should be classed as part of a lexical stem or should signal a paradigm cell in the inflectional template. This allowed derivational morphology to be effectively excluded before we moved to the paradigm cell-filling task. This filtering process was non-trivial, requiring solid understanding of the target language, its description, and its vocabulary.
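The sketch below shows one way such a curated filter could be applied mechanically once the derivational/inflectional decisions have been made; in the project itself these decisions were made manually by linguists, and apart from "VAL", which appears in example (1), the tag names are invented placeholders.

```python
# Merge morphemes glossed with derivational tags back into the stem, keeping
# only inflectional material as paradigm-relevant affixes.
DERIVATIONAL_TAGS = {"VAL", "CAUS", "PASS"}  # hypothetical curated list

def resegment(pairs):
    """pairs: [(morpheme, gloss), ...] without boundary symbols. Fold the
    root and any derivational affixes into one stem; keep the rest."""
    stem_parts, affixes = [], []
    for morph, gloss in pairs:
        # the first morpheme is the root; derivational affixes join the stem
        # until the first inflectional affix is seen
        if not affixes and (not stem_parts or gloss in DERIVATIONAL_TAGS):
            stem_parts.append(morph)
        else:
            affixes.append((morph, gloss))
    return "".join(stem_parts), affixes

# maaxws-xw-a: -xw is derivational, so the stem becomes maaxwsxw
print(resegment([("maaxws", "fallen.snow"), ("xw", "VAL"), ("a", "ATTR")]))
# ('maaxwsxw', [('a', 'ATTR')])
```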
Using Existing Annotation Standards
The annotation schemas employed in IGT are often idiosyncratic (Palmer et al., 2009;Comrie et al., 2015), which typically makes them better suited for language documentation than NLP tasks. When aiming to leverage IGT data for use in NLP tasks, we must then consider on a case-by-case basis whether it is more beneficial to convert the IGT data to an NLP-standard format, or work with the IGT annotations largely as-is, adapting them to our specific needs. Relevant to this decision are factors such as how labor-intensive the conversion will be, how well the standard format accommodates linguistic information that has been detailed in the IGT, and whether conversion of the dataset to the standard format aligns with specific project goals and speech community interests.
The candidate format that we considered for annotating inflection tables is the UniMorph standard (McCarthy et al., 2020a; Sylak-Glassman, 2016), a popular schema for annotation of inflectional morphology that can facilitate cross-lingual transfer by enabling language-independent annotations. Ultimately, we opted to adapt the Gitksan IGT to our specific needs after determining that conversion would be extremely labor-intensive, and that several types of information in the Gitksan IGT could not be represented in the UniMorph standard. We present three of the most significant issues:
1. Part-of-Speech The UniMorph standard relies on part-of-speech (POS) tags as a major component of word form annotation. However, POS information is frequently not annotated in IGT (Moeller et al., 2020), and no POS information was included in our Gitksan IGT.
For some underdocumented languages, POS information requires substantial experience and manual attention to annotate. For example, our target language Gitksan displays considerable category flexibility, meaning that syntactic and morphological behavior can cross word class boundaries. In Gitksan, the inflectional paradigms of nouns and verbs overlap substantially. As an example, agreement markers can affix to both nouns and verbs, conveying a number of functions. Some are exemplified in (2). As a consequence, in Gitksan it is difficult to use morphological inflection to deduce a lexeme's POS.
(2) Forms with -'y (1SG series II)
    a. hlguuhlxwi'y - my child (POSSR)
    b. yee'y - I walked (ABS)
    c. t'agi'y - x forgot me (ABS, dependent)
    d. t'agi'y - I forgot x (ERG)
In addition, Gitksan nouns and verbs are syntactically flexible, meaning that Gitksan nouns can function as verbs in text, and vice versa. For example, a noun ganaa'w 'frog' can be used predicatively without a copula in main verb position in the sentence Hlaa ap ganaa'wi'y 'I'm a frog now'. It takes absolutive inflection when it does so. Due to this morphological and syntactic flexibility, a 1SG-inflected noun like ganaa'wi'y could be annotated two ways in UniMorph depending on the context (frog;PSS1S versus frog;1SG;ABS 6 ), yet in the IGT, both are uniformly annotated as frog-1SG.II. Reviewing the contextual function of every noun and verb in the IGT dataset to apply the appropriate UniMorph tags would require an infeasible amount of expert reannotation.
2. Inflection vs. derivation UniMorph postulates a strict division into inflectional and derivational morphology (and only annotates inflectional morphology). The IGT format has no such division, because it can be used to represent morphology at any level of granularity the annotator wishes. We have mentioned in Section 3.1 that determining the difference between inflectional and derivational morphology from IGT input is non-trivial. For example, the Gitksan morpheme -xw has a variety of uses which might be considered more derivation-like (D) or more inflection-like (I):
• Creating intransitive predicates from nouns: osxw 'have a dog' from os 'dog' (D)
• Marking inchoatives: mitxw 'be full' vs. causative midin 'fill' (D)
• Marking passives: japxw 'be made' from transitive jap 'do, make' (D?)
• Marking verbs with certain preverbs: sik'ihl huutxw 'try to run away' vs. huut 'run away' (I?)
• Optional in some possessives: laxyipxwsi'm 'your.pl land' vs. laxyipsi'm 'your.pl land' (?)

This morpheme's uses and degree of productivity are still little-understood, so its status as inflectional or derivational remains unclear. 7 For now, we provisionally exclude this morpheme from our inflection tables as 'derivational'. In a UniMorph system, this morpheme's exclusion or inclusion in the annotation would constitute a prematurely strong claim about whether it was inflectional, and the tagset used to annotate it likewise a prematurely strong claim about its function.

3. Clitics Gitksan is rich in clitics, annotated with the equals sign '=' in IGT. Their attachment is determined by prosodic and linear factors. Prenominal clitics are illustrated in example (3).
(3) Giigwis Maryhl gayt.
    giikw-i[-t]=s Mary=hl gayt
    buy-TR-3.II=PN Mary=CN hat
    'Mary bought a hat.'
In the example above, the proper noun clitic =s attaches to the verb but is syntactically associated with Mary. The common noun clitic =hl attaches to Mary but is associated with gayt 'hat'. Since UniMorph does not annotate such cross-token dependencies (or other clitics), this central feature of Gitksan cannot be represented.
Recommendations Current computational morphology research relies heavily on standardized tagsets like UniMorph, in particular for cross-lingual transfer (Anastasopoulos and Neubig, 2019). However, these formats can be either labor-intensive or impossible to apply to underdocumented language datasets, depending on the idiosyncratic conventions of a given IGT and language-specific factors. Our understanding of the language may not be sufficiently mature to implement some of UniMorph's strict requirements, or important phenomena may fall outside of the defined scope of UniMorph. We recommend that NLP projects on underdocumented languages collaborate with language experts to determine where language-agnostic data formats can be applied, and to design project-specific data formats as needed.
Dialectal variation
Dialectal variation is a pervasive feature of languages worldwide, from English (consider African-American English and Standard American English; Blodgett et al., 2016) to Arabic (consider Modern Standard Arabic and the Doha dialect; Kumar et al., 2021). Many Indigenous languages of North America also exhibit vast dialectal variety, with significant variance in the level of mutual intelligibility between languages and dialects (Mithun, 2001, Ch.6).
Although Gitksan has an estimated fewer than 1K speakers, each village has a different way of speaking, and the speech community recognizes two salient dialects (Eastern/Upriver and Western/Downriver). Gitksan dialectal variation is typically reflected in written materials due to the lack of a widely-adopted orthographic standard which would 'flatten' it. 8 For many underdocumented languages, written orthographies have been in use for a relatively short period of time, and communities place different levels of emphasis on literacy and standardization versus conversational fluency. As a consequence, orthographic conventions can vary widely across dialects and writers in low-resource and underdocumented language contexts.
It is desirable in building inclusive language technology to accommodate and reflect variation, rather than aim to model a homogeneous standard form of the language. In building pedagogical resources for language revitalization, we furthermore need to mindfully consider potential data biases as well as what kinds of variation are presented to the user, to avoid implicitly suggesting that certain dialects are favored for preservation and teaching, which risks reinforcing or creating negative social hierarchies (Demszky et al., 2021).
The first step to ensuring dialectal fairness and appropriate handling of variation in NLP applications is to understand what types of variation are at play, and in particular what dialect a given token belongs to. This allows us to proactively control what data is presented to a user and, for example, ensure that data from different dialects is not mixed together inappropriately. This task is non-trivial: expertise in the language is crucial in order to determine what types of variation are dialectal, and which are idiosyncratic or purely orthographic, including typos and spelling errors. As an example from Gitksan, gat and get are highly salient East/West dialect variants, while hun and hon are less-salient variants within the Eastern dialect; amxsiwaa and amxsiiwaa are two nondialectal variants of the same word (spelling error/variant), while sipxw and siipxw are different lexemes. 9 Presently, we include all lexeme variants as separate entries in our inflection tables, enabling us to represent all dialects during training.
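A minimal sketch of how such expert judgments might be recorded and queried; the lexeme identifiers and labels are invented for illustration, and the classifications themselves would come from language experts.

```python
# Expert-curated variant bookkeeping: each surface form maps to a lexeme
# identifier plus a variant label. Identifiers like "LEX:gat" are invented.
VARIANTS = {
    "gat":       ("LEX:gat",       "dialect:east"),
    "get":       ("LEX:gat",       "dialect:west"),
    "hun":       ("LEX:hun",       "variant:eastern"),
    "hon":       ("LEX:hun",       "variant:eastern"),
    "amxsiwaa":  ("LEX:amxsiiwaa", "spelling"),
    "amxsiiwaa": ("LEX:amxsiiwaa", "spelling"),
    "sipxw":     ("LEX:sipxw",     "lexeme"),  # distinct lexeme, not a variant
    "siipxw":    ("LEX:siipxw",    "lexeme"),
}

def same_lexeme(a: str, b: str) -> bool:
    return VARIANTS[a][0] == VARIANTS[b][0]

print(same_lexeme("gat", "get"))       # True: dialectal variants
print(same_lexeme("sipxw", "siipxw"))  # False: different lexemes
```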
Recommendations Distinguishing between different types of variation in the source material is a challenging task but also a crucial one. Expertise in the target language and dialects is required for classifying types of variation, and so language experts are a vital asset for this process. Documentary linguists or community members may have direct information about the dialectal background of speakers that are represented in the data, which is useful for modeling, and will likely have information about how dialectal variation is viewed in the speech community (e.g. it may be highly politicized), which is important for application design.

8 Linguistic description frequently aims to record dialectal and even speaker-level variation. Our datasets are based on IGT data which explicitly annotates such variation in the orthographic representation.

9 In IGT the gloss cannot always be used to differentiate lexemes. Depending on the convention, the same lexeme may appear with different glosses in different contexts (e.g. 'wa: 'find' or 'reach'), and different lexemes may have the same gloss (e.g. yook and gup: 'eat', which differ on other grounds, namely transitivity). The latter forms which share a gloss must also be differentiated as lexical variants, not dialectal variants.
Variation is not only an important issue when constructing datasets. It is also essential to evaluate the final model's performance according to the principle of dialectal fairness (Choudhury and Deshpande, 2021). Recently, measures for dialect fairness have emerged in the NLP community: Faisal et al. (2021) and Kumar et al. (2021) advocate for computing performance separately for each dialect rather than computing a single macro-average performance figure over distinct dialects. They also propose to use the standard deviation between system performance on different dialects and the generalized entropy index (Speicher et al., 2018) as measures of dialectal unfairness, which we naturally want to minimize.
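A small sketch of per-dialect evaluation in this spirit; the dialect labels and toy examples are invented, and the generalized entropy index of Speicher et al. (2018) could be computed from the same per-example indicators.

```python
from collections import defaultdict
from statistics import stdev

def per_dialect_accuracy(examples):
    """examples: iterable of (dialect, gold form, predicted form) triples."""
    hits, totals = defaultdict(int), defaultdict(int)
    for dialect, gold, pred in examples:
        totals[dialect] += 1
        hits[dialect] += int(gold == pred)
    return {d: hits[d] / totals[d] for d in totals}

scores = per_dialect_accuracy([
    ("east", "gat", "gat"), ("east", "hun", "hon"),
    ("west", "get", "get"), ("west", "get", "get"),
])
print(scores)                  # {'east': 0.5, 'west': 1.0}
print(stdev(scores.values()))  # dispersion across dialects as an unfairness signal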
Steps toward Building a Language Learning Application
The inflectional paradigms collected from the adapted IGT corpus are overly sparse for automatically generating pedagogical exercises. To automatically fill in these paradigms, an example of which is shown in Appendix B, we train and evaluate a morphological reinflection system. 10
Data We train and test reinflection models on the Gitksan morphological paradigms described in Section 3. We generate three splits of the data from our complete set of paradigms: train (N = 858 word forms), validation (N = 302 word forms), and test (N = 124 word forms).
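The splitting procedure itself is not spelled out here, so the following is only a generic sketch of one way to hold out word forms at the level of paradigm cells; the proportion and seed are arbitrary.

```python
import random

def split_cells(tables, held_out=0.3, seed=13):
    """Hold out a fraction of attested cells from each table for testing."""
    rng = random.Random(seed)
    train, test = {}, {}
    for stem, table in tables.items():
        cells = list(table)
        rng.shuffle(cells)
        k = int(held_out * len(cells)) if len(cells) > 1 else 0
        test[stem] = {c: table[c] for c in cells[:k]}
        train[stem] = {c: table[c] for c in cells[k:]}
    return train, test
```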
Training We form training pairs by using the given forms in each table and learn to reinflect each given form in a table to another given form in the same table, following Silfverberg and Hulden (2018). Model parameters are shown in Appendix C.
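A compact sketch of this pair construction, reusing the simplified cell representation from the earlier sketches; the example table is hypothetical.

```python
from itertools import permutations

def training_pairs(table):
    """table: {cell: surface form} for one lexeme; yields one example per
    ordered pair of attested forms, as in Silfverberg and Hulden (2018)."""
    for (src_cell, src), (tgt_cell, tgt) in permutations(table.items(), 2):
        # input: source form plus the target cell's tags; output: target form
        yield (src, tgt_cell), tgt

table = {("ATTR",): "maaxwsxwa", ("BARE",): "maaxwsxw"}
for (src, cell), tgt in training_pairs(table):
    print(src, cell, "->", tgt)
```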
Evaluation During test time, we predict forms for missing slots based on each of the given forms in the table and take a majority vote of the predictions. We evaluate accuracy on the test set by counting the number of the 124 forms that were correctly predicted. We find that the Transformer model generates 87.09% of the test forms correctly.
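A sketch of the voting and scoring procedure; model_predict is a stand-in for the trained Transformer, not an actual API.

```python
from collections import Counter

def fill_cell(table, target_cell, model_predict):
    """Reinflect into target_cell from every attested form; majority vote."""
    votes = Counter(model_predict(src, target_cell) for src in table.values())
    return votes.most_common(1)[0][0]

def cell_accuracy(gold, predicted):
    """Fraction of held-out forms predicted exactly (the 87.09% figure)."""
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)
```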
Analysis Our model provides strong performance when measured by the standard metric of accuracy, in particular considering that it is trained on only 858 examples. Accuracy, however, provides only one perspective on the efficacy of the model (Ethayarajh and Jurafsky, 2020). The appropriate evaluation of the system is highly context-dependent: for our goal of generating language learning exercises, we want to evaluate whether our system and automatically generated grammar exercises allow for more effective language learning; raw accuracy offers little insight into the effectiveness of the system for this goal. If, in contrast, our goal were to facilitate language documentation, we would want to evaluate whether the model gives an overall significant reduction in documentation effort; this largely depends on whether the automatic annotations are of sufficient quality that correcting remaining errors takes less time than annotating all the data from scratch. Further research, in collaboration with documentary linguists and the speech community, is required to determine whether our system can achieve the desired goals of building more practical, inclusive language technology.
Discussion
Incorporating IGT data for NLP Language documentation provides a valuable data source for many so-called "left-behind" languages (Joshi et al., 2020), which lack traditional annotated and unannotated NLP datasets. For example, IGT data can be used to train systems for morphological inflection, segmentation and automatic glossing, among other applications. Nevertheless, the annotations in IGT are rarely ideally suited for typical NLP tasks, and may need to be significantly adapted. This will typically be hard without extensive knowledge of the target language and the annotation conventions which were employed when the IGT data were generated. Linguists and community language experts are well-positioned to address questions related to IGT usability, the structure of the target language, variation in the data, and other annotations in the source data. Collaboration with language experts is not only vital for successful data preprocessing and conversion to the formats required for the typical NLP task, but can also naturally help define research goals and drive the project toward them.

Inclusive Research Goals NLP technologies for underdocumented languages have the capacity to speed up language documentation (e.g., Anastasopoulos, 2019), assist language revitalization (e.g., Rijhwani et al., 2020; Lane and Bird, 2020), and create digital infrastructure (e.g., Anastasopoulos and Neubig, 2019). These high-level goals are only a part of what it may mean to create inclusive language technology. Equally valuable as a research goal may be inclusion: for speech communities to be acknowledged and engaged in the course of the research project. 11 We encourage NLP projects on low-resource, minoritized, and/or endangered languages to begin by understanding the speech community context, proceed with community collaboration or endorsement, and ultimately produce concrete benefits that speech communities recognize. This might include outcomes for language teaching and pedagogy, or training opportunities in technology or research. Evaluation methods can be compiled which address NLP researchers', linguists', and communities' overlapping and divergent goals. For example, pedagogical tools can be directly evaluated for dialect fairness and user/learner improvement.

Practical Collaboration We suggest seeking out opportunities to collaborate directly with community members, in order to solicit their specific expertise when setting the research agenda (i.e. task definition) and conducting evaluation (Czaykowska-Higgins, 2009; Bird, 2020). When the NLP researcher has no existing contact or history with the speech community, this can be pursued via collaboration with a documentary linguist with established community relationships and a similar desire to engage in this research model. Recognize that in any collaboration, different individuals contribute different skills and experience (e.g. pedagogy, annotation, knowledge of community attitudes) and may have different goals and preferred ways of participating, which should simply be discussed within the partnership to ensure things run smoothly.

Research accessibility In discussing inclusive language technologies, we also consider the accessibility of NLP workshops to speech communities, in particular where venues have a dedicated focus on low-resource languages. We note that such venues are often inaccessible to communities due to factors such as the cost of registration. Similarly-oriented workshops in linguistics (e.g.
SAIL, WS-CLA, family-specific conferences) typically have a tiered registration structure enabling community members to attend for free or minimal cost (e.g. $25). It is worth recognizing that community members are research stakeholders, and ensuring that venues are open to their participation.
Conclusion
Although a majority of the world's languages lack the kind of large annotated and massive unannotated datasets which are used to train modern NLP models for high-resource languages like English (Joshi et al., 2020; Blasi et al., 2021), many languages have other potential data sources, such as language documentation data, which so far have remained under-explored. However, care must be taken when applying this type of data, which was originally not intended for NLP use. This is important to ensure that the resulting technologies actually achieve their intended goals, like accelerated language documentation or genuinely helpful computer-assisted language learning.
Collaboration with linguists can provide the expertise necessary to engage in modeling with IGT data for underdocumented languages. Linguists can help define an NLP task with good value propositions, given their familiarity and connections with the speech community. They can provide guidance on navigating the IGT format so that we can extract the most useful information for the task at hand. Finally, they can assist in evaluating whether the model achieves appropriate performance on the speech community use cases, and provide feedback on metrics for model success and fairness across dialects. Throughout the development process, documentary linguists and speech community members should be consulted. This will foster a greater understanding of the source data and lead to more equitable and effective technologies.
Figure 1: An example of Gitksan interlinear glossed text (IGT). The text contains four levels of annotation: (1) an orthographic transcription, (2) a segmentation into normalized component morphemes (CVC refers to the reduplicated segment al'), (3) an interlinear gloss, and (4) an English translation.
Figure 2: A roadmap for incorporating Interlinear Glossed Text (IGT) data for building more inclusive language technology. (1) We first need to define what NLP tasks can be accomplished with a given set of IGT data and whether they are valuable to the speech community (see Section 2.3). (2) Next, we need to gather the relevant IGT data that was created during linguistic fieldwork with the speech community (see Section 2.3). (3) Next, the IGT data needs to be converted to a structured format amenable to NLP. (4) The model needs to be evaluated not only in terms of standard NLP model selection metrics but also for efficacy for end-users, such as efficiency in time-savings and usability (see Section 4). Crucially, all three stakeholders (speech community members, NLP researchers, and linguists) should be involved throughout the process.
Damián Blasi, Antonios Anastasopoulos, and Graham Neubig. 2021. Systematic inequalities in language technology performance across the world's languages. arXiv preprint arXiv:2110.06733.
James P. Blevins, Petar Milin, and Michael Ramscar. 2017. The Zipfian paradigm cell filling problem. In Perspectives on Morphological Organization, pages 139-158. Brill.
Su Lin Blodgett, Lisa Green, and Brendan O'Connor. 2016. Demographic dialectal variation in social media: A case study of African-American English. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing.
Monojit Choudhury and Amit Deshpande. 2021. How linguistically fair are multilingual pre-trained language models? In Proceedings of the AAAI Conference on Artificial Intelligence.
Bernard Comrie, Martin Haspelmath, and Balthasar Bickel. 2015. Leipzig glossing rules. Conventions for Interlinear Morpheme-by-Morpheme Glosses. Leipzig: Max Planck Institute for Evolutionary Anthropology.
Ewa Czaykowska-Higgins. 2009. Research models, community engagement, and linguistic fieldwork: Reflections on working within Canadian Indigenous communities. Language Documentation & Conservation, 3(1):182-215.
Dorottya Demszky, Devyani Sharma, J. Clark, Vinodkumar Prabhakaran, and Jacob Eisenstein. 2021. Learning to recognize dialect features. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Joel Dunham, Gina Cook, and Joshua Horner. 2014. LingSync & the online linguistic database: New models for the collection and management of data for language communities, linguists and language learners. In Proceedings of the 2014 Workshop on the Use of Computational Methods in the Study of Endangered Languages.
Britt Dunlop, Suzanne Gessner, Tracey Herbert, and Aliana Parker. 2018. Report on the status of BC First Nations languages. Report of the First Peoples' Cultural Council.
Abteen Ebrahimi and Katharina Kann. 2021. How to adapt your pretrained multilingual model to 1600 languages. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.
Kawin Ethayarajh and Dan Jurafsky. 2020. Utility is in the eye of the user: A critique of NLP leaderboards. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
Fahim Faisal, Sharlina Keshava, Md Mahfuz Ibn Alam, and Antonios Anastasopoulos. 2021. SD-QA: Spoken dialectal question answering for the real world. In Findings of the Association for Computational Linguistics: EMNLP 2021.
Clarissa Forbes, Henry Davis, Michael Schwan, and the UBC Gitksan Research Laboratory. 2017. Three Gitksan texts. In Papers for the 52nd International Conference on Salish and Neighbouring Languages, pages 47-89. UBC Working Papers in Linguistics.
Clarissa Forbes, Garrett Nicolai, and Miikka Silfverberg. 2021. An FST morphological analyzer for the Gitksan language. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.
Ryan Georgi. 2014. From Aari to Zulu: Massively multilingual creation of language tools using interlinear glossed text. Ph.D. thesis, University of Washington.
Martin Haspelmath, Matthew S. Dryer, David Gil, and Bernard Comrie. 2005. The World Atlas of Language Structures. Oxford University Press.
Lonnie Hindle and Bruce Rigsby. 1973. A short practical dictionary of the Gitksan language. In Northwest Anthropological Research Notes, volume 7(1). NARN Inc.
Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Sachin Kumar, Antonios Anastasopoulos, Shuly Wintner, and Yulia Tsvetkov. 2021. Machine translation into low-resource language varieties. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing.
William Lane and Steven Bird. 2020. Bootstrapping techniques for polysynthetic morphological analysis. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
William D. Lewis and Fei Xia. 2010. Developing ODIN: A multilingual repository of annotated language data for hundreds of the world's languages. Literary and Linguistic Computing, 25(3):303-319.
Patrick Littell, Anna Kazantseva, Roland Kuhn, Aidan Pine, Antti Arppe, Christopher Cox, and Marie-Odile Junker. 2018. Indigenous language technologies in Canada: Assessment, challenges, and successes. In Proceedings of the 27th International Conference on Computational Linguistics.
3 These systems are well motivated but unlikely to be easily comparable with other annotation schemas.
4 Due to the Zipfian distribution of language (Blevins et al., 2017).
5 For an English analogue, consider splitting the lexicalized verb enforce into a prefix en- and root force. The en- prefix is recognizable, but not productive or relevant to inflection tasks.
6 Other clause type features would be required here but it remains unclear how best to represent Gitksan's clause-typing system with UniMorph labels.
7 Elsewhere, some linguistic descriptions present cases of morphology which do not fit into conventional delineations of the inflectional/derivational divide, such as plural/pluractional markers in Halkomelem Salish (Wiltschko, 2008).
10 Code and data for this experiment are available at https://github.com/smfsamir/gitksan-data.
11 Underdocumented languages are often the cultural heritage of typically marginalized peoples, sometimes with a history of their data being exploited for political or commercial purposes. NLP research without community involvement may feel like a continuation of this pattern.
Acknowledgements

We want to thank Henry Davis, Lisa Matthewson and the Gitksan research lab at the Department of Linguistics at UBC for generous help with this project and access to Gitksan IGT data. We also want to thank the anonymous reviewers for valuable comments, and Samantha Quinto for assisting with visual design. This research was supported by funding from the National Endowment for the Humanities (Documenting Endangered Languages Fellowship) and the Social Sciences and Humanities Research Council of Canada (Grant 430-2020-00793). Any views/findings/conclusions expressed in this publication do not necessarily reflect those of the NEH, NSF or SSHRC.

A Sample IGT data

The first four lines of a sample text from the Gitksan interlinear glossed text corpus. This example is revised from initial publication in Forbes et al. (2017). Not long after these people arrived, they gathered together the people of Kitwancool. The plan of the so-called government was that they will have Indian people live on a so-called reserve.

B Sample inflection table

A Gitksan inflection table for 'wa ('to find, reach') generated from IGT and displayed in TSV format. Many cells in the table are empty since they were unattested in the IGT data.

C Fairseq parameters

Model We use the Fairseq (Ott et al., 2019) implementation of the Transformer (Vaswani et al., 2017). Both the encoder and decoder have 4 layers with 4 attention heads, an embedding size of 256 and hidden layer size of 512. We train with the Adam optimizer with an initial learning rate of 0.001. We chose the batch size (400) and maximum updates (20000) based on the highest accuracy on the development data.
Oliver Adams, Matthew Wiesner, Shinji Watanabe, and David Yarowsky. 2019. Massively multilingual adversarial speech recognition. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
Patrick Littell, Aidan Pine, and Henry Davis. 2017. Waldayu and Waldayu Mobile: Modern digital dictionary interfaces for endangered languages. In Proceedings of the 2nd Workshop on the Use of Computational Methods in the Study of Endangered Languages, pages 141-150.
Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.
Manuel Mager, Ximena Gutierrez-Vasques, Gerardo Sierra, and Ivan Meza-Ruiz. 2018. Challenges of language technologies for the indigenous languages of the Americas. In Proceedings of the 27th International Conference on Computational Linguistics.
Manuel Mager, Arturo Oncevay, Annette Rios, Ivan Vladimir Meza Ruiz, Alexis Palmer, Graham Neubig, and Katharina Kann, editors. 2021. Proceedings of the First Workshop on Natural Language Processing for Indigenous Languages of the Americas.
Arya D. McCarthy, Christo Kirov, Matteo Grella, Amrit Nidhi, Patrick Xia, Kyle Gorman, Ekaterina Vylomova, Sabrina J. Mielke, Garrett Nicolai, Miikka Silfverberg, et al. 2020a. UniMorph 3.0: Universal Morphology. In Proceedings of the 12th Language Resources and Evaluation Conference.
Arya D. McCarthy, Rachel Wicks, Dylan Lewis, Aaron Mueller, Winston Wu, Oliver Adams, Garrett Nicolai, Matt Post, and David Yarowsky. 2020b. The Johns Hopkins University Bible corpus: 1600+ tongues for typological exploration. In Proceedings of the 12th Language Resources and Evaluation Conference.
Angelina McMillan-Major. 2020. Automating gloss generation in interlinear glossed text. Proceedings of the Society for Computation in Linguistics, 3(1):338-349.
Marianne Mithun. 2001. The Languages of Native North America. Cambridge University Press.
Sarah Moeller and Mans Hulden. 2018. Automatic glossing in a low-resource setting for language documentation. In Proceedings of the Workshop on Computational Modeling of Polysynthetic Languages.
Sarah Moeller, Ling Liu, Changbing Yang, Katharina Kann, and Mans Hulden. 2020. IGT2P: From interlinear glossed texts to paradigms. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, pages 5251-5262.
Garrett Nicolai and David Yarowsky. 2019. Learning morphosyntactic analyzers from the Bible via iterative annotation projection across 26 languages. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics.
Myle Ott, Sergey Edunov, Alexei Baevski, Angela Fan, Sam Gross, Nathan Ng, David Grangier, and Michael Auli. 2019. fairseq: A fast, extensible toolkit for sequence modeling.
Alexis Palmer, Taesun Moon, and Jason Baldridge. 2009. Evaluating automation strategies in language documentation. In Proceedings of the NAACL HLT 2009 Workshop on Active Learning for Natural Language Processing.
Bruce Rigsby. 1986. Gitxsan Grammar. University of Queensland.
Shruti Rijhwani, Antonios Anastasopoulos, and Graham Neubig. 2020. OCR post-correction for endangered language texts. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing.
Miikka Silfverberg and Mans Hulden. 2018. An encoder-decoder approach to the paradigm cell filling problem. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Till Speicher, Hoda Heidari, Nina Grgic-Hlaca, Krishna P. Gummadi, Adish Singla, Adrian Weller, and Muhammad Bilal Zafar. 2018. A unified approach to quantifying algorithmic unfairness: Measuring individual & group unfairness via inequality indices. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining.
John Sylak-Glassman. 2016. The composition and use of the universal morphological feature schema (UniMorph schema). Johns Hopkins University.
Daan van Esch, Ben Foley, and Nay San. 2019. Future directions in technological support for language documentation. In Proceedings of the Workshop on Computational Methods for Endangered Languages.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Ekaterina Vylomova, Elizabeth Salesky, Sabrina Mielke, Gabriella Lapesa, Ritesh Kumar, Harald Hammarström, Ivan Vulić, Anna Korhonen, Roi Reichart, Edoardo Maria Ponti, and Ryan Cotterell, editors. 2021. Proceedings of the Third Workshop on Computational Typology and Multilingual NLP.
Monica Ward and Josef Genabith. 2003. CALL for endangered languages: Challenges and rewards. Computer Assisted Language Learning, 16(2-3):233-258.
David Allen Wax. 2014. Automated grammar engineering for verbal morphology. Master's thesis, University of Washington.
Martina Wiltschko. 2008. The syntax of non-inflectional plural marking. Natural Language and Linguistic Theory, 26(3):639-694.
Winston Wu, Garrett Nicolai, and David Yarowsky. 2020. Multilingual dictionary based construction of core vocabulary. In Proceedings of the 12th Language Resources and Evaluation Conference.
Olga Zamaraeva. 2016. Inferring morphotactics from interlinear glossed text: Combining clustering and precision grammars. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.
Xingyuan Zhao, Satoru Ozaki, Antonios Anastasopoulos, Graham Neubig, and Lori Levin. 2020. Automatic interlinear glossing for under-resourced languages leveraging translations. In Proceedings of the 28th International Conference on Computational Linguistics.
| [] |
[
"Translating Web Search Queries into Natural Language Questions",
"Translating Web Search Queries into Natural Language Questions"
] | [
"Adarsh Kumar adkuma@microsoft.com \nAI & Research\nMicrosoft Hyderabad\nIndia\n",
"Sandipan Dandapat sadandap@microsoft.com \nAI & Research\nMicrosoft Hyderabad\nIndia\n",
"Sushil Chordia sushilc@microsoft.com \nAI & Research\nMicrosoft Hyderabad\nIndia\n"
] | [
"AI & Research\nMicrosoft Hyderabad\nIndia",
"AI & Research\nMicrosoft Hyderabad\nIndia",
"AI & Research\nMicrosoft Hyderabad\nIndia"
] | [] | Users often query a search engine with a specific question in mind and often these queries are keywords or sub-sentential fragments. In this paper, we propose a method to generate a well-formed natural language question from a given keyword-based query, which has the same question intent as the query. Conversion of a keyword-based web query into a well-formed question has many applications in search engines, Community Question Answering (CQA) websites and bot communication. We found a synergy between the query-to-question problem and the standard machine translation (MT) task. We have used both Statistical MT (SMT) and Neural MT (NMT) models to generate questions from queries. We have observed that MT models perform well in terms of both automatic and human evaluation. | null | [
"https://www.aclweb.org/anthology/L18-1151.pdf"
] | 21,706,647 | 2002.02631 | e05645571dfabb5fe03789dcd75b685de3acca10 |
Translating Web Search Queries into Natural Language Questions
Adarsh Kumar adkuma@microsoft.com
AI & Research
Microsoft Hyderabad
India
Sandipan Dandapat sadandap@microsoft.com
AI & Research
Microsoft Hyderabad
India
Sushil Chordia sushilc@microsoft.com
AI & Research
Microsoft Hyderabad
India
Translating Web Search Queries into Natural Language Questions
Natural Language Generation, Machine Translation, NLP
Users often query a search engine with a specific question in mind and often these queries are keywords or sub-sentential fragments. In this paper, we propose a method to generate a well-formed natural language question from a given keyword-based query, which has the same question intent as the query. Conversion of a keyword-based web query into a well-formed question has many applications in search engines, Community Question Answering (CQA) websites and bot communication. We found a synergy between the query-to-question problem and the standard machine translation (MT) task. We have used both Statistical MT (SMT) and Neural MT (NMT) models to generate questions from queries. We have observed that MT models perform well in terms of both automatic and human evaluation.
Introduction
Search engines have improved a lot in the last decade in all aspects. Earlier, the primary task of a search engine was to extract the most relevant links for the query and present them as results. Lately, instead of just giving relevant links related to the query, search engines are trying to directly answer any question asked. For example, for the query "japan's capital", modern search engines (e.g. Bing and Google) directly answer "Tokyo", instead of providing a link containing the answer. Thus, search engines are evolving to save time for users and increase their productivity. To further enhance the user experience and increase productivity, search engines, apart from showing the answer for a particular question, are trying to show related questions, to help users in their exploration. For example, for the query "fever symptoms", users mostly want an answer to the question "What are the symptoms of fever?", and for the same query, questions like "How do you treat fever?" and "What causes high fever?" are highly related. To show related questions, search engines need to have a well-framed question corpus from which they can extract relevant questions given a query. White et al. (2015) have shown that more than 10% of queries issued on a search engine have question intent whereas only 3% of them are formulated as natural language questions. Most of these queries are primarily keywords or sentence fragments. Hence, a corpus of questions cannot be created directly from the search queries with question intent due to issues of grammatical correctness and incomplete sentence formation. To overcome this problem, we propose a technique to convert a query with question intent into a well-formed question. This technique can be used to generate the well-formed questions asked by users, which can then be used by search engines. Apart from the direct application in search engines, query-to-question conversion has applications in Question Answering (QA) systems, bot communication, Community Question Answering (CQA) websites etc. In CQA websites, when users have typed some keywords to search for questions, one can generate the questions and help them in framing the question using the question corpus.
Digital assistants can use this technology to refine the intent of a query in natural language and help navigate the user to his/her exact needs. Query to question conversion was first suggested by Lin (2008), who pointed out its application in CQA websites and richer query expansion. Lin's idea was further extended by Zhao et al. (2011), who followed a template-based approach: they generate templates from query-question pairs from search logs and CQA websites and instantiate the template on the input query. At the same time, Zheng et al. (2011) also used a similar template-based technique. They generate templates from the questions collected from CQA websites, using single-variable templates which essentially replace a single word by some placeholder. Thus, the framework heavily relies on existing questions. Another similar work was done by Kalady et al. (2010), in which they derived questions from a well-formed sentence using parse trees and named entity recognition. Their system is limited to certain types of questions. Most of the techniques used to generate questions from queries are rule-based, and are limited by the variety of question rules/templates, grammatical correctness, relevance between the query and the generated question, etc. In this paper we propose a novel statistical approach to generate well-formed questions from search keywords. The primary contribution of our work is that we have reduced the problem of query to question conversion to a translation problem. Furthermore, we also show how to build a query-question parallel corpus from web search logs that retains users' intention between the query and question pair. Table 1 shows some of the extracted pairs. We have made a detailed comparison between different translation frameworks with respect to our problem.
Approach
The query to question generation problem can be formally stated as follows: given a sequence of query keywords $k = (k_1, k_2, \ldots, k_n)$, we want to generate the corresponding natural language question $q = (q_1, q_2, \ldots, q_m)$. This can be seen as a translation problem between a source language sentence k and a target language sentence q. Note that both k and q are in English: q is a syntactically and semantically correct sentence of the language, whereas k is a grammatically ill-formed query. In this work, we first use an SMT-based (Koehn et al., 2003) approach. We have used the widely adopted vanilla Moses 1 to build the SMT system. We consider this the baseline system and call it SMT.

Queries | Questions
fever symptoms | What are the symptoms of fever?
japan capital | What is the capital of japan?
string to int c# | How to convert string to int in C#?
cancer types | What are different types of cancer?

Table 1: Example of queries and related questions
We then use an NMT-based approach as described by (Bahdanau et al., 2014). Our NMT-based model uses a bidirectional RNN with an attention model (Bahdanau et al., 2014; Sutskever et al., 2014; Schuster and Paliwal, 1997). Given an input sequence k from the source language, i.e. queries, we want to generate a sequence q of the target language, i.e. questions, which has a similar question intent. We want to find the q which maximizes arg max_q p(q|k). We train a neural model which learns to maximize this conditional probability for the sequence pairs in our parallel training corpus. After the model is trained, given a sequence k from the source language, it generates a sequence q of the target language which maximizes the conditional probability.
Our neural machine translation model consists of an encoder and a decoder. The encoder learns a fixed-length representation for variable-length input sequences, and the decoder takes that fixed-length learned representation as input and generates the output sequence. For example, for an input sequence of vectors $k = (k_1, k_2, \ldots, k_n)$, the encoder encodes it into a fixed-dimension vector rep. In general, RNNs are used, such that:
$$h_t = f(k_t, h_{t-1}) \quad (1)$$
$$rep = z(h_1, h_2, \ldots, h_T) \quad (2)$$
$h_t$ is the hidden state at time t and $k_t$ is the input at time t. f and z are non-linear functions. In our model we use an LSTM (Hochreiter and Schmidhuber, 1997) for f and define z as in equation (3):
$$z(h_1, h_2, \ldots, h_t) = h_t \quad (3)$$
The encoder tries to store the context of the input sequence in the vector rep. During training, the decoder learns to maximize the conditional probability. The decoder defines a conditional probability over the translation sequence q as follows:
$$p(q) = \prod_{t=1}^{T} p(q_t \mid q_1, q_2, \ldots, q_{t-1}, rep) = \prod_{t=1}^{T} g(q_{t-1}, s_t, rep) \quad (4)$$
where $q = (q_1, q_2, \ldots, q_T)$ and g is non-linear. We use the attention model of (Bahdanau et al., 2014), in which the conditional probability changes to the following:

1 http://www.statmt.org/moses/
$$p(q_i \mid q_1, q_2, \ldots, q_{i-1}, k) = g(q_{i-1}, s_i, rep_i) \quad (5)$$
where $s_i$ is:
$$s_i = g(q_i, s_{i-1}, rep_i) \quad (6)$$
The context vector $rep_i$ is computed as:
$$rep_i = \sum_{j=1}^{T_x} \alpha_{ij} h_j \quad (7)$$
The weight $\alpha_{ij}$ of each annotation $h_j$ is computed by
$$\alpha_{ij} = \frac{\exp(e_{ij})}{\sum_{m=1}^{T_x} \exp(e_{im})} \quad (8)$$
where
$$e_{ij} = a(s_{i-1}, h_j) \quad (9)$$
This approach allows the decoder to decide which parts of the input to attend to. We have used a BiRNN, which has two functions, $\overrightarrow{f}$ and $\overleftarrow{f}$: $\overrightarrow{f}$ reads the input sequence from $k_1$ to $k_T$, i.e. in the usual order, and produces forward hidden states $(h_{f_1}, h_{f_2}, \ldots, h_{f_T})$, while $\overleftarrow{f}$ reads in the opposite direction, i.e. from $k_T$ to $k_1$, and generates backward hidden states $(h_{b_1}, h_{b_2}, \ldots, h_{b_T})$. At time t, we obtain the final hidden vector by concatenating the forward and backward hidden vectors at time t. In this way, the BiRNN helps store the context of not only the preceding words but also the following words.
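To make the encoder-attention data flow concrete, the following PyTorch sketch wires a bidirectional LSTM encoder to an additive (Bahdanau-style) attention module implementing Eqs. (7)-(9). It is an illustrative sketch, not our actual system; all module names and tensor sizes are our own choices.

```python
import torch
import torch.nn as nn

class BiRNNEncoder(nn.Module):
    """Bidirectional LSTM encoder: annotation h_t = [forward h_t ; backward h_t]."""
    def __init__(self, vocab_size, emb_dim=300, hidden=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, k):                      # k: (batch, T) token ids
        h, _ = self.rnn(self.embed(k))         # h: (batch, T, 2*hidden)
        return h

class AdditiveAttention(nn.Module):
    """e_ij = a(s_{i-1}, h_j); alpha = softmax(e); rep_i = sum_j alpha_ij * h_j."""
    def __init__(self, dec_hidden, enc_hidden):
        super().__init__()
        self.W_s = nn.Linear(dec_hidden, dec_hidden, bias=False)
        self.W_h = nn.Linear(enc_hidden, dec_hidden, bias=False)
        self.v = nn.Linear(dec_hidden, 1, bias=False)

    def forward(self, s_prev, enc_h):          # s_prev: (batch, dec_hidden)
        # Broadcast the decoder state over the T encoder annotations, Eq. (9).
        e = self.v(torch.tanh(self.W_s(s_prev).unsqueeze(1) + self.W_h(enc_h)))
        alpha = torch.softmax(e, dim=1)        # (batch, T, 1), Eq. (8)
        rep = (alpha * enc_h).sum(dim=1)       # context vector, Eq. (7)
        return rep, alpha

# Tiny smoke test with made-up sizes.
enc = BiRNNEncoder(vocab_size=1000)
attn = AdditiveAttention(dec_hidden=512, enc_hidden=1024)
tokens = torch.randint(0, 1000, (2, 7))        # batch of 2 queries, 7 tokens each
annotations = enc(tokens)
context, weights = attn(torch.zeros(2, 512), annotations)
print(context.shape, weights.shape)            # torch.Size([2, 1024]) torch.Size([2, 7, 1])
```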
Experimental Setup and Results
First, we conduct our baseline experiment using the Moses SMT system to compare its results with our NMT-based model. The Moses SMT system uses KenLM (Heafield et al., 2013) as the default language model and MERT (Och, 2003) to re-estimate the model parameters. For our NMT-based approach, we implemented a BiRNN model using LSTMs with attention. We used 2-layer deep LSTMs with 512 cells at each layer and kept the embedding dimension at 300. Our vocabulary for both the source and target language, i.e. queries and questions, had 150,000 words. We used stochastic gradient descent with an initial learning rate of 0.5 and a learning rate decay factor of 0.99. We kept the batch size at 128 and trained the model for a total of 6 epochs.
Data Used
In this context, parallel data refers to (k, q) pairs where k is a query with question intent and q is the corresponding natural language question with the same intent. We used Bing's web search logs to create our parallel data. Bing's search log stores three basic things:
• Queries (k) searched on Bing
• The URLs (U) which were shown for those queries on the search result page
• The URL (u ∈ U) which was clicked by the user for the respective query
We filtered all the queries (k) which landed on a CQA page containing some question (q) and its answer. We extracted the question (q) from the clicked CQA page and created the pair (k, q) for our dataset. Our hypothesis was that, after querying a search engine, users click on the links they find satisfactory, so queries (k) after which a user clicks on a page containing a question (q) can be assumed to have question intent. To make sure the questions in our dataset are grammatically correct, we only considered reputed CQA websites like WikiAnswers, 2 Quora, 3 and Yahoo Answers, 4 the hypothesis being that moderators on these CQA websites are strict in maintaining quality questions. We only kept (k, q) pairs in which the query (k) had fewer than 10 words, to avoid garbage queries. We also made sure to select only those (k, q) pairs in which the question started with either a "wh" word or another question word (e.g. what, where, who, how, is, can, did, list, are, etc.). After all this filtering, we were left with around 13 million query-question pairs (k, q). We used 5,000 randomly drawn sentences for the test and development sets (2,500 sentences each), disjoint from the training data. We found that around 50% of the queries have fewer than 5 words. The average lengths of the query and the question are 5.6 and 8.5 words, respectively. Also, 85% of the questions are of the "what" (53%), "how" (21%), "is" (6%) and "who" (5%) types. The sketch below illustrates this filtering.
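The following Python sketch illustrates the filtering rules above on toy data. The log schema and helper names (SearchLogEntry, keep_pair) are our own assumptions; the real pipeline runs over Bing's logs, whose format is not public.

```python
from dataclasses import dataclass

CQA_DOMAINS = {"answers.wikia.com", "quora.com", "answers.yahoo.com"}
QUESTION_WORDS = ("what", "where", "who", "how", "is", "can", "did", "list", "are")

@dataclass
class SearchLogEntry:
    query: str          # k: the issued query
    clicked_url: str    # u: the URL the user clicked
    page_question: str  # q: question scraped from the clicked CQA page

def keep_pair(entry: SearchLogEntry) -> bool:
    domain = entry.clicked_url.split("/")[2]
    if not any(domain.endswith(d) for d in CQA_DOMAINS):
        return False                                  # only reputed CQA sites
    if len(entry.query.split()) >= 10:
        return False                                  # drop garbage queries
    first = entry.page_question.strip().lower().split()[0]
    return first in QUESTION_WORDS                    # must look like a question

log = [SearchLogEntry("fever symptoms",
                      "https://answers.yahoo.com/question/123",
                      "What are the symptoms of fever?")]
pairs = [(e.query, e.page_question) for e in log if keep_pair(e)]
print(pairs)  # [('fever symptoms', 'What are the symptoms of fever?')]
```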
Results
In order to evaluate the performance of our system, we used the most widely used MT evaluation metric, BLEU (Papineni et al., 2002). BLEU uses modified n-gram precision between the hypothesis and the reference; its value ranges from 0 to 100. First, in order to estimate the difficulty of the task, we conducted an experiment (we shall call it the Identity Model) in which we replicated the input as the hypothesis translation, since both the source query and the target question are in English. This gives a BLEU score of 19.33, due to the large amount of vocabulary overlap between a query and its corresponding question. The sketch below reproduces this identity evaluation on toy data.
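A minimal sketch of the identity-model evaluation with NLTK's corpus BLEU follows. The paper does not state which BLEU implementation was used, so the scorer, smoothing and tokenization here are our own choices.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

queries = ["fever symptoms", "japan capital"]
questions = ["What are the symptoms of fever ?", "What is the capital of japan ?"]

hypotheses = [k.lower().split() for k in queries]      # identity model: hypothesis = input
references = [[q.lower().split()] for q in questions]  # one reference per hypothesis

smooth = SmoothingFunction().method1                   # avoid zero n-gram counts on short texts
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"Identity-model BLEU: {100 * score:.2f}")       # reported on a 0-100 scale
```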
The baseline SMT gives a BLEU score of 52.49, while the NMT system achieves a BLEU score of 58.63, an absolute improvement of 6.14 BLEU points over the SMT system. Both the SMT and NMT systems improve significantly over the identity model. The high BLEU scores (> 50) of both models are partly due to the overlap between query and question keywords (as reflected in the BLEU score of the identity model).
Human Evaluation
We conducted a human evaluation to judge the quality of the generated output. We manually evaluated approximately 1,000 query/question pairs with the help of 12 people (each with more than 5 years of experience using search engines). For each query and generated output pair, we asked participants the following questions:
• Is the question grammatically correct?
• How similar is the intent between query and generated output?
The first question was a yes-no question; for the second question, participants were asked to judge the question intent similarity between the pair on a scale of 1-5, with 5 being highly similar. In terms of grammatical correctness, around 63% of the outputs generated by SMT were grammatically correct, while with NMT almost 86% of the outputs were grammatically correct. SMT often makes errors due to an incorrect choice of question words, as shown in the examples in Table 2; it often chooses "what" because of its high frequency in the corpus (cf. Section 3.1). In terms of intent similarity, around 72% of the questions generated by the NMT model received a very high score (4 or 5) from the human evaluators, compared to only 45% for SMT. Figure 3 shows the distribution of scores both models received from the human evaluators. We observed that the NMT model performed better than the baseline SMT both in terms of BLEU score and human judgement.

Query | Generated Question by SMT | Generated Question by NMT | Golden Truth
grams in 1 lb | how many grams are in 1 lb? | how many grams are in 1 pound? | how many grams are in 1 pound?
anesthesiologist salary dubai | what is the salary of an anesthesiologist in dubai? | what is the salary of an anesthesiologist in dubai? | how much does an anesthesiologist make in dubai?
richest man in kansas | what is the richest men in kansas? | who is the richest man in kansas? | Who is the most rich man of kansas?
small bone in human body located | what is the small bone in the body located? | where is the smallest bone in human body located? | where is the smallest bone in human body located?
first woman rapper | what was the first woman in the rapper | who was the first woman rapper? | who was the first woman rapper?

Table 2: System Generated Output Produced by Different Models
Conclusions
In this paper we have described a machine-translation-based approach for the automatic generation of well-formed questions from keyword-based queries. We used parallel data automatically extracted from search logs to train the models. Our experiments show that NMT models work better than the baseline statistical model. The present model generates the most likely question from a search query that has explicit question intent. For future work, we wish to also add text from the search result page as input along with the raw query, the assumption being that this text will provide more contextual information about the query.
Fig. 1 plots the Query Length Distribution and Fig. 2 plots the percentage of different types of questions in our dataset.

[Figure 1: Query Length Distribution. Figure 3: Query Intent Similarity Score Distribution.]

2 https://answers.wikia.com/wiki/Wikianswers
3 https://www.quora.com
4 https://in.answers.yahoo.com/
Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.

Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.

Heafield, K., Pouzyrevsky, I., Clark, J. H., and Koehn, P. (2013). Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 690-696, Sofia, Bulgaria.

Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735-1780.

Kalady, S., Elikkottil, A., and Das, R. (2010). Natural language question generation using syntax and keywords. In Proceedings of QG2010: The Third Workshop on Question Generation, pages 1-10. questiongeneration.org.

Koehn, P., Och, F. J., and Marcu, D. (2003). Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, Volume 1, pages 48-54. Association for Computational Linguistics.

Lin, C.-Y. (2008). Automatic question generation from queries. In Workshop on the Question Generation Shared Task, pages 156-164.

Och, F. J. (2003). Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, Volume 1, pages 160-167. Association for Computational Linguistics.

Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311-318. Association for Computational Linguistics.

Schuster, M. and Paliwal, K. K. (1997). Bidirectional recurrent neural networks. IEEE Transactions on Signal Processing, 45(11):2673-2681.

Sutskever, I., Vinyals, O., and Le, Q. V. (2014). Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112.

White, R. W., Richardson, M., and Yih, W.-t. (2015). Questions vs. queries in informational search tasks. In Proceedings of the 24th International Conference on World Wide Web, pages 135-136. ACM.

Zhao, S., Wang, H., Li, C., Liu, T., and Guan, Y. (2011). Automatically generating questions from queries for community-based question answering. In IJCNLP, pages 929-937.

Zheng, Z., Si, X., Chang, E. Y., and Zhu, X. (2011). K2Q: Generating natural language questions from keywords with user refinements. In IJCNLP, pages 947-955.
| [] |
[
"An Investigation of Recurrent Neural Architectures for Drug Name Recognition",
"An Investigation of Recurrent Neural Architectures for Drug Name Recognition"
] | [
"Raghavendra Chalapathy \nUniversity of Sydney\nJ12/1 Cleveland St2008DarlingtonNSW\n",
"Ehsan Zare Borzeshi ezborzeshi@cmcrc.com \nCapital Markets\nCRC 3/55 Harrington St2000SydneyNSW\n",
"Massimo Piccardi massimo.piccardi@uts.edu.au \nUniversity of Technology Sydney\nPO Box 1232007BroadwayNSW\n"
] | [
"University of Sydney\nJ12/1 Cleveland St2008DarlingtonNSW",
"Capital Markets\nCRC 3/55 Harrington St2000SydneyNSW",
"University of Technology Sydney\nPO Box 1232007BroadwayNSW"
] | [] | Drug name recognition (DNR) is an essential step in the Pharmacovigilance (PV) pipeline. DNR aims to find drug name mentions in unstructured biomedical texts and classify them into predefined categories. State-of-the-art DNR approaches heavily rely on hand-crafted features and domain-specific resources which are difficult to collect and tune. For this reason, this paper investigates the effectiveness of contemporary recurrent neural architecturesthe Elman and Jordan networks and the bidirectional LSTM with CRF decoding -at performing DNR straight from the text. The experimental results achieved on the authoritative SemEval-2013 Task 9.1 benchmarks show that the bidirectional LSTM-CRF ranks closely to highly-dedicated, hand-crafted systems. | 10.18653/v1/w16-6101 | null | 971,589 | 1609.07585 | bca032d849696da511be4fa8dcca441ca2bcc400 |
An Investigation of Recurrent Neural Architectures for Drug Name Recognition
24 Sep 2016
Raghavendra Chalapathy
University of Sydney
J12/1 Cleveland St2008DarlingtonNSW
Ehsan Zare Borzeshi ezborzeshi@cmcrc.com
Capital Markets
CRC 3/55 Harrington St2000SydneyNSW
Massimo Piccardi massimo.piccardi@uts.edu.au
University of Technology Sydney
PO Box 1232007BroadwayNSW
Drug name recognition (DNR) is an essential step in the Pharmacovigilance (PV) pipeline. DNR aims to find drug name mentions in unstructured biomedical texts and classify them into predefined categories. State-of-the-art DNR approaches heavily rely on hand-crafted features and domain-specific resources which are difficult to collect and tune. For this reason, this paper investigates the effectiveness of contemporary recurrent neural architecturesthe Elman and Jordan networks and the bidirectional LSTM with CRF decoding -at performing DNR straight from the text. The experimental results achieved on the authoritative SemEval-2013 Task 9.1 benchmarks show that the bidirectional LSTM-CRF ranks closely to highly-dedicated, hand-crafted systems.
Introduction
Pharmacovigilance (PV) is defined by the World Health Organization as the science and activities concerned with the detection, assessment, understanding and prevention of adverse effects of drugs or any other drug-related problems. Drug name recognition (DNR) is a fundamental step in the PV pipeline, similar to the well-studied Named Entity Recognition (NER) task in general natural language processing (NLP). DNR aims to find drug mentions in unstructured biomedical texts and classify them into predefined categories in order to link drug names with their effects and explore drug-drug interactions (DDIs). Conventional approaches to DNR subdivide into rule-based, dictionary-based and machine learning-based. Intrinsically, rule-based systems are hard to scale, time-consuming to assemble and ineffective in the presence of informal sentences and abbreviated phrases. Dictionary-based systems identify drug names by matching text chunks against drug dictionaries. These systems typically achieve high precision but suffer from low recall (i.e., they miss a significant number of mentions) due to spelling errors or drug name variants not present in the dictionaries (Liu et al., 2015a). Conversely, machine learning approaches have the potential to overcome all these limitations since their foundations are intrinsically robust to variants. The current state-of-the-art machine learning approaches follow a two-step process of feature engineering and classification (Segura-Bedmar et al., 2015; Abacha et al., 2015; Rocktäschel et al., 2013). Feature engineering refers to the task of representing text by dedicated numeric vectors using domain knowledge. Similarly to the design of rule-based systems, this task requires much expert knowledge, is typically challenging and time-consuming, and has a major impact on the final accuracy. For this reason, this paper explores the performance of contemporary recurrent neural networks (RNNs) at providing end-to-end DNR straight from text, without any manual feature engineering stage. The tested RNNs include the popular Elman and Jordan networks and the bidirectional long short-term memory (LSTM) with decoding provided by a conditional random field (CRF) (Elman, 1990; Jordan, 1986; Lample et al., 2016; Collobert et al., 2011). The experimental results over the SemEval-2013 Task 9.1 benchmarks show an interesting accuracy from the LSTM-CRF that exceeds that of various manually-engineered systems and approximates the best result in the literature.
Related Work
Most of the research on drug name recognition to date has focussed on domain-dependent aspects and specialized text features. The benefit of leveraging such tailored features was made evident by the results from the SemEval-2013 Task 9.1 (Recognition and classification of pharmacological substances, known as DNR task) challenge. The system that ranked first, WBI-NER (Rocktäschel et al., 2013), adopted very specialized features derived from an improved version of the ChemSpot tool (Rocktäschel et al., 2012), a collection of drug dictionaries and ontologies. Similarly, many other recent approaches (Abacha et al., 2015;Liu et al., 2015b;Segura-Bedmar et al., 2015) have been based on various combinations of general and domain-specific features.
In the broader field of machine learning, the recent years have witnessed a rapid proliferation of deep neural networks, with unprecedented results in tasks as diverse as visual, speech and named-entity recognition (Hinton et al., 2012;Krizhevsky et al., 2012;Lample et al., 2016). One of the main advantages of neural networks is that they can learn the feature representations automatically from the data, thus avoiding the laborious feature engineering stage (Mesnil et al., 2015;Lample et al., 2016). Given these promising results, the main goal of this paper is to provide the first performance investigation of popular RNNs such as the Elman and Jordan networks and the bidirectional LSTM-CRF over DNR tasks.
The Proposed Approach
DNR can be formulated as a joint segmentation and classification task over a predefined set of classes. As an example, consider the input sentence provided in Table 1. The notation follows the widely adopted in/out/begin (IOB) entity representation with, in this instance, Cimetidine as the drug, ALFENTA as the brand, and the words volatile inhalation anesthetics together as the group. In this paper, we approach the DNR task with recurrent neural networks, which we briefly describe hereafter. In an RNN, each word in the input sentence is first mapped to a random real-valued vector of arbitrary dimension, d. Then, a measurement for the word, denoted x(t), is formed by concatenating the word's own vector with a window of preceding and following vectors (the "context"). An example of an input vector with a context window of size s = 3 is:
$$w_3(t) = [\text{Cimetidine}, \text{reduces}, \text{effect}]$$
$$\text{'Cimetidine'} \to x_{\text{Cimetidine}} \in \mathbb{R}^d, \quad \text{'reduces'} \to x_{\text{reduces}} \in \mathbb{R}^d, \quad \text{'effect'} \to x_{\text{effect}} \in \mathbb{R}^d$$
$$x(t) = [x_{\text{Cimetidine}}, x_{\text{reduces}}, x_{\text{effect}}] \in \mathbb{R}^{3d} \quad (1)$$
where $w_3(t)$ is the context window centered around the t-th word, 'reduces', and $x_{word}$ represents the numerical vector for word.
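As a concrete illustration of Eq. (1), the following numpy sketch builds the windowed input x(t); the toy vocabulary, the random lookup table and the zero-padding convention are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"<pad>": 0, "Cimetidine": 1, "reduces": 2, "effect": 3}
d, s = 16, 3                                   # embedding dimension, window size
emb = rng.standard_normal((len(vocab), d))     # random lookup table, row per word

def window_input(token_ids, t, s=3):
    half = s // 2
    padded = [0] * half + token_ids + [0] * half   # pad edges with the <pad> id
    ids = padded[t : t + s]                        # s token ids centred on position t
    return np.concatenate([emb[i] for i in ids])   # x(t) in R^{s*d}

sentence = [vocab["Cimetidine"], vocab["reduces"], vocab["effect"]]
x_t = window_input(sentence, t=1)              # window centred on 'reduces'
print(x_t.shape)                               # (48,) = 3 * 16
```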
For the Elman network, both x(t) and the output of the hidden layer at time t-1, h(t-1), are input into the hidden layer for frame t. The recurrent connection from the past time frame enables a short-term memory, while hidden-to-hidden neuron connections make the network Turing-complete. This architecture, common in RNNs, is suitable for the prediction of sequences. Formally, the hidden layer is described as:
$$h(t) = f(U \cdot x(t) + V \cdot h(t-1)) \quad (2)$$
where U and V are randomly-initialized weight matrices between the input and the hidden layer, and between the past and current hidden layers, respectively. Function f (·) is the sigmoid function:
$$f(x) = \frac{1}{1 + e^{-x}} \quad (3)$$
that adds non-linearity to the layer. Eventually, h(t) is fed into the output layer:
$$y(t) = g(W \cdot h(t)), \quad \text{with } g(z_m) = \frac{e^{z_m}}{\sum_{k=1}^{K} e^{z_k}} \quad (4)$$
and convolved with the output weight matrix, W. The output is normalized by a multi-class logistic function, g(·), to become a proper probability over the class set. The output dimensionality is therefore determined by the number of entity classes (i.e., 4 for the DNR task). The Jordan network is very similar to the Elman network, except that the feedback is sourced from the output layer rather than the previous hidden layer:
$$h(t) = f(U \cdot x(t) + V \cdot y(t-1)) \quad (5)$$
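To make equations (2)-(5) concrete, here is a minimal numpy sketch of single Elman and Jordan steps; the dimensions and random weights are our own choices, not trained values.

```python
import numpy as np

rng = np.random.default_rng(1)
in_dim, hid, n_classes = 48, 100, 4
U = rng.standard_normal((hid, in_dim))
V_elman = rng.standard_normal((hid, hid))          # hidden -> hidden feedback
V_jordan = rng.standard_normal((hid, n_classes))   # output -> hidden feedback
W = rng.standard_normal((n_classes, hid))

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
softmax = lambda z: np.exp(z) / np.exp(z).sum()

def elman_step(x_t, h_prev):
    h_t = sigmoid(U @ x_t + V_elman @ h_prev)      # Eq. (2)
    return h_t, softmax(W @ h_t)                   # Eq. (4)

def jordan_step(x_t, y_prev):
    h_t = sigmoid(U @ x_t + V_jordan @ y_prev)     # Eq. (5)
    return h_t, softmax(W @ h_t)

x = rng.standard_normal(in_dim)
_, y = elman_step(x, np.zeros(hid))
_, y2 = jordan_step(x, y)
print(y.round(3), y2.round(3))                     # class probabilities per step
```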
Although the Elman and Jordan networks can learn long-term dependencies, their exponential decay biases them toward their most recent inputs (Bengio et al., 1994).
The LSTM was designed to overcome this limitation by incorporating a gated memory cell to capture long-range dependencies within the data (Hochreiter and Schmidhuber, 1997). In the bidirectional LSTM, for any given sentence, the network computes both a left, $\overrightarrow{h}(t)$, and a right, $\overleftarrow{h}(t)$, representation of the sentence context at every input, x(t). The final representation is created by concatenating them as $h(t) = [\overrightarrow{h}(t); \overleftarrow{h}(t)]$. All these networks utilize the h(t) layer as an implicit feature for entity class prediction: although this model has proved effective in many cases, it is not able to provide joint decoding of the outputs in a Viterbi-style manner (e.g., an I-group cannot follow a B-brand, etc.). Thus, another modification to the bidirectional LSTM is the addition of a conditional random field (CRF) (Lafferty et al., 2001) as the output layer to provide optimal sequential decoding. The resulting network is commonly referred to as the bidirectional LSTM-CRF (Lample et al., 2016).
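The following is a compact PyTorch sketch of the bidirectional LSTM feature extractor with Viterbi decoding over a learned transition matrix, i.e., the CRF-style output layer described above. It is a didactic sketch, not our actual implementation; the tag count and all names are our own, and the CRF training objective (the forward algorithm) is omitted.

```python
import torch
import torch.nn as nn

class BiLSTMCRFTagger(nn.Module):
    # 9 tags: B-/I- for the four entity classes (drug, brand, group, drug_n) plus O.
    def __init__(self, vocab, emb=16, hid=100, n_tags=9):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.lstm = nn.LSTM(emb, hid, batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hid, n_tags)                   # emission scores
        self.trans = nn.Parameter(torch.randn(n_tags, n_tags))   # trans[i, j]: score of i -> j

    def viterbi(self, chars):                          # chars: (1, T) token ids
        h, _ = self.lstm(self.embed(chars))            # h(t) = [forward ; backward]
        emissions = self.emit(h)[0]                    # (T, n_tags)
        score = emissions[0]                           # best score ending in each tag
        back = []
        for t in range(1, len(emissions)):
            total = score.unsqueeze(1) + self.trans + emissions[t]  # (tags, tags)
            score, idx = total.max(dim=0)              # best previous tag per current tag
            back.append(idx)
        best = [int(score.argmax())]
        for idx in reversed(back):                     # follow the back-pointers
            best.append(int(idx[best[-1]]))
        return list(reversed(best))

tagger = BiLSTMCRFTagger(vocab=70)
print(tagger.viterbi(torch.randint(0, 70, (1, 12))))   # 12 predicted tag ids
```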
Experiments
Datasets
The DDIExtraction 2013 shared task challenge from SemEval-2013 Task 9.1 (Segura-Bedmar et al., 2013) has provided a benchmark corpus for DNR and DDI extraction. The corpus contains manually-annotated pharmacological substances and drug-drug interactions (DDIs): a total of 18,502 pharmacological substances and 5,028 DDIs.
It collates two distinct datasets:
DDI-DrugBank and DDI-MedLine. Table 2 summarizes the basic statistics of the training and test datasets used in our experiments. For a proper comparison, we follow the same settings as (Segura-Bedmar et al., 2015), using the training data of the DNR task along with the test data of the DDI task for training and validation of DNR. We split this joint dataset into training and validation sets, with approximately 70% of the sentences used for training and the remainder for validation.
Evaluation Methodology
Our models have been blindly evaluated on unseen DNR test data using strict evaluation metrics. With this evaluation, the predicted entities have to match the ground-truth entities exactly, both in boundary and in class. To facilitate the replication of our experimental results, we have used a publicly-available library for the implementation 1 (i.e., the Theano neural network toolkit (Bergstra et al., 2010)). The experiments have been run over a range of values for the hyper-parameters, using the validation set for selection (Bergstra and Bengio, 2012). The hyper-parameters include the number of hidden-layer nodes, H ∈ {25, 50, 100}, the context window size, s ∈ {1, 3, 5}, and the embedding dimension, d ∈ {50, 100, 300, 500, 1000}. Two additional parameters, the learning and drop-out rates, were sampled from a uniform distribution in the range [0.05, 0.1].
The embedding and initial weight matrices were all sampled from the uniform distribution within the range [-1, 1]. Early stopping was set at 100 epochs to mitigate over-fitting, and the model that gave the best performance on the validation set was retained. The accuracy is reported in terms of the micro-average F1 score computed using the CoNLL score function (Nadeau and Sekine, 2007). The sketch below illustrates the selection loop.
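A minimal sketch of this random search follows, with training and scoring left as a placeholder; the trial count and function names are our own assumptions.

```python
import random

random.seed(42)
best = {"f1": -1.0, "config": None}
for trial in range(20):
    config = {
        "hidden": random.choice([25, 50, 100]),             # hidden-layer nodes H
        "window": random.choice([1, 3, 5]),                 # context window size s
        "emb_dim": random.choice([50, 100, 300, 500, 1000]),  # embedding dimension d
        "lr": random.uniform(0.05, 0.1),                    # learning rate
        "dropout": random.uniform(0.05, 0.1),               # drop-out rate
    }
    f1 = random.random()  # placeholder for train(config) + CoNLL F1 on the validation set
    if f1 > best["f1"]:
        best = {"f1": f1, "config": config}
print(best)
```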
Results and Analysis

Table 3 shows the performance comparison between the explored RNNs and state-of-the-art DNR systems. As an overall note, the RNNs have not reached the same accuracy as the top system, WBI-NER (Rocktäschel et al., 2013). However, the bidirectional LSTM-CRF has achieved the second-best score on DDI-DrugBank and the third-best on DDI-MedLine. These results seem interesting given that the RNNs provide DNR straight from the text rather than from manually-engineered features. Since the RNNs learn entirely from the data, their better performance on the DDI-DrugBank dataset is very likely due to its larger size. Accordingly, it is reasonable to expect higher relative performance should larger corpora become available in the future. Table 4 also breaks down the results by entity class for the bidirectional LSTM-CRF. The low scores on the brand class for DDI-MedLine and on the drug_n class (i.e., active substances not approved for human use) for DDI-DrugBank are likely attributable to their very small sample sizes (Table 2). This issue is also shared by the state-of-the-art DNR systems.
Conclusion
This paper has investigated the effectiveness of recurrent neural architectures, namely the Elman and Jordan networks and the bidirectional LSTM-CRF, for drug name recognition. The most appealing feature of these architectures is their ability to provide end-to-end recognition straight from text, sparing the effort of laborious feature construction. To the best of our knowledge, ours is the first paper to explore RNNs for entity recognition from pharmacological text. The experimental results over the SemEval-2013 Task 9.1 benchmarks look promising, with the bidirectional LSTM-CRF ranking closely to the state of the art. A potential way to further improve its performance would be to initialize its training with unsupervised word embeddings such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). This approach has proved effective in many other domains and still dispenses with expert annotation effort; we plan this exploration for the near future.
Table 1: Example sentence in a DNR task with entity classes represented in IOB format.

            DDI-DrugBank                                DDI-MedLine
            Training+Test (DDI task)   Test (DNR)       Training+Test (DDI task)   Test (DNR)
documents   730                        54               175                        58
sentences   6,577                      145              1,627                      520
drug_n      124                        6                520                        115
group       3,832                      65               234                        90
brand       1,770                      53               36                         6
drug        9,715                      180              1,574                      171

Table 2: Statistics of the training and test datasets used for SemEval-2013 Task 9.1.
[Table 3 layout: Methods | DDI-DrugBank (Precision, Recall, F1 Score) | DDI-MedLine (Precision, Recall, F1 Score); first listed method: WBI-NER (Rocktäschel et al., 2013), 88 ...]

Table 3: Performance comparison between the recurrent neural networks (bottom three lines) and state-of-the-art systems (top three lines) over the SemEval-2013 Task 9.1.
Bidirectional LSTM-CRF

Entities   DDI-DrugBank                       DDI-MedLine
           Precision   Recall   F1 Score      Precision   Recall   F1 Score
group      76.92       90.91    83.33         59.52       53.76    56.50
drug       90.59       84.62    87.50         65.22       61.05    63.06
brand      91.30       79.25    84.85         0.0         0.0      0.0
drug_n     0.0         0.0      0.0           40.20       45.45    42.67

Table 4: SemEval-2013 Task 9.1 results by entity for the bidirectional LSTM-CRF.
1 https://github.com/raghavchalapathy/dnr
References

Abacha, A. B., Chowdhury, M. F. M., Karanasiou, A., Mrabet, Y., Lavelli, A., and Zweigenbaum, P. (2015). Text mining for pharmacovigilance: Using machine learning for drug name recognition and drug-drug interaction extraction and classification. Journal of Biomedical Informatics, 58:122-132.

Bengio, Y., Simard, P., and Frasconi, P. (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166.

Bergstra, J. and Bengio, Y. (2012). Random search for hyper-parameter optimization. Journal of Machine Learning Research, 13:281-305.

Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., and Bengio, Y. (2010). Theano: A CPU and GPU math compiler in Python. In The 9th Python in Science Conference, pages 1-7.

Collobert, R., Weston, J., Bottou, L., Karlen, M., Kavukcuoglu, K., and Kuksa, P. (2011). Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.

Elman, J. L. (1990). Finding structure in time. Cognitive Science, 14(2):179-211.

Herrero-Zazo, M., Segura-Bedmar, I., Martínez, P., and Declerck, T. (2013). The DDI corpus: An annotated corpus with pharmacological substances and drug-drug interactions. Journal of Biomedical Informatics, 46(5):914-920.

Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., et al. (2012). Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82-97.

Hochreiter, S. and Schmidhuber, J. (1997). Long short-term memory. Neural Computation, 9(8):1735-1780.

Jordan, M. I. (1986). Serial order: A parallel distributed processing approach. Technical report, San Diego: University of California, Institute for Cognitive Science.

Krizhevsky, A., Sutskever, I., and Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097-1105.

Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In ICML, pages 282-289.

Lample, G., Ballesteros, M., Subramanian, S., Kawakami, K., and Dyer, C. (2016). Neural architectures for named entity recognition. In NAACL-HLT.

Liu, S., Tang, B., Chen, Q., and Wang, X. (2015a). Drug name recognition: Approaches and resources. Information, 6(4):790-810.

Liu, S., Tang, B., Chen, Q., Wang, X., and Fan, X. (2015b). Feature engineering for drug name recognition in biomedical texts: Feature conjunction and feature selection. Computational and Mathematical Methods in Medicine, 2015:1-9.

Mesnil, G., Dauphin, Y., Yao, K., Bengio, Y., Deng, L., Hakkani-Tur, D., He, X., Heck, L., Tur, G., Yu, D., et al. (2015). Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 23(3):530-539.

Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013). Distributed representations of words and phrases and their compositionality. In NIPS, pages 3111-3119.

Nadeau, D. and Sekine, S. (2007). A survey of named entity recognition and classification. Linguisticae Investigationes, 30(1):3-26.

Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In EMNLP, pages 1532-1543.

Rocktäschel, T., Weidlich, M., and Leser, U. (2012). ChemSpot: A hybrid system for chemical named entity recognition. Bioinformatics, 28(12):1633-1640.

Rocktäschel, T., Huber, T., Weidlich, M., and Leser, U. (2013). WBI-NER: The impact of domain-specific features on the performance of identifying and classifying mentions of drugs. In The 7th International Workshop on Semantic Evaluation, pages 356-363.

Segura-Bedmar, I., Martínez, P., and Herrero Zazo, M. (2013). SemEval-2013 Task 9: Extraction of drug-drug interactions from biomedical texts (DDIExtraction 2013). In The 7th International Workshop on Semantic Evaluation.

Segura-Bedmar, I., Suárez-Paniagua, V., and Martínez, P. (2015). Exploring word embedding for drug name recognition. In The 6th International Workshop on Health Text Mining and Information Analysis, page 64.
| [
"https://github.com/raghavchalapathy/dnr"
] |
[
"Squeezed Very Deep Convolutional Neural Networks for Text Classification",
"Squeezed Very Deep Convolutional Neural Networks for Text Classification"
] | [
"Andréa B Duque ",
"David Macêdo ",
"Luã Lázaro ",
"J Santos ",
"Cleber Zanchettin ",
"\nCentro de Informática\nCentro de Informática\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil\n",
"\nCentro de Informática\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil\n",
"\nCentro de Informática\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil\n",
"\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil\n"
] | [
"Centro de Informática\nCentro de Informática\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil",
"Centro de Informática\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil",
"Centro de Informática\nUniversidade Federal de Pernambuco\n50.740-560RecifePEBrazil",
"Universidade Federal de Pernambuco\n50.740-560RecifePEBrazil"
] | [] | Most of the research in convolutional neural networks has focused on increasing network depth to improve accuracy, resulting in a massive number of parameters which restricts the trained network to platforms with memory and processing constraints. We propose to modify the structure of the Very Deep Convolutional Neural Networks (VDCNN) model to fit mobile platforms constraints and keep performance. In this paper, we evaluate the impact of Temporal Depthwise Separable Convolutions and Global Average Pooling in the network parameters, storage size, and latency. The squeezed model (SVDCNN) is between 10x and 20x smaller, depending on the network depth, maintaining a maximum size of 6MB. Regarding accuracy, the network experiences a loss between 0.4% and 1.3% and obtains lower latencies compared to the baseline model. | 10.1007/978-3-030-30487-4_16 | [
"https://arxiv.org/pdf/1901.09821v1.pdf"
] | 59,316,539 | 1901.09821 | 2ead783089fc757052abb908287a2fb743a4ebef |
Squeezed Very Deep Convolutional Neural Networks for Text Classification
Andréa B Duque
David Macêdo
Luã Lázaro
J Santos
Cleber Zanchettin
Centro de Informática
Centro de Informática
Universidade Federal de Pernambuco
50.740-560RecifePEBrazil
Centro de Informática
Universidade Federal de Pernambuco
50.740-560RecifePEBrazil
Centro de Informática
Universidade Federal de Pernambuco
50.740-560RecifePEBrazil
Universidade Federal de Pernambuco
50.740-560RecifePEBrazil
Most of the research in convolutional neural networks has focused on increasing network depth to improve accuracy, resulting in a massive number of parameters which restricts the trained network to platforms with memory and processing constraints. We propose to modify the structure of the Very Deep Convolutional Neural Networks (VDCNN) model to fit mobile platforms constraints and keep performance. In this paper, we evaluate the impact of Temporal Depthwise Separable Convolutions and Global Average Pooling in the network parameters, storage size, and latency. The squeezed model (SVDCNN) is between 10x and 20x smaller, depending on the network depth, maintaining a maximum size of 6MB. Regarding accuracy, the network experiences a loss between 0.4% and 1.3% and obtains lower latencies compared to the baseline model.
I. INTRODUCTION
The general trend in deep learning has been to develop models with an increasing number of layers. Deeper neural networks have achieved high-quality results in different tasks such as image classification, detection, and segmentation. Deep models can also learn hierarchical feature representations from images [1]. In the Natural Language Processing (NLP) field, the belief that compositional models can also be used for text-related tasks is more recent.
The increasing availability of text data motivates research on models able to improve accuracy across different language tasks. Following the trend of image classification Convolutional Neural Networks (CNNs), research in text classification has placed effort into developing deeper networks. The first CNN-based approach for text was a shallow network with one layer [2]. Following this work, deeper architectures were proposed [3], [4]. Conneau et al. [3] were the first to propose Very Deep Convolutional Neural Networks (VDCNN) applied to text classification. VDCNN accuracy increases with depth; the 29-layer variant achieves state-of-the-art accuracy among CNNs for text classification. *Authors contributed equally and are both first writers.
However, while making networks deeper improves accuracy, little effort has been made to build text classification models for constrained resources. This is a very different scenario from image approaches, where size- and speed-constrained models have been proposed [5], [6]. In real-world applications, size and speed are constraints on an efficient mobile and embedded deployment of deep models [6].
Several relevant real-world applications depend on text classification tasks, such as sentiment analysis, recommendation and opinion mining. The appeal of these applications, combined with the boost in mobile device usage, motivates research on resource-constrained text classification models. Concerning mobile development, there are numerous benefits to developing smaller models. Some of the most relevant are requiring less data transfer when updating the client model [5] and increasing usability by decreasing the inference time. Such advantages would boost the usage of deep neural models in text-based applications on embedded platforms.
In this paper, we investigate modifications to the network proposed by Conneau et al. [3] with the aim of reducing its number of parameters, storage size and latency with minimal performance degradation. To achieve these improvements, we use Temporal Depthwise Separable Convolutions and Global Average Pooling. Therefore, our main contribution is to propose the Squeezed Very Deep Convolutional Neural Networks (SVDCNN), a text classification model which requires significantly fewer parameters compared to state-of-the-art CNNs.
Section II provides an overview of the state of the art in CNNs for text classification. Section III presents the VDCNN model. Section IV explains the proposed SVDCNN model and the subsequent impact on the total number of parameters of the network. Section V details how we performed the experiments. Section VI analyses the results, and lastly, Section VII presents conclusions and directions for future work.
II. RELATED WORK
CNNs were originally designed for Computer Vision with the aim of treating feature extraction and classification as one task [7]. Although CNNs are very successful in image classification tasks, their use in text classification is relatively new and has some peculiarities. In contrast with traditional bi-dimensional image representations, texts are represented one-dimensionally. Due to this property, the convolutions are designed as temporal convolutions. Furthermore, it is necessary to generate a numerical representation of the text so the network can be trained on it. This representation, called embeddings, is usually obtained through the application of a lookup table generated from a given dictionary.
An early approach to text classification tasks consisted of a shallow neural network working at the word level and using only one convolutional layer [2]. The author reported results on small datasets. Later, Zhang et al. [4] proposed the first CNN performing at the character level (Char-CNN), which allowed them to train up to 6 convolutional layers, followed by three fully connected classification layers. Char-CNN uses convolutional kernels of size 3 and 7, as well as simple max-pooling layers. Conneau et al. [3] proposed the Very Deep CNN (VDCNN), also at the character level, presenting improvements compared to Char-CNN. They have shown that text classification accuracy increases when the proposed model becomes deeper. VDCNN uses only small kernel convolutions and pooling operations. The proposed architecture relies on the VGG and ResNet philosophy [8], [9]: the number of feature maps and the temporal resolution are modelled so that their product is constant. This approach makes it easier to control the memory footprint of the network. Both Zhang's and Conneau et al.'s CNNs utilize standard convolutional blocks and fully connected layers to combine convolution information [3], [4]. This architectural choice increases the number of parameters and the storage size of the models. However, size and speed were not the focus of those works.
The idea of developing smaller and more efficient CNNs without losing representative accuracy is a less explored research direction in NLP, but it has already become a trend for computer vision applications [5], [6], [10]. Most approaches consist of compressing pre-trained networks or training small networks directly [6]. A recent tendency in deep models is replacing standard convolutional blocks with Depthwise Separable Convolutions (DSCs). The purpose is to reduce the number of parameters and consequently the model size. DSCs were initially introduced in [11] and have since been successfully applied to image classification [6], [10], [12] and machine translation [13] to reduce the computation in convolutional blocks. Another approach is the use of a Global Average Pooling (GAP) layer at the output of the network to replace fully connected layers. This approach has become a standard architectural decision for newer CNNs [8], [14].
III. VDCNN MODEL FOR TEXT CLASSIFICATION
The VDCNN is a modular architecture for text classification tasks developed to offer different depth levels (9, 17, 29 and 49). Fig. 1 presents the architecture for depth 9. The network begins with a lookup table, which generates the embeddings for the input text and stores them in a 2D tensor of size (f0, s). The number of input characters (s) is fixed at 1,024 while the embedding dimension (f0) is 16. The embedding dimension can be seen as analogous to the number of RGB channels of an image.
The following layer (3, Temp Convolution, 64) applies 64 temporal convolutions of kernel size 3, so the output tensor has size 64 × s. Its primary function is to fit the lookup table output to the input of the modular network segment composed of convolutional blocks. Each such block is a sequence of two temporal convolutional layers, each accompanied by a temporal batch normalization layer [15] and a ReLU activation. The different network depths are obtained by varying the number of convolutional blocks. By convention, the depth of a network is given as its total number of convolutional layers. For instance, the architecture of depth 17 has two convolutional blocks at each level of feature maps, which results in 4 convolutional layers for each level (see Table I). Counting the first convolutional layer of the network, we obtain the depth 2 × (2 + 2 + 2 + 2) + 1 = 17. The different depth architectures provided by the VDCNN model are summarized in Table I. The following rule is employed to minimize the network's memory footprint: before each convolutional block that doubles the number of feature maps, a pooling layer halves the temporal dimension. This strategy is inspired by the VGG and ResNet philosophy and results in three levels of feature maps: 128, 256 and 512 (see Fig. 1). Additionally, the VDCNN network also contains shortcut connections [8] for each convolutional block, implemented through the usage of 1 × 1 convolutions. Lastly, for the classification task, the k most valuable features (k = 8) are extracted using k-max pooling, generating a one-dimensional vector which supplies three fully connected layers with ReLU hidden units and softmax outputs. The number of hidden units is 2,048, and they do not use dropout; rather, batch normalization after the convolutional layers performs the network regularization.
IV. SVDCNN MODEL FOR TEXT CLASSIFICATION
The primary objective is to reduce the number of parameters so that the resulting network has a significantly lower storage size. We first propose to modify the convolutional blocks of the VDCNN model through the usage of Temporal Depthwise Separable Convolutions (TDSCs). Next, we reduce the number of fully connected layers using the Global Average Pooling (GAP) technique. The resulting proposed architecture is called Squeezed Very Deep Convolutional Neural Networks (SVDCNN).
a) Temporal Depthwise Separable Convolutions (TDSCs): The use of TDSCs instead of standard convolutions makes it possible to reduce the number of parameters without relevant accuracy loss [6]. TDSCs work by decomposing the standard convolution into two parts: depthwise and pointwise. The first applies a convolutional filter to each channel of the input, one at a time. For an image input, the channels can be the RGB components, whereas for a text input the dimensions of the embedding are used instead. In both cases, the result is one feature map per channel. The second convolution unifies the generated feature maps by successively applying 1x1 convolutions until the target number of feature maps is achieved. TDSCs are DSCs which work with one-dimensional convolutions. Although DSCs have verified results in image classification networks, the usage of their temporal version for text-related tasks is less explored. Fig. 2a presents the architecture of a temporal standard convolution while Fig. 2b presents the TDSC.
For a more formal definition, let $P_{tsc}$ be the number of parameters of a temporal standard convolution, where In and Out are the numbers of input and output channels, respectively, and $D_k$ is the kernel size:
$$P_{tsc} = In \cdot Out \cdot D_k \quad (1)$$
Alternatively, a TDSC achieves fewer parameters ($P_{tdsc}$):
$$P_{tdsc} = In \cdot D_k + In \cdot Out \quad (2)$$
In the VDCNN model, one convolutional block is composed of two temporal standard convolutional layers. The first doubles the number of feature maps while the second keeps the same number received as input. Besides, each convolutional layer is followed by a batch normalization and a ReLU layer. In our model, we propose replacing the temporal standard convolutions with TDSCs. Fig. 3 presents the standard convolutional block on the left and the proposed convolutional block using TDSC on the right. The pattern used in the figure for the convolutional layers is the following: "Kernel Size, Conv type, Output Feature Maps"; as a brief example, consider "3x1, Temporal Conv, 256", which means a temporal convolution with kernel size 3 and 256 output feature maps. From Equation 1, we have the number of parameters of the original convolutional block ($P_{convblock}$) as follows:
$$P_{convblock} = In \cdot Out \cdot 3 + Out \cdot Out \cdot 3 \quad (3)$$
Moreover, from Equation 2, the number of parameters of the proposed convolutional block ($P_{convblock\text{-}tdsc}$) that uses TDSC is:

$$P_{convblock\text{-}tdsc} = In \cdot 3 + In \cdot Out + Out \cdot 3 + Out \cdot Out \quad (4)$$

For illustration, following the same characteristics as Fig. 3, consider a number of input channels In equal to 128 and a number of output channels Out equal to 256. Our proposed block accumulates a total of 99,456 parameters. In contrast, there are 294,912 parameters in the original convolutional block. The use of TDSC yields a reduction of 66.28% in the block size; the sketch below reproduces this count.
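The following PyTorch sketch is one way to realize the proposed block: a depthwise temporal convolution (groups equal to the input channels) followed by a 1x1 pointwise convolution, with batch normalization and ReLU as in Fig. 3. It is an illustrative sketch rather than the released SVDCNN code; biases are disabled so the count matches Equation 4.

```python
import torch.nn as nn

def tdsc(in_ch, out_ch, k=3):
    """Temporal depthwise separable convolution: depthwise then pointwise."""
    return nn.Sequential(
        nn.Conv1d(in_ch, in_ch, k, padding=k // 2, groups=in_ch, bias=False),
        nn.Conv1d(in_ch, out_ch, 1, bias=False),
    )

block = nn.Sequential(
    tdsc(128, 256), nn.BatchNorm1d(256), nn.ReLU(),
    tdsc(256, 256), nn.BatchNorm1d(256), nn.ReLU(),
)
# Count only the convolutional weights (batch-norm parameters are excluded,
# matching Equation 4, which counts convolution parameters only).
conv_params = sum(p.numel() for m in block if isinstance(m, nn.Sequential)
                  for p in m.parameters())
print(conv_params)  # 99456 = 128*3 + 128*256 + 256*3 + 256*256
```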
Lastly, since each standard temporal convolution turns into two (depthwise and pointwise), the number of convolutions per convolutional block doubles. Nevertheless, these two convolutions work as one because they cannot be used separately for the same purpose. Therefore, we count them as one layer in the network depth. This decision keeps the provided depth architectures the same as in the VDCNN model summarized in Table I, contributing to a proper comparison between the models.

b) Global Average Pooling (GAP): The VDCNN model uses a k-max pooling layer (k = 8) followed by three fully connected (FC) layers to perform the classification task (Fig. 4a). Although this approach is the traditional architectural choice for text classification CNNs, it introduces a significant number of parameters into the network. The resulting number of parameters of the aforementioned FC layers ($P_{fc}$), for a problem with four target classes, is:

$$P_{fc} = 512 \cdot k \cdot 2048 + 2048 \cdot 2048 + 2048 \cdot 4 = 12{,}591{,}104 \quad (5)$$

Instead of maintaining these fully connected layers, we directly aggregate the output of the last convolutional block through the usage of an average pooling layer. This method, known as Global Average Pooling, contributes substantially to parameter reduction without significantly degrading the network accuracy [16]. The number of feature maps produced by the average pooling layer is the same as with the original k-max pooling layer (k = h = 8). Fig. 4b presents this proposed modification. The number of parameters obtained with GAP ($P_{gap}$) is:
$$P_{gap} = 4096 \times 4 = 16{,}384 \qquad (6)$$
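A minimal PyTorch sketch of the proposed classification head, assuming 512 channels out of the last convolutional block, h = 8 pooled positions, and 4 target classes (the AG's News setting); the layer arrangement is our reading of Fig. 4b, not the authors' exact code:

```python
# Sketch of the GAP-based classification head (assumed layer arrangement).
import torch.nn as nn

gap_head = nn.Sequential(
    nn.AdaptiveAvgPool1d(8),  # average pooling down to h=8 positions per map
    nn.Flatten(),             # 512 maps * 8 positions = 4,096 features
    nn.Linear(4096, 4),       # single FC layer: 4,096 * 4 = 16,384 weights
)
```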
Our proposed approach accumulates a total of 16,384 parameters. In contrast, there are 12,591,104 parameters in the original classification method. The use of GAP yields a reduction of 99.86%.
V. EXPERIMENTS
The experiment goal is to investigate the impact of changing the convolutional blocks of VDCNN to use TDSCs and of using GAP instead of the original fully connected layers. We evaluate Char-CNN, VDCNN, and SVDCNN according to the number of parameters, storage size, inference time, and accuracy. The source code of the proposed model is available in the GitHub repository SVDCNN¹. The original VDCNN paper reported the number of parameters of the convolutional layers, which we reproduce in this article. For SVDCNN and Char-CNN, we calculated this number from the network architecture implemented in PyTorch. As for the FC layers' parameters, the number is obtained as the summation of the products of the input and output sizes of each FC layer for each CNN.
Considering the network parameters $P$ and assuming that one float number in a CUDA environment takes 4 bytes, we can calculate the network storage in megabytes, for all the models, as follows:
$$S = P \times 4 \div 1024^2 \qquad (7)$$
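Equation 7 can be expressed as a small helper; the example value is the SVDCNN depth-29 entry from Table IV:

```python
# Equation 7: storage in MB, assuming 4 bytes per float parameter.
def storage_mb(num_params):
    return num_params * 4 / 1024 ** 2

print(storage_mb(1.58e6))  # ~6.03 MB, the SVDCNN depth-29 figure
```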
Regarding the inference time, its average and standard deviation were calculated as the time to predict one instance of the AG's News dataset over 1,000 repetitions. The SVDCNN experimental settings are similar to those of the original VDCNN paper, using the same dictionary and the same embedding size of 16 [3]. The training is also performed with SGD, using a batch size of 64, with a maximum of 100 epochs. We use an initial learning rate of 0.01, a momentum of 0.9, and a weight decay of 0.001. All the experiments were performed on an NVIDIA GTX 1060 GPU + Intel Core i7 4770s CPU.
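For reference, the optimizer settings described above correspond to the following PyTorch configuration sketch; the stand-in model is only a placeholder:

```python
# Training configuration from the text; the model here is a placeholder.
import torch
import torch.nn as nn

model = nn.Linear(16, 4)  # stand-in for SVDCNN (embedding size 16, 4 classes)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=0.001)
```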
The model's performance is evaluated on three large-scale public datasets also used by Zhang et al. [4] in the introduction of the Char-CNN and VDCNN models. Table II presents the details of the utilized datasets: AG's News, Yelp Polarity, and Yelp Full.

VI. RESULTS

Table IV presents the number of parameters, storage size, and accuracy for the SVDCNN, VDCNN, and Char-CNN models on all datasets. The use of TDSCs promoted a significant reduction in convolutional parameters compared to VDCNN. For the deepest network evaluated, which contains 29 convolutional layers (depth 29), the number of parameters of these convolutional layers was reduced by 66.08%, from 4.6 to 1.56 million. This quantity is slightly larger than the one obtained by Char-CNN, 1.40 million parameters, but that network has only six convolutional layers (depth 6).
The network reduction obtained by the GAP is even more substantial, since both compared models use three FC layers for their classification tasks. Considering a dataset with four target classes, and comparing SVDCNN with VDCNN, the number of parameters of the FC layers dropped from 12.59 to 0.02 million, a reduction of 99.84%. In the same comparison against Char-CNN, the proposed model is 99.82% smaller: 0.02 against 11.36 million FC parameters.
The reduction in total parameters directly impacts the storage size of the networks. While our deepest model (depth 29) occupies only 6MB, VDCNN at the same depth occupies 64.16MB of storage. Likewise, Char-CNN (which has depth 6) occupies 43.25MB. This reduction is significant because many embedded platforms have severe memory constraints. For example, FPGAs often have less than 10MB of on-chip memory and no off-chip memory or storage [6].
Regarding accuracy, a model with such a parameter reduction would usually be expected to show some loss of accuracy compared to the original model. Nevertheless, the performance difference between the VDCNN and SVDCNN models varies between 0.4 and 1.3%, which is modest considering the parameter and storage size reductions discussed above. Table IV lists the accuracy scores obtained by the compared models. Two further fundamental results are: a) the base property of the VDCNN model is preserved in its squeezed version, i.e., performance still increases with depth; and b) the performance on the largest dataset, Yelp Full (62.30%), still surpasses the accuracy of the Char-CNN model (62.05%).
Deep learning architectures are highly parallelizable, so smaller latencies are expected when performing inference on hardware with high parallelization power. Despite this property, a model's ability to exploit all the available hardware parallelism also depends on the network architecture. The more parameters per layer, the more parallelizable a model tends to be, while increasing the depth has the opposite effect. Naturally, a model with fewer parameters also has less content to process, leading to faster inference.
Concerning mobile devices, dedicated hardware for deep learning is not entirely feasible: such hardware usually requires more energy and dissipates more heat, two undesirable features for a mobile platform. Therefore, achieving low inference times, even outside environments with high parallelization capabilities, is a desirable characteristic for a model designed to work on mobile platforms. The ratio between GPU and CPU inference times indicates how independent of dedicated hardware a model is, with higher values meaning more independence.
The inference times obtained for the three compared models are available in Table III. As explained in Section IV a), each convolutional layer of the convolutional blocks was substituted by two convolutions. This change could impact the inference time negatively, but the significant parameter reduction allows SVDCNN to obtain better results than the VDCNN model. The CPU inference time obtained by the proposed model was smaller than that of the base model for depth 9 (25.88ms against 29.13ms) and depth 17 (47.80ms against 48.05ms), while the Ratio was higher for all depths (0.20 against 0.15 on average). These results, as explained above, are quite significant for mobile platforms. Compared to Char-CNN, the proposed method obtained notably better results: Char-CNN has a CPU inference time of 313.53ms and a Ratio of 0.03.
VII. CONCLUSION
In this paper, we presented a squeezed version of the VDCNN model with respect to the number of parameters and storage size. The new model's properties make it feasible for mobile platforms. To achieve this goal, we analyzed the impact of including Temporal Depthwise Separable Convolutions and a Global Average Pooling layer in a very deep convolutional neural network for text classification. The SVDCNN model reduces the number of parameters and storage size by about 92.45%, while presenting an inference time ratio (GPU/CPU) 31.94% higher.
For future work, we plan to evaluate other techniques for reducing storage size, such as model compression. Moreover, model accuracy on even larger datasets will be evaluated, as well as the efficiency of the depth-49 configuration.
ACKNOWLEDGMENT
We would like to thank FACEPE and CNPq (Brazilian research agencies) for financial support.
Fig. 1: Depth 9 VDCNN architecture.

Fig. 2: a) Temporal Standard Convolution; b) Temporal Depthwise Separable Convolution.

Fig. 3: a) Standard convolutional block of the VDCNN; b) Modified convolutional block of the SVDCNN.

Fig. 4: a) VDCNN classification layers; b) SVDCNN classification layers.
TABLE I: Number of convolutional layers for each different VDCNN depth architecture

Depth                      9   17   29   49
Convolutional Block 512    2    4    4    6
Convolutional Block 256    2    4    4   10
Convolutional Block 128    2    4   10   16
Convolutional Block 64     2    4   10   16
First Convolutional Layer  1    1    1    1
TABLE II: Datasets used in experiments

Dataset        #Train  #Test  #Classes  Classification Task
AG's News      120k    7.6k   4         News categorization
Yelp Polarity  560k    38k    2         Sentiment analysis
Yelp Full      650k    50k    5         Sentiment analysis
TABLE III: Inference time results for the AG's News dataset

Model     Depth  GPU              CPU                Ratio
SVDCNN    9      5.53ms ± 0.16    25.88ms ± 0.52     0.21
SVDCNN    17     9.84ms ± 0.28    47.80ms ± 1.01     0.21
SVDCNN    29     15.14ms ± 0.44   74.03ms ± 1.15     0.20
VDCNN     9      4.48ms ± 0.19    29.13ms ± 0.87     0.15
VDCNN     17     7.08ms ± 0.20    48.05ms ± 1.26     0.15
VDCNN     29     10.26ms ± 0.26   65.80ms ± 1.51     0.16
Char-CNN  6      10.32ms ± 0.43   313.53ms ± 4.97    0.03
TABLE IV: Number of parameters, storage, and accuracy results for all evaluated CNNs

                         SVDCNN                VDCNN                 Char-CNN
Depth                    9      17     29      9      17     29      6
#Conv Params [M]         0.71   1.43   1.56    2.20   4.40   4.60    1.37
#FC Params [M]           0.02   0.02   0.02    12.59  12.59  12.59   11.34
#Total Params [M]        0.73   1.45   1.58    14.79  16.99  17.19   12.71
Storage Size [MB]        2.80   5.52   6.03    54.75  62.74  64.16   43.25
Accuracy: AG's News      90.13  90.43  90.55   90.83  91.12  91.27   92.36
Accuracy: Yelp Polarity  94.99  95.04  95.26   95.12  95.50  95.72   95.64
Accuracy: Yelp Full      61.97  63.00  63.20   63.27  63.93  64.26   62.05
¹ Link: https://github.com/lazarotm/SVDCNN
REFERENCES

[1] M. D. Zeiler and R. Fergus, "Visualizing and understanding convolutional networks," in European Conference on Computer Vision. Springer, 2014, pp. 818-833.

[2] Y. Kim, "Convolutional neural networks for sentence classification," arXiv preprint arXiv:1408.5882, 2014.

[3] A. Conneau, H. Schwenk, L. Barrault, and Y. Lecun, "Very deep convolutional networks for text classification," arXiv preprint arXiv:1606.01781, 2016.

[4] X. Zhang, J. Zhao, and Y. LeCun, "Character-level convolutional networks for text classification," in Advances in Neural Information Processing Systems, 2015, pp. 649-657.

[5] F. N. Iandola, S. Han, M. W. Moskewicz, K. Ashraf, W. J. Dally, and K. Keutzer, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size," arXiv preprint arXiv:1602.07360, 2016.

[6] A. G. Howard, M. Zhu, B. Chen, D. Kalenichenko, W. Wang, T. Weyand, M. Andreetto, and H. Adam, "MobileNets: Efficient convolutional neural networks for mobile vision applications," arXiv preprint arXiv:1704.04861, 2017.

[7] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, 1998.

[8] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 770-778.

[9] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556, 2014.

[10] A. G. Santos, C. O. de Souza, C. Zanchettin, D. Macedo, A. L. Oliveira, and T. Ludermir, "Reducing SqueezeNet storage size with depthwise separable convolutions," in 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1-6.

[11] L. Sifre and S. Mallat, "Rigid-motion scattering for image classification," Ph.D. dissertation, Citeseer, 2014.

[12] F. Chollet, "Xception: Deep learning with depthwise separable convolutions," arXiv preprint arXiv:1610.02357, 2017.

[13] L. Kaiser, A. N. Gomez, and F. Chollet, "Depthwise separable convolutions for neural machine translation," arXiv preprint arXiv:1706.03059, 2017.

[14] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, "Densely connected convolutional networks," in CVPR, vol. 1, no. 2, 2017, p. 3.

[15] S. Ioffe and C. Szegedy, "Batch normalization: Accelerating deep network training by reducing internal covariate shift," arXiv preprint arXiv:1502.03167, 2015.

[16] M. Lin, Q. Chen, and S. Yan, "Network in network," arXiv preprint arXiv:1312.4400, 2013.
| [
"https://github.com/lazarotm/SVDCNN"
] |
[
"Think Visually: Question Answering through Virtual Imagery",
"Think Visually: Question Answering through Virtual Imagery"
] | [
"Ankit Goyal ankgoyal@umich.edu \nComputer Science and Engineering\nUniversity of Michigan\nAnn Arbor\n",
"Jian Wang \nComputer Science and Engineering\nUniversity of Michigan\nAnn Arbor\n",
"Jia Deng jiadeng@umich.edu \nComputer Science and Engineering\nUniversity of Michigan\nAnn Arbor\n"
] | [
"Computer Science and Engineering\nUniversity of Michigan\nAnn Arbor",
"Computer Science and Engineering\nUniversity of Michigan\nAnn Arbor",
"Computer Science and Engineering\nUniversity of Michigan\nAnn Arbor"
] | [
"Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers)"
] | In this paper, we study the problem of geometric reasoning in the context of question-answering. We introduce Dynamic Spatial Memory Network (DSMN), a new deep network architecture designed for answering questions that admit latent visual representations. DSMN learns to generate and reason over such representations. Further, we propose two synthetic benchmarks, FloorPlanQA and ShapeIntersection, to evaluate the geometric reasoning capability of QA systems. Experimental results validate the effectiveness of our proposed DSMN for visual thinking tasks 1 . | 10.18653/v1/p18-1242 | [
"https://www.aclweb.org/anthology/P18-1242.pdf"
] | 44,096,233 | 1805.11025 | 497243ed80033921c3c82c278780381a7d9d783e |
Think Visually: Question Answering through Virtual Imagery

Ankit Goyal (ankgoyal@umich.edu), Jian Wang, and Jia Deng (jiadeng@umich.edu)
Computer Science and Engineering, University of Michigan, Ann Arbor

Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers), Melbourne, Australia, July 15-20, 2018. Association for Computational Linguistics.
Abstract: In this paper, we study the problem of geometric reasoning in the context of question-answering. We introduce Dynamic Spatial Memory Network (DSMN), a new deep network architecture designed for answering questions that admit latent visual representations. DSMN learns to generate and reason over such representations. Further, we propose two synthetic benchmarks, FloorPlanQA and ShapeIntersection, to evaluate the geometric reasoning capability of QA systems. Experimental results validate the effectiveness of our proposed DSMN for visual thinking tasks¹.
Introduction
The ability to reason is a hallmark of intelligence and a requirement for building question-answering (QA) systems. In AI research, reasoning has been strongly associated with logic and symbol manipulation, as epitomized by work in automated theorem proving (Fitting, 2012). But for humans, reasoning involves not only symbols and logic, but also images and shapes. Einstein famously wrote: "The psychical entities which seem to serve as elements in thought are certain signs and more or less clear images which can be 'voluntarily' reproduced and combined... Conventional words or other signs have to be sought for laboriously only in a secondary state..." And the history of science abounds with discoveries from visual thinking, from the Benzene ring to the structure of DNA (Pinker, 2003).
There are also plenty of ordinary examples of human visual thinking. Consider a square room with a door in the middle of its southern wall. Suppose you are standing in the room such that the eastern wall of the room is behind you. Where is the door with respect to you? The answer is 'to your left.' Note that in this case both the question and answer are just text. But in order to answer the question, it is natural to construct a mental picture of the room and use it in the process of reasoning. Similar to humans, the ability to 'think visually' is desirable for AI agents like household robots. An example could be to construct a rough map and navigation plan for an unknown environment from verbal descriptions and instructions.
In this paper, we investigate how to model geometric reasoning (a form of visual reasoning) using deep neural networks (DNN). Specifically, we address the task of answering questions through geometric reasoning-both the question and answer are expressed in symbols or words, but a geometric representation is created and used as part of the reasoning process.
In order to focus on geometric reasoning, we do away with natural language by designing two synthetic QA datasets, FloorPlanQA and ShapeIntersection. In FloorPlanQA, we provide the blueprint of a house in words and ask questions about location and orientation of objects in it. For ShapeIntersection, we give a symbolic representation of various shapes and ask how many places they intersect. In both datasets, a reference visual representation is provided for each sample.
Further, we propose Dynamic Spatial Memory Network (DSMN), a novel DNN that uses virtual imagery for QA. DSMN is similar to existing memory networks (Kumar et al., 2016;Sukhbaatar et al., 2015;Henaff et al., 2016) in that it uses vector embeddings of questions and memory modules to perform reasoning. The main novelty of DSMN is that it creates virtual images for the input question and uses a spatial memory to aid the reasoning process.
We show through experiments that with the aid of an internal visual representation and a spatial memory, DSMN outperforms strong baselines on both FloorPlanQA and ShapeIntersection. We also demonstrate that explicitly learning to create visual representations further improves performance. Finally, we show that DSMN is substantially better than the baselines even when visual supervision is provided for only a small proportion of the samples.
It's important to note that our proposed datasets consist of synthetic questions as opposed to natural texts. Such a setup allows us to sidestep difficulties in parsing natural language and instead focus on geometric reasoning. However, synthetic data lacks the complexity and diversity of natural text. For example, spatial terms used in natural language have various ambiguities that need to be resolved by context (e.g. how far is "far" and whether "to the left" is relative to the speaker or the listener) (Shariff, 1998; Landau and Jackendoff, 1993), but our synthetic data lacks such complexities. Therefore, our method and results do not automatically generalize to real-life tasks involving natural language. Additional research is needed to extend and validate our approach on natural data.
Our contributions are three-fold: First, we present Dynamic Spatial Memory Network (DSMN), a novel DNN that performs geometric reasoning for QA. Second, we introduce two synthetic datasets that evaluate a system's visual thinking ability. Third, we demonstrate that on synthetic data, DSMN achieves superior performance for answering questions that require visual thinking.
Related Work
Natural language datasets for QA: Several natural language QA datasets have been proposed to test AI systems on various reasoning abilities (Levesque et al., 2011;Richardson et al., 2013). Our work differs from them in two key aspects: first, we use synthetic data instead of natural data; and second, we specialize in geometrical reasoning instead of general language understanding. Using synthetic data helps us simplify language parsing and thereby focus on geometric reasoning. However, additional research is necessary to generalize our work to natural data.
Synthetic datasets for QA: Recently, synthetic datasets for QA are also becoming crucial in AI. In particular, bAbI has driven the development of several recent DNN-based QA systems (Kumar et al., 2016;Sukhbaatar et al., 2015;Henaff et al., 2016). bAbI consists of 20 tasks to evaluate different reasoning abilities. Two tasks, Positional Reasoning (PR) and Path Finding (PF), are related to geometric reasoning. However, each Positional Reasoning question contains only two sentences, and can be solved through simple logical deduction such as 'A is left of B implies B is right of A'. Similarly, Path Finding involves a search problem that requires simple spatial deductions such as 'A is east of B implies B is west of A'. In contrast, the questions in our datasets involve longer descriptions, more entities, and more relations; they are thus harder to answer with simple deductions. We also provide reference visual representation for each sample, which is not available in bAbI.
Mental Imagery and Visual Reasoning: The importance of visual reasoning has long been recognized in AI (Forbus et al., 1991; Lathrop and Laird, 2007). Prior works in NLP (Seo et al., 2015; Lin and Parikh, 2015) have also studied visual reasoning. Our work is different from them as we use synthetic language instead of natural language. Our synthetic language is easier to parse, allowing our evaluation to mainly reflect the performance of geometric reasoning. On the other hand, while our method and conclusions can potentially apply to natural text, this remains to be validated and involves nontrivial future work. There are other differences to prior works as well. Specifically, (Seo et al., 2015) combined information from textual questions and diagrams to build a model for solving SAT geometry questions. However, our task is different as diagrams are not provided as part of the input, but are generated from the words/symbols themselves. Also, (Lin and Parikh, 2015) take advantage of synthetic images to gather semantic common sense knowledge (visual common sense) and use it to perform fill-in-the-blank (FITB) and visual paraphrasing tasks. Similar to us, they also form 'mental images'. However, there are two differences (apart from natural vs synthetic language): first, their benchmark tests higher-level semantic knowledge (like "Mike is having lunch when he sees a bear." implies "Mike tries to hide."), while ours is more focused on geometric reasoning. Second, their model is based on hand-crafted features while we use a DNN.

Spatial language for Human-Robot Interaction: Our work is also related to prior work on making robots understand spatial commands (e.g. "put that box here", "move closer to the box") and complete tasks such as navigation and assembly. Earlier work (Müller et al., 2000; Gribble et al., 1998; Zelek, 1997) in this domain used template-based commands, whereas more recent work (Skubic et al., 2004) tried to make the commands more natural. This line of work differs from ours in that the robot has visual perception of its environment that allows grounding of the textual commands, whereas in our case the agent has no visual perception, and an environment needs to be imagined.

Image Generation: Our work is related to image generation using DNNs, which has a large body of literature with diverse approaches (Reed et al., 2016; Gregor et al., 2015). We also generate an image from the input. But in our task, image generation is in the service of reasoning rather than an end goal in itself; as a result, photorealism or artistic style of generated images is irrelevant and not considered.

Visual Question Answering: Our work is also related to visual QA (VQA) (Johnson et al., 2016; Antol et al., 2015; Lu et al., 2016). Our task is different from VQA because our questions are in terms of words/symbols, whereas in VQA the questions are visual, consisting of both text descriptions and images. The images involved in our task are internal and virtual, and are not part of the input or output.

Memory and Attention: Memory and attention have been increasingly incorporated into DNNs, especially for tasks involving algorithmic inference and/or natural language (Graves et al., 2014; Vaswani et al., 2017). For QA tasks, memory and attention play an important role in state-of-the-art (SOTA) approaches. (Sukhbaatar et al., 2015) introduced End-To-End Memory Network (MemN2N), a DNN with memory and a recurrent attention mechanism, which can be trained end-to-end for diverse tasks like textual QA and language modeling. Concurrently, (Kumar et al., 2016) introduced Dynamic Memory Network (DMN), which also uses attention and memory. (Xiong et al., 2016) proposed DMN+, with several improvements over the previous version of DMN, and achieved SOTA results on VQA (Antol et al., 2015) and bAbI. Our proposed DSMN is a strict generalization of DMN+ (see Sec. 4.1). On removing the images and spatial memory from DSMN, it reduces to DMN+. Recently, (Gupta et al., 2017) also used spatial memory in their deep learning system, but for visual navigation; we use spatial memory for QA.

Figure 1: An example in the ShapeIntersection dataset.
Datasets
We introduce two synthetically-generated QA datasets to evaluate a system's geometrical reasoning ability: FloorPlanQA and ShapeIntersection. These datasets are not meant to test natural language understanding, but instead focus on geometrical reasoning. Owing to their synthetic nature, they are easy to parse, but nevertheless they are still challenging for DNNs like DMN+ (Xiong et al., 2016) and MemN2N (Sukhbaatar et al., 2015) that achieved SOTA results on existing QA datasets (see Table 2a). The proposed datasets are similar in spirit to bAbI, which is also synthetic. In spite of its synthetic nature, bAbI has proved to be a crucial benchmark for the development of new models like MemN2N and DMN+, variants of which have proved successful in various natural domains (Kumar et al., 2016; Perez and Liu, 2016). Our proposed datasets are the first to explicitly test 'visual thinking', and their synthetic nature helps us avoid the expensive and tedious task of collecting human annotations. Meanwhile, it is important to note that conclusions drawn from synthetic data do not automatically translate to natural data, and methods developed on synthetic benchmarks need additional validation on natural domains.
The proposed datasets also contain visual representations of the questions. Each of them has 38,400 questions, evenly split into a training set, a validation set and a test set (12,800 each).
Table 1: Templates used by the description generator for FloorPlanQA. For compactness we use the following notations: n - north, s - south, e - east, w - west, c - center, nr - northern, sr - southern, er - eastern, wr - western, cr - central, cu - cube, cd - cuboid, sp - sphere, co - cone.

House door:
  The house door is in the middle of the {nr, sr, er, wr} wall of the house.
  The house door is located in the {n-er, s-er, n-wr, s-wr} side of the house, such that it opens towards {n, s, e, w}.

Room door:
  The door for this room is in the middle of its {nr, sr, er, wr} wall.
  This room's door is in the middle of its {nr, sr, er, wr} wall.
  The door for this room is located in its {n-er, s-er, n-wr, s-wr} side, such that it opens towards {n, s, e, w}.
  This room's door is located in its {n-er, s-er, n-wr, s-wr} side, such that it opens towards {n, s, e, w}.

Small room:
  Room {1, 2, 3} is small in size and it is located in the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house.
  Room {1, 2, 3} is located in the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house and is small in size.

Medium room:
  Room {1, 2, 3} is medium in size and it extends from the {n, s, e, w, c, n-e, s-e, n-w, s-w} to the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house.
  Room {1, 2, 3} extends from the {n, s, e, w, c, n-e, s-e, n-w, s-w} to the {n, s, e, w, c, n-e, s-e, n-w, s-w} of the house and is medium in size.

Large room:
  Room {1, 2, 3} is large in size and it stretches along the {n-s, e-w} direction in the {n, s, e, w, c} of the house.
  Room {1, 2, 3} stretches along the {n-s, e-w} direction in the {n, s, e, w, c} of the house and is large in size.

Object:
  A {cu, cd, sp, co} is located in the middle of the {nr, sr, er, wr} part of the house.
  A {cu, cd, sp, co} is located in the {n-er, s-er, n-wr, s-wr, cr} part of the house.
  A {cu, cd, sp, co} is located in the middle of the {nr, sr, er, wr} part of this room.
  A {cu, cd, sp, co} is located in the {n-er, s-er, n-wr, s-wr, cr} part of this room.
FloorPlanQA: Each sample in FloorPlanQA involves the layout of a house that has multiple rooms (max 3). The rooms are either small, medium or large. All the rooms and the house have a door. Additionally, each room and empty-space in the house (i.e. the space in the house that is not part of any room) might also contain an object (either a cube, cuboid, sphere, or cone).
Each sample has four components: a description, a question, an answer, and a visual representation. Each sentence in the description describes either a room, a door, or an object. A question follows the template: Suppose you are entering the {house, room 1, room 2, room 3}, where is the {house door, room 1 door, room 2 door, room 3 door, cube, cuboid, sphere, cone} with respect to you? The answer is one of left, right, front, or back. Other characteristics of FloorPlanQA are summarized in Fig. 2.
The visual representation of a sample consists of an ordered set of image channels, one per sentence in the description. An image channel pictorially represents the location and/or orientation of the described item (room, door, object) w.r.t. the house. An example is shown in Fig. 2.
To generate samples for FloorPlanQA, we define a probabilistic generative process which produces tree structures representing layouts of houses, similar to scene graphs used in computer graphics. The root node of a tree represents an entire house, and the leaf nodes represent rooms. We use a description and visual generator to produce respectively the description and visual representation from the tree structure. The templates used by the description generator are described in Table 1. Furthermore, the order of sentences in a description is randomized while making sure that the description still makes sense. For example, in some sample, the description of room 1 can appear before that of the house door, while in another sample, it could be reversed. Similarly, for a room, the sentence describing the room's door could appear before or after the sentence describing the object in the room (if the room contains one). We perform rejection sampling to ensure that all the answers are equally likely, thus removing bias.
ShapeIntersection: As the name suggests, ShapeIntersection is concerned with counting the number of intersection points between shapes. In this dataset, the description consists of symbols representing various shapes, and the question is always "how many points of intersection are there among these shapes?"
There are three types of shapes in ShapeIntersection: rectangles, circles, and lines. The description of shapes is provided in the form of a sequence of 1D vectors, each vector representing one shape. A vector in ShapeIntersection is analogous to a sentence in FloorPlanQA. Hence, for ShapeIntersection, the term 'sentence' actually refers to a vector. Each sentence describing a shape consists of 5 real numbers. The first number stands for the type of shape: 1 - line, 2 - circle, and 3 - rectangle. The subsequent four numbers specify the size and location of the shape. For example, in the case of a rectangle, they represent its height, its width, and the coordinates of its bottom-left corner.

Fig. 2 (example description): "A cube is located in the south-eastern part of the house. Room 1 is located in the north-west of the house and is small in size. The door for this room is in the middle of its southern wall. The house door is located in the north-eastern side of the house, such that it opens towards east."
Note that one can also describe the shapes using a sentence, e.g. "there is a rectangle at (5, 5), with a height of 2 cm and width of 8 cm." However, as our focus is to evaluate 'visual thinking', we work directly with the symbolic encoding. In a given description, there are 6.5 shapes on average, and at most 6 lines, 3 rectangles and 3 circles. All the shapes in the dataset are unique and lie on a 10 × 10 canvas. While generating the dataset, we do rejection sampling to ensure that the number of intersections is uniformly distributed from 0 to the maximum possible number of intersections, regardless of the number of lines, rectangles, and circles. This ensures that the number of intersections cannot be estimated from the number of lines, circles or rectangles.
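As an illustration of this encoding, the rectangle from the sentence above would be represented as the following 5-number vector (our reading of the field order described in the text):

```python
# Type codes: 1 - line, 2 - circle, 3 - rectangle.
# "A rectangle at (5, 5), with a height of 2 cm and width of 8 cm":
rectangle = [3, 2, 8, 5, 5]  # [type, height, width, x_bottom_left, y_bottom_left]
```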
Similar to FloorPlanQA, the visual representation for a sample in this dataset is an ordered set of image channels. Each channel is associated with a sentence, and it plots the described shape. An example is shown in Figure 1.
Dynamic Spatial Memory Network
We propose Dynamic Spatial Memory Network (DSMN), a novel DNN designed for QA with geometric reasoning. What differentiates DSMN from other QA DNNs is that it forms an internal visual representation of the input. It then uses a spatial memory to reason over this visual representation.
A DSMN can be divided into five modules: the input module, visual representation module, question module, spatial memory module, and answer module. The input module generates an embedding for each sentence in the description. The visual representation module uses these embeddings to produce an intermediate visual representation for each sentence. In parallel, the question module produces an embedding for the question. The spatial memory module then goes over the question embedding, the sentence embeddings, and the visual representation multiple times to update the spatial memory. Finally, the answer module uses the spatial memory to output the answer. Fig. 3 illustrates the overall architecture of DSMN.

Input Module: This module produces an embedding for each sentence in the description. It is therefore customized based on how the descriptions are provided in a dataset. Since the descriptions are in words for FloorPlanQA, a position encoding (PE) layer is used to produce the initial sentence embeddings. This is done to ensure a fair comparison with DMN+ (Xiong et al., 2016) and MemN2N (Sukhbaatar et al., 2015), which also use a PE layer. A PE layer combines the word embeddings to encode the position of words in a sentence (please see (Sukhbaatar et al., 2015) for more information). For ShapeIntersection, the description is given as a sequence of vectors. Therefore, two FC layers (with ReLU in between) are used to obtain the initial sentence embeddings.
These initial sentence embeddings are then fed into a bidirectional Gated Recurrent Unit (GRU) (Cho et al., 2014) to propagate the information across sentences. Let $\overrightarrow{s_i}$ and $\overleftarrow{s_i}$ be the respective outputs of the forward and backward GRU at the $i$-th step. Then, the final sentence embedding for the $i$-th sentence is given by $s_i = \overrightarrow{s_i} + \overleftarrow{s_i}$.

Question Module: This module produces an embedding for the question. It is also customized to the dataset. For FloorPlanQA, the embeddings of the words in the question are fed to a GRU, and the final hidden state of the GRU is used as the question embedding. For ShapeIntersection, the question is always fixed, so we use an all-zero vector as the question embedding.

Visual Representation Module: This module generates a visual representation for each sentence in the description. It consists of two subcomponents: an attention network and an encoder-decoder network. The attention network gathers information from previous sentences that is important for producing the visual representation of the current sentence. For example, suppose the current sentence describes the location of an object with respect to a room. Then, in order to infer the location of the object with respect to the house, one needs the location of the room with respect to the house, which is described in some previous sentence.
The encoder-decoder network encodes the visual information gathered by the attention network, combines it with the current sentence embedding, and decodes the visual representation of the current sentence. An encoder ($En(\cdot)$) takes an image as input and produces an embedding, while a decoder ($De(\cdot)$) takes an embedding as input and produces an image. An encoder is composed of a series of convolution layers, and a decoder is composed of a series of deconvolution layers.
Suppose we are currently processing the sentence $s_t$. This means we have already processed the sentences $s_1, s_2, \ldots, s_{t-1}$ and produced the corresponding visual representations $S_1, S_2, \ldots, S_{t-1}$. We also add $s_0$ and $S_0$, which are all-zero vectors representing the null sentence. The attention network produces a scalar attention weight $a_i$ for the $i$-th sentence, given by

$$a_i = \mathrm{Softmax}(w_s^\top z_i + b_s), \quad \text{where } z_i = \big[\,|s_i - s_t|;\; s_i \circ s_t\,\big].$$

Here, $w_s$ is a vector, $b_s$ is a scalar, $\circ$ represents element-wise multiplication, $|\cdot|$ represents element-wise absolute value, and $[v_1; v_2]$ represents the concatenation of vectors $v_1$ and $v_2$.

The gathered visual information is $\bar{S}_t = \sum_{i=0}^{t-1} a_i S_i$. It is fed into the encoder-decoder network. The visual representation for $s_t$ is given by $S_t = De_s\big([\,s_t;\; En_s(\bar{S}_t)\,]\big)$. The parameters of $En_s(\cdot)$, $De_s(\cdot)$, $w_s$, and $b_s$ are shared across multiple iterations.
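A minimal PyTorch sketch of this attention step, under the stated definitions of $a_i$ and $z_i$; all tensor names and shapes below are our assumptions:

```python
# Sketch of the visual-representation attention (assumed shapes/names).
import torch
import torch.nn.functional as F

def gather_visual_context(s_prev, S_prev, s_t, w_s, b_s):
    # s_prev: (t, d) embeddings of sentences 0..t-1 (including the null sentence)
    # S_prev: (t, H, W) their visual representations; s_t: (d,) current sentence
    z = torch.cat([(s_prev - s_t).abs(), s_prev * s_t], dim=-1)  # (t, 2d)
    a = F.softmax(z @ w_s + b_s, dim=0)                          # (t,) weights
    return (a[:, None, None] * S_prev).sum(dim=0)                # (H, W) context
```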
In the proposed model, we make the simplifying assumption that the visual representation of the current sentence does not depend on future sentences. In other words, it can be completely determined from the previous sentences in the description. Both FloorPlanQA and ShapeIntersection satisfy this assumption.

Spatial Memory Module: This module gathers relevant information from the description and updates memory accordingly. Similar to DMN+ and MemN2N, it collects information and updates memory multiple times to perform transitive reasoning. One iteration of information collection and memory update is referred to as a 'hop'.
The memory consists of two components: a 2D spatial memory and a tag vector. The 2D spatial memory can be thought of as a visual scratch pad on which the network 'sketches' out the visual information. The tag vector is meant to represent what is 'sketched' on the 2D spatial memory. For example, the network can sketch the location of room 1 on its 2D spatial memory, and store the fact that it has sketched room 1 in the tag vector.
As mentioned earlier, each step of the spatial memory module involves gathering relevant information and updating the memory. Suppose we are in step $t$. Let $M^{(t-1)}$ represent the 2D spatial memory and $m^{(t-1)}$ the tag vector after step $t-1$. The network gathers the relevant information by calculating an attention value for each sentence based on the question and the current memory. For sentence $s_i$, the scalar attention value $g_i^{(t)}$ equals $\mathrm{Softmax}(w_y^\top p_i^{(t)} + b_y)$, where $p_i^{(t)}$ is given as

$$p_i^{(t)} = \Big[\,|m^{(t-1)} - s_i|;\; m^{(t-1)} \circ s_i;\; |q - s_i|;\; q \circ s_i;\; En_{p_1}^{(t)}\big(|M^{(t-1)} - S_i|\big);\; En_{p_2}^{(t)}\big(M^{(t-1)} \circ S_i\big)\,\Big] \qquad (1)$$

$M^{(0)}$ and $m^{(0)}$ represent the initial blank memory, and their elements are all zero. The gathered information is then represented as a context tag vector, $c^{(t)} = \mathrm{AttGRU}\big(g_i^{(t)} s_i\big)$, and a 2D context, $C^{(t)} = \sum_{i=0}^{n} g_i^{(t)} S_i$. Please refer to (Xiong et al., 2016) for information about $\mathrm{AttGRU}(\cdot)$. Finally, we use the 2D context and context tag vector to update the memory as follows:

$$m^{(t)} = \mathrm{ReLU}\Big(W_m^{(t)} \big[\,m^{(t-1)};\; q;\; c^{(t)};\; En_c(C^{(t)})\,\big] + b_m^{(t)}\Big) \qquad (2)$$

$$M^{(t)} = De_m^{(t)}\Big(\big[\,m^{(t)};\; En_m^{(t)}(M^{(t-1)})\,\big]\Big) \qquad (3)$$
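A sketch of one memory-update step (Equations 2 and 3) in PyTorch; the encoders $En_c$, $En_m$ and the decoder $De_m$ are passed in as callables, and all names are assumptions rather than the authors' code:

```python
# Sketch of one spatial-memory update hop (assumed names and shapes).
import torch
import torch.nn.functional as F

def update_memory(m_prev, M_prev, q, c_t, C_t, W_m, b_m, En_c, En_m, De_m):
    # Tag-vector update (Eq. 2): fuse old tag, question, context tag, and an
    # encoding of the 2D context, then project through a ReLU layer.
    feat = torch.cat([m_prev, q, c_t, En_c(C_t)], dim=-1)
    m_t = F.relu(feat @ W_m + b_m)
    # 2D spatial memory update (Eq. 3): decode the new tag together with an
    # encoding of the previous spatial memory back into an image-like map.
    M_t = De_m(torch.cat([m_t, En_m(M_prev)], dim=-1))
    return m_t, M_t
```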
Answer Module: This module uses the final memory and question embedding to generate the output. The feature vector used for predicting the answer is given by

$$f = \big[\,En_f(M^{(T)});\; m^{(T)};\; q\,\big] \qquad (4)$$

where $M^{(T)}$ and $m^{(T)}$ represent the final memory. To obtain the output, an FC layer is applied to $f$ in the case of regression, while the FC layer is followed by a softmax in the case of classification. To keep DSMN similar to DMN+, we apply a dropout layer on the sentence encodings ($s_i$) and $f$.
DSMN as a strict generalization of DMN
DSMN is a strict generalization of DMN+. If we remove the visual representation of the input along with the 2D spatial memory, and just use vector representations with memory tags, then a DSMN reduces to DMN+. This ensures that the comparison with DMN+ is fair.
DSMN with or without intermediate visual supervision
As described in previous sections, a DSMN forms an intermediate visual representation of the input. Therefore, if we have a 'ground-truth' visual representation for the training data, we could use it to train our network better. This leads to two different ways of training a DSMN, one with intermediate visual supervision and one without it. Without intermediate visual supervision, we train the network in an end-to-end fashion using a loss ($L_{w/o\;vi}$) that compares the predicted answer with the ground truth. With intermediate visual supervision, we train our network using an additional visual representation loss ($L_{vi}$) that measures how close the generated visual representation is to the ground-truth representation. Thus, the loss used for training with intermediate supervision is given by

$$L_{w\;vi} = \lambda_{vi} L_{vi} + (1 - \lambda_{vi}) L_{w/o\;vi},$$

where $\lambda_{vi}$ weights the relative importance of the two terms.
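The combined objective can be written as a one-line helper; `lambda_vi` is the mixing hyperparameter:

```python
# Combined loss with intermediate visual supervision (see the equation above).
def dsmn_loss(loss_visual, loss_answer, lambda_vi):
    return lambda_vi * loss_visual + (1 - lambda_vi) * loss_answer
```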
Experiments
Baselines: LSTM (Hochreiter and Schmidhuber, 1997) is a popular neural network for sequence processing tasks. We use two versions of LSTM-based baselines. LSTM-1 is a common version that is used as a baseline for textual QA (Sukhbaatar et al., 2015; Graves et al., 2016). In LSTM-1, we concatenate all the sentences and the question into a single string. For FloorPlanQA, we do word-embedding look-up, while for ShapeIntersection, we project each real number into a higher dimension via a series of FC layers. The sequence of vectors is fed into an LSTM. The final output vector of the LSTM is then used for prediction. We develop another version of LSTM that we call LSTM-2, in which the question is concatenated to the description. We use a two-level hierarchy to embed the description. We first extract an embedding for each sentence. For FloorPlanQA, we use an LSTM to get the sentence embeddings, and for ShapeIntersection, we use a series of FC layers. We then feed the sentence embeddings into an LSTM, whose output is used for prediction.
Further, we compare our model to DMN+ (Xiong et al., 2016) and MemN2N (Sukhbaatar et al., 2015), which achieved state-of-the-art results on bAbI. In particular, we compare the 3-hop versions of DSMN, DMN+, and MemN2N.

Training Details: We used ADAM (Kingma and Ba, 2014) to train all models, and the learning rate for each model is tuned for each dataset. We tune the embedding size and $l_2$ regularization weight for each model and dataset pair separately. For reproducibility, the values of the best-tuned hyperparameters are given in the supplementary material. As reported by (Sukhbaatar et al., 2015; Kumar et al., 2016; Henaff et al., 2016), we also observe that the results of memory networks are unstable across multiple runs. Therefore, for each hyperparameter choice, we run all the models 10 times and select the run with the best performance on the validation set.

As shown in Fig. 4, DSMN* outperforms DMN+ by a large margin, even when intermediate visual supervision is provided for only 1% of the training samples. This can be useful when obtaining visual representations is expensive and time-consuming. One possible justification for why visual supervision (even in a small amount) helps a lot is that it constrains the high-dimensional space of possible intermediate visual representations. With limited data and no explicit supervision, automatically learning these high-dimensional representations can be difficult.
Additionally, we performed an ablation study (see Table 2b) on the usefulness of the final memory tag vector ($m^{(T)}$) and the 2D spatial memory ($M^{(T)}$) in the answer feature vector $f$ (see Eqn. 4). We removed each of them one at a time and retrained (with hyperparameter tuning) the DSMN and DSMN* models. Note that they are removed only from the final feature vector $f$; both remain coupled in the network. The model with both the tag vector and the 2D spatial memory ($f = [En_f(M^{(T)}); m^{(T)}; q]$) performs slightly better than the tag-vector-only model ($f = [m^{(T)}; q]$). Also, as expected, the 2D-spatial-memory-only model ($f = [En_f(M^{(T)}); q]$) performs much better for DSMN* than for DSMN because of the intermediate supervision.
Further, Table 2c shows the effect of varying the number of memory 'hops' for DSMN and DSMN* on FloorPlanQA. The performance of both DSMN and DSMN* increases with the number of 'hops'. Note that even the 1-hop DSMN* performs well (better than the baselines). Also, the difference in performance between the 2-hop DSMN* and the 3-hop DSMN* is small. A possible justification for why DSMN* performs well even with fewer memory 'hops' is that it completes some 'hops of reasoning' in the visual representation module itself. Suppose one needs to find the location of an object placed in a room, w.r.t. the house. To do so, one first needs to find the location of the room w.r.t. the house, and then the location of the object w.r.t. the room. However, if one has already 'sketched' out the location of the object in the house, one can directly fetch it. It is during sketching the object's location that one has completed a 'hop of reasoning'. For a sample from FloorPlanQA, we visualize the attention maps in the memory module of the 3-hop DMN+ and the 3-hop DSMN* in Fig. 5. To infer the location of room 1's door, DSMN* directly fetches sentence 3, while DMN+ tries to do so by fetching two sentences (one for the room's door location w.r.t. the room and one for the room's location w.r.t. the house).

Conclusion: We have investigated how to use DNNs for modeling visual thinking. We have introduced two synthetic QA datasets, FloorPlanQA and ShapeIntersection, that test a system's ability to think visually. We have developed DSMN, a novel DNN that reasons in the visual space for answering questions. Experimental results have demonstrated the effectiveness of DSMN for geometric reasoning on synthetic data.
Figure 3: The architecture of the proposed Dynamic Spatial Memory Network (DSMN).

Table 2: Experimental results showing comparison with baselines, and ablation study of DSMN. (a) Test set rmse on ShapeIntersection. (b) Test set accuracy on FloorPlanQA.

For FloorPlanQA, all models are trained up to a maximum of 1600 epochs, with early stopping after 80 epochs if the validation accuracy did not increase. The maximum number of epochs for ShapeIntersection is 800, with early stopping after 80 epochs. Additionally, we modify the input module and question module of DMN+ and MemN2N to be the same as ours for the ShapeIntersection dataset. For MemN2N, we use the publicly available implementation² and train it exactly as all other models (same optimizer, total epochs, and early stopping criteria) for fairness. While the reported best result for MemN2N is on the version with position encoding, linear start training, and random injection of time index noise (Sukhbaatar et al., 2015), the version we use has only position encoding. Note that the comparison is still meaningful because linear start training and time index noise are not used in DMN+ (and, as a result, neither in our proposed DSMN).

Figure 4: Performance of DSMN* with varying percentage of intermediate visual supervision. (a) Test set rmse on ShapeIntersection. (b) Test set accuracy on FloorPlanQA.

Results: The results for FloorPlanQA and ShapeIntersection are summarized in Table 2a. For brevity, we will refer to the DSMN model trained without intermediate visual supervision as DSMN, and the one with intermediate visual supervision as DSMN*. We see that DSMN (i.e. the one without intermediate supervision) outperforms DMN+, MemN2N, and the LSTM baselines on both datasets. However, we consider DSMN to be only slightly better than DMN+ because both are observed to be unstable across multiple runs, and so the gap between the two has a large variance. Finally, DSMN* outperforms all other approaches by a large margin on both datasets, which demonstrates the utility of visual supervision in the proposed tasks. While the variation can be significant across runs, if we run each model 10 times and choose the best run, we observe consistent results. We visualized the intermediate visual representations; when no visual supervision is provided, they were not interpretable (sometimes they looked like random noise, sometimes blank). When visual supervision is provided, the intermediate visual representation is well-formed and similar to the ground truth. We further investigate how DSMN* performs when intermediate visual supervision is available for only a portion of the training samples, as shown in Fig. 4.

Figure 5: Attention values on each sentence during different memory 'hops' for a sample from FloorPlanQA. Darker color indicates more attention. To answer, one needs the location of room 1's door and the house door. To infer the location of room 1's door, DSMN* directly jumps to sent. 3. Since DMN+ does not form a visual representation, it tries to infer the location of room 1's door w.r.t. the house by finding the location of the room's door w.r.t. the room (sent. 3) and the location of the room w.r.t. the house (sent. 2). Both DSMN* and DMN+ use one hop to infer the location of the house door (sent. 1).
¹ Code and datasets: https://github.com/umich-vl/think_visually

² https://github.com/domluna/memn2n
References

Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question answering. In ICCV, pages 2425-2433.

Kyunghyun Cho, Bart van Merriënboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.

Melvin Fitting. 2012. First-order logic and automated theorem proving. Springer Science & Business Media.

Kenneth D. Forbus, Paul Nielsen, and Boi Faltings. 1991. Qualitative spatial reasoning: The CLOCK project. Artificial Intelligence, 51(1-3):417-471.

Alex Graves, Greg Wayne, and Ivo Danihelka. 2014. Neural Turing machines. arXiv preprint arXiv:1410.5401.

Alex Graves, Greg Wayne, Malcolm Reynolds, Tim Harley, Ivo Danihelka, Agnieszka Grabska-Barwińska, Sergio Gómez Colmenarejo, Edward Grefenstette, Tiago Ramalho, John Agapiou, et al. 2016. Hybrid computing using a neural network with dynamic external memory. Nature, pages 471-476.

Karol Gregor, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. 2015. DRAW: A recurrent neural network for image generation. arXiv preprint arXiv:1502.04623.

William S. Gribble, Robert L. Browning, Micheal Hewett, Emilio Remolina, and Benjamin J. Kuipers. 1998. Integrating vision and spatial reasoning for assistive navigation. In Assistive Technology and Artificial Intelligence, pages 179-193.

Saurabh Gupta, James Davidson, Sergey Levine, Rahul Sukthankar, and Jitendra Malik. 2017. Cognitive mapping and planning for visual navigation. arXiv preprint arXiv:1702.03920.

Mikael Henaff, Jason Weston, Arthur Szlam, Antoine Bordes, and Yann LeCun. 2016. Tracking the world state with recurrent entity networks. arXiv preprint arXiv:1612.03969.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, pages 1735-1780.

Justin Johnson, Bharath Hariharan, Laurens van der Maaten, Li Fei-Fei, C. Lawrence Zitnick, and Ross Girshick. 2016. CLEVR: A diagnostic dataset for compositional language and elementary visual reasoning. arXiv preprint arXiv:1612.06890.

Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.

Ankit Kumar, Ozan Irsoy, Peter Ondruska, Mohit Iyyer, James Bradbury, Ishaan Gulrajani, Victor Zhong, Romain Paulus, and Richard Socher. 2016. Ask me anything: Dynamic memory networks for natural language processing. In ICML, pages 1378-1387.

Barbara Landau and Ray Jackendoff. 1993. Whence and whither in spatial language and spatial cognition? Behavioral and Brain Sciences, 16:255-265.

Scott D. Lathrop and John E. Laird. 2007. Towards incorporating visual imagery into a cognitive architecture. In International Conference on Cognitive Modeling, page 25.

Hector J. Levesque, Ernest Davis, and Leora Morgenstern. 2011. The Winograd schema challenge. In AAAI Spring Symposium, volume 46, page 47.

Xiao Lin and Devi Parikh. 2015. Don't just listen, use your imagination: Leveraging visual common sense for non-visual tasks. In ICCV, pages 2984-2993.

Jiasen Lu, Jianwei Yang, Dhruv Batra, and Devi Parikh. 2016. Hierarchical question-image co-attention for visual question answering. In NIPS, pages 289-297.

Rolf Müller, Thomas Röfer, Axel Lankenau, Alexandra Musto, Klaus Stein, and Andreas Eisenkolb. 2000. Coarse qualitative descriptions in robot navigation. In Spatial Cognition II, pages 265-276.

Julien Perez and Fei Liu. 2016. Dialog state tracking, a machine reading approach using memory network. arXiv preprint arXiv:1606.04052.

Steven Pinker. 2003. The language instinct: How the mind creates language. Penguin UK.

Scott Reed, Zeynep Akata, Xinchen Yan, Lajanugen Logeswaran, Bernt Schiele, and Honglak Lee. 2016. Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396.

Matthew Richardson, Christopher J. C. Burges, and Erin Renshaw. 2013. MCTest: A challenge dataset for the open-domain machine comprehension of text. In EMNLP, volume 3, page 4.

Minjoon Seo, Hannaneh Hajishirzi, Ali Farhadi, Oren Etzioni, and Clint Malcolm. 2015. Solving geometry problems: Combining text and diagram interpretation. In EMNLP, pages 1466-1476.

A. Rashid B. M. Shariff. 1998. Natural-language spatial relations between linear and areal objects: the topology and metric of English-language terms. International Journal of Geographical Information Science, 12:215-245.

Marjorie Skubic, Dennis Perzanowski, Samuel Blisard, Alan Schultz, William Adams, Magda Bugajska, and Derek Brock. 2004. Spatial language for human-robot dialogs. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), pages 154-167.

Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. 2015. End-to-end memory networks. In NIPS, pages 2440-2448.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
Towards ai-complete question answering: A set of prerequisite toy tasks. Jason Weston, Antoine Bordes, Sumit Chopra, Alexander M Rush, Bart Van Merriënboer, Armand Joulin, Tomas Mikolov, arXiv:1502.05698arXiv preprintJason Weston, Antoine Bordes, Sumit Chopra, Alexan- der M Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. arXiv preprint arXiv:1502.05698.
Dynamic memory networks for visual and textual question answering. Caiming Xiong, Stephen Merity, Richard Socher, ICML. Caiming Xiong, Stephen Merity, and Richard Socher. 2016. Dynamic memory networks for visual and textual question answering. In ICML, pages 2397- 2406.
Human-robot interaction with minimal spanning natural language template for autonomous and tele-operated control. S John, Zelek, IROS. John S Zelek. 1997. Human-robot interaction with minimal spanning natural language template for au- tonomous and tele-operated control. In IROS, pages 299-305.
| [
"https://github.com/domluna/memn2n"
] |
[
"Dependency Parsing with Dilated Iterated Graph CNNs",
"Dependency Parsing with Dilated Iterated Graph CNNs"
] | [
"Emma Strubell strubell@cs.umass.edu \nCollege of Information and Computer Sciences\nUniversity of Massachusetts Amherst\n\n",
"Andrew Mccallum mccallum@cs.umass.edu \nCollege of Information and Computer Sciences\nUniversity of Massachusetts Amherst\n\n"
] | [
"College of Information and Computer Sciences\nUniversity of Massachusetts Amherst\n",
"College of Information and Computer Sciences\nUniversity of Massachusetts Amherst\n"
] | [] | Dependency parses are an effective way to inject linguistic knowledge into many downstream tasks, and many practitioners wish to efficiently parse sentences at scale. While recent advances in GPU hardware have enabled neural networks to achieve significant gains over the previous best models, these models still fail to leverage GPUs' capability for massive parallelism due to their requirement of sequential processing of the sentence. In response, we propose Dilated Iterated Graph Convolutional Neural Networks (DIG-CNNs) for graph-based dependency parsing, a graph convolutional architecture that allows for efficient end-to-end GPU parsing. In experiments on the English Penn TreeBank benchmark, we show that DIG-CNNs perform on par with some of the best neural network parsers. | 10.18653/v1/w17-4301 | [
"https://arxiv.org/pdf/1705.00403v2.pdf"
] | 7,534,444 | 1705.00403 | 37f04bebe3bf0b8b70f33adf255cec3faa938667 |
Dependency Parsing with Dilated Iterated Graph CNNs
Emma Strubell strubell@cs.umass.edu
College of Information and Computer Sciences
University of Massachusetts Amherst
Andrew McCallum mccallum@cs.umass.edu
College of Information and Computer Sciences
University of Massachusetts Amherst
Dependency Parsing with Dilated Iterated Graph CNNs
Dependency parses are an effective way to inject linguistic knowledge into many downstream tasks, and many practitioners wish to efficiently parse sentences at scale. While recent advances in GPU hardware have enabled neural networks to achieve significant gains over the previous best models, these models still fail to leverage GPUs' capability for massive parallelism due to their requirement of sequential processing of the sentence. In response, we propose Dilated Iterated Graph Convolutional Neural Networks (DIG-CNNs) for graph-based dependency parsing, a graph convolutional architecture that allows for efficient end-to-end GPU parsing. In experiments on the English Penn TreeBank benchmark, we show that DIG-CNNs perform on par with some of the best neural network parsers.
Introduction
By vastly accelerating and parallelizing the core numeric operations for performing inference and computing gradients in neural networks, recent developments in GPU hardware have facilitated the emergence of deep neural networks as state-of-the-art models for many NLP tasks, such as syntactic dependency parsing. The best neural dependency parsers generally consist of two stages: First, they employ a recurrent neural network such as a bidirectional LSTM to encode each token in context; next, they compose these token representations into a parse tree. Transition-based dependency parsers (Nivre, 2009; Chen and Manning, 2014; Andor et al., 2016) produce a well-formed tree by predicting and executing a series of shift-reduce actions, whereas graph-based parsers (McDonald et al., 2005; Kiperwasser and Goldberg, 2016; Dozat and Manning, 2017) generally employ attention to produce marginals over each possible edge in the graph, followed by a dynamic programming algorithm to find the most likely tree given those marginals.
Because of their dependency on sequential processing of the sentence, none of these architectures fully exploit the massive parallel processing capability that GPUs possess. If we wish to maximize GPU resources, graph-based dependency parsers are more desirable than their transition-based counterparts since attention over the edge-factored graph can be parallelized across the entire sentence, unlike the transition-based parser which must sequentially predict and perform each transition. By encoding token-level representations with an Iterated Dilated CNN (ID-CNN) (Strubell et al., 2017), we can also remove the sequential dependencies of the RNN layers. Unlike Strubell et al. (2017), who use 1-dimensional convolutions over the sentence to produce token representations, our network employs 2-dimensional convolutions over the adjacency matrix of the sentence's parse tree, modeling attention from the bottom up. By training with an objective that encourages our model to predict trees using only simple matrix operations, we additionally remove the computational cost of dynamic programming inference. Combining all of these ideas, we present Dilated Iterated Graph CNNs (DIG-CNNs): a combined convolutional neural network architecture and training objective for efficient, end-to-end GPU graph-based dependency parsing.
We demonstrate the efficacy of these models in experiments on the English Penn TreeBank, in which our models perform similarly to the state-of-the-art.
Dilated Convolutions
Though common in other areas such as computer vision, 2-dimensional convolutions are rarely used in NLP since it is usually unclear how to process text as a 2-dimensional grid. However, 2-dimensional convolutional layers are a natural model for embedding the adjacency matrix of a sentence's parse.
A 2-dimensional convolutional neural network layer transforms each input element, in our case an edge in the dependency graph, as a linear function of the width $r_w$ and height $r_h$ window of surrounding input elements (other possible edges in the dependency graph). In this work we assume square convolutional windows: $r_h = r_w$.
Dilated convolutions perform the same operation, except rather than transforming directly adjacent inputs, the convolution is defined over a wider input window by skipping over δ inputs at a time, where δ is the dilation width. A dilated convolution of width 1 is equivalent to a simple convolution. Using the same number of parameters as a simple convolution with the same radius, the δ > 1 dilated convolution incorporates broader context into the representation of a token than a simple convolution.
Iterated Dilated CNNs
Stacking many dilated CNN layers can easily incorporate information from a whole sentence. For example, with a radius of 1 and 4 layers of dilated convolutions, the effective input window size for each token is width 31, which exceeds the average sentence length (23) in the Penn TreeBank corpus. However, simply increasing the depth of the CNN can cause considerable over-fitting when data is sparse relative to the growth in model parameters.
To address this, we employ Iterated Dilated CNNs (ID-CNNs) (Strubell et al., 2017), which instead apply the same small stack of dilated convolutions repeatedly, each time taking the result of the last stack as input to the current iteration. Applying the parameters recurrently in this way increases the size of the window of context incorporated into each token representation while allowing the model to generalize well. Their training objective additionally computes a loss for the output of each application, encouraging parameters that allow subsequent stacks to resolve dependency violations from their predecessors.
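To make the window arithmetic above concrete, here is a small sketch (ours, not from the paper) that computes the effective receptive field of a stack of radius-1 dilated convolutions; with dilations 1, 2, 4, 8 it reproduces the width-31 figure quoted above:

```python
# Sketch: effective receptive field of stacked dilated convolutions.
# Each layer with radius r and dilation d widens the field by 2*r*d.
def receptive_field(radius, dilations):
    width = 1
    for d in dilations:
        width += 2 * radius * d
    return width

# Four radius-1 layers with exponentially increasing dilation:
print(receptive_field(1, [1, 2, 4, 8]))  # -> 31
```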
Dilated Iterated Graph CNNs
We describe how to extend ID-CNNs (Strubell et al., 2017) to 2-dimensional convolutions over the adjacency matrix of a sentence's parse tree, allowing us to model the parse tree through the whole network, incorporating evidence about nearby head-dependent relationships in every layer of the network, rather than modeling at the token level followed by a single layer of attention to produce head-dependent compatibilities between tokens. ID-CNNs allow us to efficiently incorporate evidence from the entire tree without sacrificing generalizability.
Model architecture
Let $x = [x_1, \ldots, x_T]$ be our input text.¹ Let $y = [y_1, \ldots, y_T]$ be labels with domain size $D$ for the edge between each token $x_i$ and its head $x_j$. We predict the most likely $y$, given a conditional model $P(y|x)$ where the tags are conditionally independent given some features for $x$:

$P(y|x) = \prod_{t=1}^{T} P(y_t \mid F(x)) \qquad (1)$
The local conditional distributions of Eqn. (1) come from a straightforward extension of ID-CNNs (Strubell et al., 2017) to 2-dimensional convolutions. This network takes as input a sequence of $T$ vectors $x_t$, and outputs a $T \times T$ matrix of per-class scores $h_{ij}$ for each pair of tokens in the sentence.
We denote the $k$th dilated convolutional layer of dilation width $\delta$ as $D_{\delta}^{(k)}$. The first layer in the network transforms the input to a graph by concatenating all pairs of vectors in the sequence $x_i, x_j$ and applying a 2-dimensional dilation-1 convolution $D_1^{(0)}$:

$c_{ij}^{(0)} = D_1^{(0)}[x_i; x_j] \qquad (2)$
We denote vector concatenation with $[\cdot\,;\cdot]$. Next, $L_c$ layers of dilated convolutions of exponentially increasing dilation width are applied to $c_{ij}^{(0)}$, folding increasingly broader context into the embedded representation of $e_{ij}$ at each layer. Let $r(\cdot)$ denote the ReLU activation function (Glorot et al., 2011). Beginning with $c_{ij}^{(0)}$, we define the stack of layers with the following recurrence:

$c_{ij}^{(k)} = r\big(D_{2^{k-1}}^{(k-1)} c_{ij}^{(k-1)}\big) \qquad (3)$
and add a final dilation-1 layer to the stack:

$c_{ij}^{(L_c+1)} = r\big(D_1^{(L_c)} c_{ij}^{(L_c)}\big) \qquad (4)$
We refer to this stack of dilated convolutions as a block $B(\cdot)$, which has output resolution equal to its input resolution. To incorporate even broader context without over-fitting, we avoid making $B$ deeper, and instead iteratively apply $B$ $L_b$ times, introducing no extra parameters. Starting with $b_{ij}^{(1)} = B([x_i; x_j])$, we define the output of block $m$:

$b_{ij}^{(m)} = B\big(b_{ij}^{(m-1)}\big) \qquad (5)$
We apply a simple affine transformation $W_o$ to this final representation to obtain label scores for each edge $e_{ij}$:

$h_{ij}^{(L_b)} = W_o\, b_{ij}^{(L_b)} \qquad (6)$
We can obtain the most likely head (and its label) for each dependent by computing the argmax over all labels for all heads for each dependent:

$h_t = \arg\max_j h_{ij}^{(L_b)} \qquad (7)$
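The following PyTorch sketch puts Eqns. (2)-(7) together. It is our reconstruction for illustration only: the hidden size, dilation schedule, and the collapsing of labels during decoding are placeholder choices, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class DIGCNN(nn.Module):
    """Sketch of a DIG-CNN: pairwise grid -> iterated dilated block -> edge scores."""
    def __init__(self, d_tok, d_hid, n_labels, dilations=(1, 2, 4), n_blocks=3):
        super().__init__()
        # Eqn (2): dilation-1 convolution over the concatenated pairs [x_i; x_j].
        self.input_conv = nn.Conv2d(2 * d_tok, d_hid, 3, padding=1)
        # One block: dilated layers of exponentially increasing width plus a
        # final dilation-1 layer, Eqns (3)-(4).
        self.block = nn.ModuleList(
            [nn.Conv2d(d_hid, d_hid, 3, dilation=d, padding=d)
             for d in list(dilations) + [1]])
        self.n_blocks = n_blocks
        self.out = nn.Conv2d(d_hid, n_labels, 1)  # affine W_o of Eqn (6)

    def forward(self, x):                          # x: (B, T, d_tok)
        B, T, d = x.shape
        grid = torch.cat([x.unsqueeze(2).expand(B, T, T, d),   # x_i broadcast over j
                          x.unsqueeze(1).expand(B, T, T, d)],  # x_j broadcast over i
                         dim=-1).permute(0, 3, 1, 2)           # (B, 2d, T, T)
        c = torch.relu(self.input_conv(grid))
        scores = []                                # one score map per block, for Eqn (9)
        for _ in range(self.n_blocks):             # iterated application, Eqn (5)
            for conv in self.block:
                c = torch.relu(conv(c))
            scores.append(self.out(c))             # (B, n_labels, T, T)
        return scores

# Greedy decoding in the spirit of Eqn (7): for each dependent i, take the
# argmax over heads j, collapsing labels by max:
# heads = scores[-1].amax(dim=1).argmax(dim=-1)    # (B, T)
```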
Training
Our main focus is to apply the DIG-CNN as feature extraction for the conditional model described in Sec. 3.1, where tags are conditionally independent given deep features, since this will enable prediction that is parallelizable across all possible edges. Here, maximum likelihood training is straightforward because the likelihood decouples into the sum of the likelihoods of independent logistic regression problems for every edge, with natural parameters given by Eqn. (6):
$\frac{1}{T}\sum_{t=1}^{T} \log P(y_t \mid h_t) \qquad (8)$
We could also use the DIG-CNN as input features for an MST parser, where the partition function and its gradient are computed using Kirchhoff's Matrix-Tree Theorem (Tutte, 1984), but our aim is to approximate inference in a tree-structured graphical model using greedy inference and expressive features over the input in order to perform inference as efficiently as possible on a GPU.
To help bridge the gap between these two techniques, we use the training technique described in (Strubell et al., 2017). The tree-structured graphical model has preferable sample complexity and accuracy since prediction directly reasons in the space of structured outputs. Instead, we compile some of this reasoning in output space into DIG-CNN feature extraction. Instead of explicit reasoning over output labels during inference, we train the network such that each block is predictive of output labels. Subsequent blocks learn to correct dependency violations of their predecessors, refining the final sequence prediction.
To do so, we first define predictions of the model after each of the $L_b$ applications of the block. Let $h_t^{(m)}$ be the result of applying the matrix $W_o$ from Eqn. (6) to $b_{ij}^{(m)}$, the output of block $m$. We minimize the average of the losses for each application of the block:

$\frac{1}{L_b}\sum_{m=1}^{L_b} \frac{1}{T}\sum_{t=1}^{T} \log P\big(y_t \mid h_t^{(m)}\big) \qquad (9)$
By rewarding accurate predictions after each application of the block, we learn a model where later blocks are used to refine initial predictions. The loss also helps reduce the vanishing gradient problem (Hochreiter, 1998) for deep architectures.
We apply dropout (Srivastava et al., 2014) to the raw inputs $x_{ij}$ and to each block's output $b_{ij}^{(m)}$ to help prevent overfitting.
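For illustration, the per-block objective of Eqn. (9) can be written as a mean of cross-entropies (cross-entropy being the negative of the log-likelihood above). The sketch below assumes the hypothetical DIGCNN module sketched earlier and scores head selection only, collapsing labels for brevity:

```python
import torch
import torch.nn.functional as F

def dig_cnn_loss(scores, heads):
    # scores: list of per-block outputs, each of shape (B, n_labels, T, T);
    # heads:  (B, T) gold head index for every dependent token.
    losses = []
    for s in scores:                               # one term per block application
        logits = s.amax(dim=1)                     # (B, dependent, head)
        losses.append(F.cross_entropy(logits.transpose(1, 2), heads))
    return torch.stack(losses).mean()              # average over blocks, Eqn (9)
```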
Related work
Currently, the most accurate parser in terms of labeled and unlabeled attachment scores is the neural network graph-based dependency parser of Dozat and Manning (2017). Their parser builds token representations with a bidirectional LSTM over word embeddings, followed by head and dependent MLPs. Compatibility between heads and dependents is then scored using a biaffine model, and the highest scoring head for each dependent is selected.
Previously, Chen and Manning (2014) pioneered neural network parsing with a transition-based dependency parser which used features from a fast feed-forward neural network over word, token, and label embeddings. Many improved upon this work by increasing the size of the network and using a structured training objective (Weiss et al., 2015; Andor et al., 2016). Kiperwasser and Goldberg (2016) were the first to present a graph-based neural network parser, employing an MLP with bidirectional LSTM inputs to score arcs and labels. Cheng et al. (2016) propose a similar network, except with additional forward and backward encoders to allow for conditioning on previous predictions. Kuncoro et al. (2016) take a different approach, distilling a consensus of many LSTM-based transition-based parsers into one graph-based parser. Ma and Hovy (2017) employ a similar model, but add a CNN over characters as an additional word representation and perform structured training using the Matrix-Tree Theorem. Hashimoto et al. (2017) train a large network which performs many NLP tasks including part-of-speech tagging, chunking, graph-based parsing, and entailment, observing benefits from multitasking with these tasks.
Despite their success in the area of computer vision, in NLP convolutional neural networks have mainly been relegated to tasks such as sentence classification, where each input sequence is mapped to a single label rather than a label for each token (Kim, 2014; Kalchbrenner et al., 2014; Zhang et al., 2015; Toutanova et al., 2015). As described above, CNNs have also been used to encode token representations from embeddings of their characters, which similarly perform a pooling operation over characters. Lei et al. (2015) present a CNN variant where convolutions adaptively skip neighboring words. While the flexibility of this model is powerful, its adaptive behavior is not well-suited to GPU acceleration.
More recently, inspired by the success of deep dilated CNNs for image segmentation in computer vision (Yu and Koltun, 2016; Chen et al., 2015), convolutional neural networks have been employed as fast models for tagging, speech generation and machine translation. van den Oord et al. (2016) use dilated CNNs to efficiently generate speech, and Kalchbrenner et al. (2016) describe an encoder-decoder model for machine translation which uses dilated CNNs over bytes in both the encoder and decoder. Strubell et al. (2017) first described the one-dimensional ID-CNN architecture which is the basis for this work, demonstrating its success as a fast and accurate NER tagger. Gehring et al. (2017) report state-of-the-art results and much faster training from using many CNN layers with gated activations as encoders and decoders for a sequence-to-sequence model. While our architecture is similar to the encoder architecture of these models, ours is differentiated by (1) being tailored to smaller-data regimes such as parsing via our iterated architecture and loss, and (2) employing two-dimensional convolutions to model the adjacency matrix of the parse tree. We are the first to our knowledge to use dilated convolutions for parsing, or to use two-dimensional dilated convolutions for NLP.
English PTB Results

[Table 1: Model / UAS / LAS comparison of graph-based parsers on the English PTB, including Kiperwasser and Goldberg (2016); the numeric scores were lost in extraction.]
We compare our models' labeled and unlabeled attachment scores to the neural network graph-based dependency parsers described in Sec. 4. Without enforcing trees at test time, our model performs just under the LSTM-based parser of Kiperwasser and Goldberg (2016), and a few points lower than the state-of-the-art. When we post-process our model's outputs into trees, like all the other models in our table, our results increase to perform slightly above Kiperwasser and Goldberg (2016). We believe our model's relatively poor performance compared to existing models is due to its limited incorporation of context from the entire sentence. While each bidirectional LSTM token representation observes all tokens in the sentence, our reported model observes a relatively small window, only 9 tokens. We hypothesize that this window is not sufficient for producing accurate parses. Still, we believe this is a promising architecture for graph-based parsing, and with further experimentation could meet or exceed the state-of-the-art while running faster by better leveraging GPU architecture.
Conclusion
We present DIG-CNNs, a fast, end-to-end convolutional architecture for graph-based dependency parsing. Future work will experiment with deeper CNN architectures which incorporate broader sentence context in order to increase accuracy without sacrificing speed.
Figure 1: Receptive field for predicting the head-dependent relationship between likes and eating. Darker cell indicates more layers include that cell's representation. Heads and labels corresponding to the gold tree are indicated.
¹ In practice, we include a dummy root token at the beginning of the sentence which serves as the head of the root. We do not predict a head for this dummy token.
Experimental Results

Data and Evaluation

We train our parser on the English Penn TreeBank on the typical data split: training on sections 2-21, testing on section 23, and using section 22 for development. We convert constituency trees to dependencies using the Stanford dependency framework v3.5 (de Marneffe and Manning, 2008), and use part-of-speech tags from the Stanford left3words part-of-speech tagger. As is the norm for this dataset, our evaluation excludes punctuation. Hyperparameters that resulted in the best performance on the validation set were selected via grid search. A more detailed description of optimization and data pre-processing can be found in the Appendix.
Acknowledgments

We thank Patrick Verga and David Belanger for helpful discussions. This work was supported in part by the Center for Intelligent Information Retrieval, in part by DARPA under agreement number FA8750-13-2-0020, in part by Defense Advanced Research Projects Agency (DARPA) contract number HR0011-15-2-0036, in part by the National Science Foundation (NSF) grant number DMR-1534431, and in part by the National Science Foundation (NSF) grant number IIS-1514053. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect those of the sponsor.
Daniel Andor, Chris Alberti, David Weiss, Aliaksei Severyn, Alessandro Presta, Kuzman Ganchev, Slav Petrov, and Michael Collins. 2016. Globally normalized transition-based neural networks. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics.
Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In EMNLP.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. 2015. Semantic image segmentation with deep convolutional nets and fully connected CRFs. In ICLR.
Hao Cheng, Hao Fang, Xiaodong He, Jianfeng Gao, and Li Deng. 2016. Bi-directional attention with agreement for dependency parsing. In EMNLP.
Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In COLING 2008 Workshop on Cross-framework and Cross-domain Parser Evaluation.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In ICLR.
Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. 2017. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122.
Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep sparse rectifier neural networks. In AISTATS.
Kazuma Hashimoto, Caiming Xiong, Yoshimasa Tsuruoka, and Richard Socher. 2017. A joint many-task model: Growing a neural network for multiple NLP tasks. arXiv preprint arXiv:1611.01587.
Sepp Hochreiter. 1998. The vanishing gradient problem during learning recurrent neural nets and problem solutions. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems 6(02):107-116.
Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. arXiv preprint arXiv:1610.10099.
Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics.
Yoon Kim. 2014. Convolutional neural networks for sentence classification. In EMNLP.
Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional LSTM feature representations. Transactions of the Association for Computational Linguistics 4:313-327.
Adhiguna Kuncoro, Miguel Ballesteros, Lingpeng Kong, Chris Dyer, and Noah A. Smith. 2016. Distilling an ensemble of greedy dependency parsers into one MST parser. In EMNLP.
Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2015. Molding CNNs for text: non-linear, non-consecutive convolutions. In Empirical Methods in Natural Language Processing.
Xuezhe Ma and Eduard Hovy. 2017. Neural probabilistic model for non-projective MST parsing. arXiv preprint arXiv:1701.00874.
Ryan McDonald, Fernando Pereira, Kiril Ribarov, and Jan Hajic. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proc. Human Language Technology Conf. and Conf. Empirical Methods Natural Language Process. (HLT/EMNLP), pages 523-530.
Joakim Nivre. 2009. Non-projective dependency parsing in expected linear time. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP.
Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. Journal of Machine Learning Research 15(1):1929-1958.
Emma Strubell, Patrick Verga, David Belanger, and Andrew McCallum. 2017. Fast and accurate sequence labeling with iterated dilated convolutions. arXiv preprint arXiv:1702.02098.
Kristina Toutanova, Danqi Chen, Patrick Pantel, Hoifung Poon, Pallavi Choudhury, and Michael Gamon. 2015. Representing text for joint embedding of text and knowledge bases. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1499-1509.
William Thomas Tutte. 1984. Graph theory, volume 11. Addison-Wesley, Menlo Park.
Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.
David Weiss, Chris Alberti, Michael Collins, and Slav Petrov. 2015. Structured training for neural network transition-based parsing. In Annual Meeting of the Association for Computational Linguistics.
Fisher Yu and Vladlen Koltun. 2016. Multi-scale context aggregation by dilated convolutions. In International Conference on Learning Representations (ICLR).
Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems 28 (NIPS).
| [] |
[
"MICE: Mining Idioms with Contextual Embeddings",
"MICE: Mining Idioms with Contextual Embeddings"
] | [
"Tadej Škvorc es:tadej.skvorc@fri.uni-lj.sitadejškvorc ",
"Polona Gantar apolonija.gantar@guest.arnes.sipolonagantar ",
"Marko Robnik-Šikonja ",
"\nFaculty of Computer and Information Science\nUniversity of Ljubljana\n1000LjubljanaSlovenia\n",
"\nJožef Stefan Institute\nJamova Cesta 391000LjubljanaSlovenia\n",
"\nFaculty of Arts\nUniversity of Ljubljana\n1000LjubljanaSlovenia\n",
"\nFaculty of Computer and Information Science\nUniversity of Ljubljana\n1000LjubljanaSlovenia\n"
] | [
"Faculty of Computer and Information Science\nUniversity of Ljubljana\n1000LjubljanaSlovenia",
"Jožef Stefan Institute\nJamova Cesta 391000LjubljanaSlovenia",
"Faculty of Arts\nUniversity of Ljubljana\n1000LjubljanaSlovenia",
"Faculty of Computer and Information Science\nUniversity of Ljubljana\n1000LjubljanaSlovenia"
] | [] | Idiomatic expressions can be problematic for natural language processing applications as their meaning cannot be inferred from their constituting words. A lack of successful methodological approaches and sufficiently large datasets prevents the development of machine learning approaches for detecting idioms, especially for expressions that do not occur in the training set. We present an approach called MICE that uses contextual embeddings for that purpose. We present a new dataset of multi-word expressions with literal and idiomatic meanings and use it to train a classifier based on two state-of-the-art contextual word embeddings: ELMo and BERT. We show that deep neural networks using both embeddings perform much better than existing approaches and are capable of detecting idiomatic word use, even for expressions that were not present in the training set. We demonstrate the cross-lingual transfer of developed models and analyze the size of the required dataset. | 10.1016/j.knosys.2021.107606 | [
"https://arxiv.org/pdf/2008.05759v2.pdf"
] | 221,112,566 | 2008.05759 | 7404403a7a612c0b89ed7f33222de7828f353330 |
MICE: Mining Idioms with Contextual Embeddings
Tadej Škvorc tadej.skvorc@fri.uni-lj.si
Polona Gantar apolonija.gantar@guest.arnes.si
Marko Robnik-Šikonja
Faculty of Computer and Information Science
University of Ljubljana
1000 Ljubljana, Slovenia
Jožef Stefan Institute
Jamova Cesta 39, 1000 Ljubljana, Slovenia
Faculty of Arts
University of Ljubljana
1000 Ljubljana, Slovenia
Faculty of Computer and Information Science
University of Ljubljana
1000 Ljubljana, Slovenia
MICE: Mining Idioms with Contextual Embeddings
Preprint submitted to Knowledge-Based Systems, November 11, 2021. arXiv:2008.05759.
Idiomatic expressions can be problematic for natural language processing applications as their meaning cannot be inferred from their constituting words. A lack of successful methodological approaches and sufficiently large datasets prevents the development of machine learning approaches for detecting idioms, especially for expressions that do not occur in the training set. We present an approach called MICE that uses contextual embeddings for that purpose. We present a new dataset of multi-word expressions with literal and idiomatic meanings and use it to train a classifier based on two state-of-the-art contextual word embeddings: ELMo and BERT. We show that deep neural networks using both embeddings perform much better than existing approaches and are capable of detecting idiomatic word use, even for expressions that were not present in the training set. We demonstrate the cross-lingual transfer of developed models and analyze the size of the required dataset.
Keywords: Machine learning, Natural language processing, Idiomatic expressions, Word embeddings, Contextual embeddings, Cross-lingual transfer
Introduction
Idiomatic expressions (IEs), also called idioms, are composed of a group of words whose meaning is established by convention and cannot be deduced from the individual words composing the expression (e.g., it's a piece of cake). In this work, we are interested in the detection and identification of IEs.
Due to the lack of satisfactory tools, linguists often create lexicons of idioms manually or by using tools that take into account only co-occurrence features, since these are easier to implement and are relatively language independent. This type of workflow introduces several problems. First, manually created large lexicons of idioms are scarce because of the time-consuming human labor that is required, particularly for less-resourced languages. Second, frequency lists of idioms that were created without robust, generalized identification tools are unreliable due to their discontinuity and syntactic variability. Finally, discovery or identification of new IEs is often based on the personal knowledge of linguists or frequent collocations. This may completely omit many idioms.
IEs such as "break the ice" and "under the weather" commonly occur in texts.
They can be hard to understand for computer models as their meaning differs from the meaning of the individual words. To address this, several automatic machine-learning-based approaches for the detection of idiomatic language have emerged. However, current approaches suffer from a number of issues and limitations related to methodological shortcomings and a lack of datasets. The first issue that affects current approaches is the lack of large datasets with annotated IEs.
Because of the large number of different IEs, a dataset that would contain a sufficient number of examples for every IE needed to train a classification model currently does not exist. Additionally, most existing datasets only address English, which makes developing approaches for other languages difficult. Existing works use small datasets, such as the data from SemEval 2013, task 5B [1], the PARSEME Shared Task on Automatic Verbal Multi-Word Expression (MWE) Identification [2], or the VNC tokens dataset [3]. These datasets only cover a limited number of IEs and contain at most a few annotated sentences for each expression, making it hard to train successful machine-learning models for IE recognition.
Deep neural networks are currently the most successful machine learning approach for textual data, surpassing all other approaches in practically all language processing and understanding tasks [4,5,6,7,8]. As input, neural networks require numerical data, and texts are transformed into numeric vectors via a process called text embedding. The process has to ensure that relations between words are reflected in distances and directions in a numeric space of typically several hundred dimensions. The embedding vectors are obtained from specialized learning tasks based on neural networks, e.g., word2vec [9], GloVe [10], or fastText [11]. For training, the embedding algorithms use large monolingual text corpora and design a learning task that tries to predict a context of a given word. The problem of the first generation of neural embeddings, such as word2vec, is their failure to express polysemous words. During the training of the embedding, all senses of a given word (e.g., paper as a material, as a newspaper, as a scientific work, and as an exam) contribute relevant information about their contexts in proportion to their frequency in the training corpus. This causes the final vector to be placed somewhere in the weighted middle of all the word's meanings. Consequently, rare meanings of words (which mostly include their idioms) are poorly expressed with these embeddings and the resulting vectors do not offer good semantic representations. For example, none of the 50 closest vectors of the word paper is related to science.¹

¹ A demo showing near vectors computed with word2vec from the Google News corpus is available at http://bionlp-www.utu.fi/wv_demo/.
The idea of contextual embeddings is to generate a different vector for each context a word appears in, and the context is typically defined sentence-wise.
To a large extent, this solves the problems with word polysemy, i.e. the context of a sentence is typically enough to disambiguate different meanings of a word for humans as well as for the learning algorithms. In our work, we use what are currently the most successful approaches to contextual word embeddings, ELMo [7] and BERT [8]. We examine whether contextual word embeddings can be used as a solution to the idiom identification problem. Past work shows that contextual word embeddings are capable of detecting different meanings of polysemous words and can improve the performance on a variety of NLP tasks [8]. However, to the best of our knowledge, current approaches have not used contextual word embeddings for differentiating between idiomatic and literal language use. In the proposed approach, called MICE (Mining Idioms with Contextual Embeddings), we use ELMo and BERT embeddings as an input to a neural network and show that using them as the first layer of neural networks improves results compared to existing approaches. We evaluate our approach on a new dataset of Slovene IEs, as well as on the existing dataset from the PARSEME Shared Task on Automatic Verbal MWE Identification. To test if ELMo and BERT representations contain complementary information, we use a recent Bayesian ensemble model to combine the predictions of different MICE models. This is the first attempt to combine BERT and ELMo embeddings using an ensemble approach for idiom detection. We analyze different properties of the proposed models, such as the amount of labelled data required to get useful results, different variants of BERT models, and cross-lingual transfer of trained models. The contributions of the paper can be stated explicitly as follows.
• The first approach to use contextual embedding models (ELMo and BERT) to detect IEs. • The first system to successfully recognize IEs not present in the training set.
• The first system to successfully analyze both sentence-level and token-level IE detection.
• The first successful cross-lingual approach for detection of IEs.
• The first Bayesian ensemble approach to combine ELMo and BERT-based models.
• An extensive analysis of different properties of IE detection, such as differences in the recognition rate for different IEs and different amounts of training data.
• Creation of SloIE, a large dataset of IEs in the less-resourced, morphologically rich Slovene language.
We show that contextual embeddings contain a large amount of lexical and semantic information that can be used to detect IEs. Our MICE approach outperforms existing approaches that do not use pre-trained contextual word embeddings in the detection of IEs present in the training data, as well as the identification of IEs missing from the training set. The latter is a major problem of existing approaches. Finally, we show that multilingual contextual word embeddings are capable of detecting IEs in multiple languages even when trained on a monolingual dataset.
The remainder of the paper is structured as follows. In Section 2, we describe past research on automatic IE detection. We present our MICE methodology in Section 3. Section 4 describes the datasets used for the evaluation of our approach, which we present in Section 5. Section 6 concludes the paper.
Related Work
There currently exists a variety of approaches for detecting IEs in a text, broadly divided into supervised and unsupervised methods. In supervised approaches, the problem is frequently presented as a binary classification problem where a separate classifier is trained for each idiom [12]. The disadvantage of this approach is that it scales poorly to a large number of idioms as it requires a separate training set for each idiom.
In recent years, several neural network approaches have been proposed.
MUMULS [13] uses a neural network with bidirectional gated recurrent units (GRUs) [14] in combination with an embedding layer. In addition to idioms, it is capable of detecting different types of verbal multi-word expressions, which were annotated within the PARSEME Shared Task on Automatic Verbal MWE Identification [2]. MUMULS achieved the best results on multiple languages, but the authors reported a poor classification accuracy on languages with a low amount of training data and were unable to detect expressions that did not occur in the training set. The 2018 edition of the shared task [15] featured several other systems based on neural networks [16,17,18] with similar outcomes to MUMULS, namely good results on several languages but low classification accuracy and F1 score for languages with small training datasets and no detection of expressions that are not present in the training set. Another approach was presented by Boroş and Burtica [18], who use a bidirectional long short-term memory network (biLSTM) in combination with graph-based decoding. However, despite using neural networks, these approaches do not use pretrained contextual embeddings.
Because of this, they cannot use un-annotated datasets when training their model, making it more difficult for them to make full use of contextual information in text.
The second broad group of methods for detecting idiomatic word use is unsupervised approaches. Sporleder and Li [19] use lexical cohesion to detect IEs without the need for a labeled dataset or language resources such as dictionaries or lexicons. Liu and Hwa [20] compare the context of a word's occurrence to a pre-defined "literal usage representation" (i.e. a collection of words that often appear near literal uses of the word) to obtain a heuristic measure indicating whether a word was used literally or idiomatically. The obtained scores are passed to a probabilistic latent variable model, which predicts the usage of each word. They report average F1 scores between 0.72 and 0.75 on the SemEval 2013 Task 5B [1] and VNC tokens [3] datasets. This is lower than the results obtained by our model on a comparable task.
A potential problem with current approaches is a lack of large annotated datasets that could be used to train classification models. Liu and Hwa [12] use the data from SemEval 2013 Task 5B [1], which only contains 10 different idioms with 2371 examples. Boroş and Burtica [18] and Klyueva et al. [13] trained their models on the PARSEME Shared Task on Automatic Verbal MWE Identification [2], which only contains a small number of idioms across 20 languages. Larger datasets exist, such as the VNC tokens dataset [3], which contains 2,984 instances of 53 different expressions, and the dataset presented by Fadaee et al. [21], which contains 6,846 sentences with 235 different IEs in English and German. In our work, we use a larger dataset with 29,400 sentences and 75 different IEs.
Existing classification approaches require a list of idiomatic phrases with accompanying datasets on which a classifier is trained. Current approaches pay little attention to detecting idioms that do not appear in the training set, which is a much harder problem. However, due to a large number of idiomatic phrases, such use is more reflective of real-world problems. Even the unsupervised approach presented by Liu and Hwa [20] first manually constructs literal usage representations for each idiomatic phrase and is therefore not suitable for detecting non-listed IEs. We use contextual embeddings, which can capture semantic information without requiring labelled data for training. This allows them to detect idiomatic phrases even if they do not appear in a pre-defined list.
Detecting IEs with Contextual Word Embeddings
We first describe two state-of-the-art deep neural network approaches to contextual embeddings, ELMo [7] and BERT [8], followed by the proposed neural network architectures for identification of IEs and their Bayesian ensemble.
ELMo contextual embeddings
ELMo (Embeddings from Language Models) [7] is a large pretrained neural language model, producing contextual embeddings and state-of-the-art results in many text processing tasks. The ELMo architecture consists of three layers of neurons. The output of neurons after each layer gives one set of embeddings, altogether three sets. The first layer is the convolutional (CNN) layer operating on the character-level input. This layer is followed by two biLSTM layers that consist of two concatenated LSTM layers. The first, left-to-right LSTM layer is trained to predict the following word based on the given past words, where each word is represented by the embeddings from the CNN layer. The second, right-to-left LSTM predicts the preceding word based on the given following words. Although ELMo is trained on character-level input and is able to handle out-of-vocabulary words, a vocabulary file containing the most common tokens is used for efficiency during training and embedding generation.
In NLP tasks, a weighted average of the three embeddings is usually used.
The weights for merging the representation of layers are learned during the training of the model for a specific task. Optionally, the entire ELMo model can be fine-tuned for the specific task.
In our work, we use the ELMo model that was pre-trained on a large amount of Slovene text [22]. We take an average of the three ELMo embedding layers as the input to our prediction models. These embeddings are not fine-tuned to the specific task of idiom detection, as we wanted to evaluate how well the embeddings capture the relevant contextual information without task-specific fine-tuning. As results show, even without fine-tuning, the contextual embeddings improve performance compared to similar approaches that do not use contextual word-embeddings. Fine-tuning of the embedding layers of neural networks is left for further work.
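As a sketch of how this averaged representation can be computed (assuming the AllenNLP ElmoEmbedder API; the paths to the pretrained Slovene options and weights files are placeholders, and the example sentence is illustrative):

```python
from allennlp.commands.elmo import ElmoEmbedder

# Placeholder paths to the pretrained Slovene ELMo model.
elmo = ElmoEmbedder(options_file="slovene_options.json",
                    weight_file="slovene_weights.hdf5")

tokens = ["To", "je", "mačji", "kašelj", "."]   # Slovene idiom for "a piece of cake"
layers = elmo.embed_sentence(tokens)            # numpy array of shape (3, n_tokens, 1024)
embeddings = layers.mean(axis=0)                # unweighted average of the 3 ELMo layers
```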
BERT contextual model
BERT (Bidirectional Encoder Representations from Transformers) [8] generalizes the idea of language models to masked language models, inspired by Cloze (i.e. gap-filling) tests, which test the understanding of a text by removing a certain portion of words that the participant is asked to fill in. The masked language model randomly masks some of the tokens from the input, and the task of the language model is to predict the missing token based on its neighbourhood.
BERT uses the transformer architecture of neural networks [23], which uses both left and right context in predicting the masked word, and further introduces the task of predicting whether two sentences appear in a sequence. The input representation of BERT are sequences of tokens representing subword units. The result of pre-trained tokenization is that some common words are kept as
The result of pre-trained tokenization is that some common words are kept as 8 single tokens, while others are split into subwords (e.g., common stems, prefixes, suffixes-if needed down to a single letter token). The original BERT project offers pre-trained English, Chinese, and multilingual models; the latter, called mBERT, is trained on 104 languages simultaneously. BERT has shown excellent performance on 11 NLP tasks: 8 from GLUE language understanding benchmark [24], question answering, named entity recognition, and common-sense inference.
Rather than training an individual classifier for every classification task from scratch, which would be resource and time expensive, the pre-trained BERT language model is usually used and fine-tuned on a specific task. This approach is common in modern NLP because large pretrained language models extract highly relevant textual features without task-specific development and training. Frequently, this approach also requires less task-specific data. During pre-training, the BERT model learns relations between sentences (entailment) and between tokens within a sentence. This knowledge is used during training on a specific downstream task [8]. The use of BERT for a token classification task requires adding connections between its last hidden layer and new neurons corresponding to the number of classes in the intended task. To classify a sequence, we use a special [CLS] token that represents the final hidden state of the input sequence (i.e. the sentence). The predicted class label of the [CLS] token corresponds to the class label of the entire sequence. The fine-tuning process is applied to the whole network, and all of the parameters of BERT and new class-specific weights are fine-tuned jointly to maximize the log-probability of the correct labels.
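A minimal sketch of [CLS]-based sequence classification with the HuggingFace transformers library (a generic recipe given for illustration, not the authors' implementation; the example sentence and the two-class head are assumptions):

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = BertModel.from_pretrained("bert-base-multilingual-cased")
classifier = torch.nn.Linear(bert.config.hidden_size, 2)  # idiomatic vs. literal

inputs = tokenizer("Prebil je led.", return_tensors="pt")  # "He broke the ice."
with torch.no_grad():
    hidden = bert(**inputs).last_hidden_state               # (1, seq_len, hidden)
cls_vec = hidden[:, 0]                                      # final state of [CLS]
logits = classifier(cls_vec)                                # sentence-level scores
```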
In our use of BERT models, we did not fine-tune the embedding weights but left them as they were after the original pre-training. This simplification significantly reduces the computational load but leads to a potential loss of accuracy. This is a possible improvement to be tested in future work, as finetuning the embeddings would likely improve the results.
The proposed MICE architecture
Our approach is based on contextual word embeddings, which were designed to deal with the fact that a word can have multiple meanings. Instead of assigning the same vector to every occurrence of a word, contextual embeddings assign a different vector to each word occurrence based on its context. As the contexts of words' literal use and idiomatic occurrences of the same word are likely to differ, these embeddings shall be well-suited for detecting IEs. We used two state-of-the-art embedding approaches: ELMo [7] and BERT [8]. For ELMo, we used the pretrained Slovene model described by [22]. The model was trained on the Gigafida corpus [25] of Slovene texts. For BERT embeddings, we use two different models:
1. The multilingual mBERT model presented by Devlin et al. [8], which was trained on Wikipedia text from 104 languages, including Slovene.
2. The trilingual CroSloEngual BERT presented by Ulčar and Robnik-Šikonja [26], which was trained on English, Slovene, and Croatian using Wikipedia for English text, the Gigafida corpus for Slovene text, and a combination of hrWaC [27], articles from the Styria media group, and Riznica corpora [28] for Croatian text. This BERT is better suited for classification tasks in Slovene and Croatian than mBERT, as its training incorporated larger amounts of training data and a larger vocabulary for each of the involved languages. The authors also report improved cross-lingual transfer of trained models between the three languages.
We use the embeddings (ELMo or BERT) as the first layer of a neural network. This layer is followed by a bidirectional gated recurrent unit (GRU) with 100 cells. GRUs are similar to standard recurrent units but use an additional update and reset gate to help deal with the vanishing gradient problem. The update gate is defined as
$z_t = \sigma(W^{(z)} x_t + U^{(z)} h_{t-1} + b_z), \qquad (1)$

where $W^{(z)}$ and $U^{(z)}$ are trainable weights, $x_t$ is the input vector, and $b_z$ is the trainable bias. $h_{t-1}$ represents the memory of past inputs computed by the network. The reset gate uses the same equation, with different weights and biases:
$r_t = \sigma(W^{(r)} x_t + U^{(r)} h_{t-1} + b_r). \qquad (2)$
For each input, the GRU computes the output as:
$h_t = z_t \odot h_{t-1} + (1 - z_t) \odot \tanh\big(W^{(h)} x_t + U^{(h)}(r_t \odot h_{t-1}) + b_h\big), \qquad (3)$

where $\odot$ is the Hadamard product, and $W^{(h)}$, $U^{(h)}$, and $b_h$ are trainable weights and biases.
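For concreteness, a single GRU step implementing Eqns. (1)-(3) can be written in plain NumPy (a sketch; the weight matrices and biases are assumed given):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, W, U, b):
    # W, U, b hold the trainable parameters for gates 'z', 'r' and candidate 'h'.
    z_t = sigmoid(W["z"] @ x_t + U["z"] @ h_prev + b["z"])        # Eqn (1)
    r_t = sigmoid(W["r"] @ x_t + U["r"] @ h_prev + b["r"])        # Eqn (2)
    h_cand = np.tanh(W["h"] @ x_t + U["h"] @ (r_t * h_prev) + b["h"])
    return z_t * h_prev + (1.0 - z_t) * h_cand                    # Eqn (3)
```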
For both ELMo and BERT embeddings, we follow the GRU layer with a softmax layer to obtain the final predictions. A dropout of 50% is applied at the softmax layer. This approach follows the work on MWE detection presented by Klyueva et al. [13], with the difference that we use contextual embeddings.
We deliberately use a simple network architecture to show that the embeddings, by themselves, capture enough semantic information to properly recognize IEs.
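A minimal sketch of this architecture in Keras is shown below. The sequence length and embedding dimension are illustrative assumptions (e.g., 768 for BERT); for sentence-level classification the per-token outputs would additionally be pooled into a single prediction.

```python
import tensorflow as tf

def build_mice(seq_len=128, emb_dim=768, num_classes=2):
    # precomputed contextual vectors (ELMo or BERT) serve as the input layer
    inputs = tf.keras.Input(shape=(seq_len, emb_dim))
    x = tf.keras.layers.Bidirectional(
        tf.keras.layers.GRU(100, return_sequences=True))(inputs)
    x = tf.keras.layers.Dropout(0.5)(x)  # 50% dropout before the softmax layer
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```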
We use the architecture on two types of classification tasks: a token-level classification, where we predict whether an individual token has an idiomatic or literal meaning, and a sentence-level classification, where the network makes a single prediction for the entire sentence, predicting whether the sentence contains an expression with an idiomatic meaning. The details of the tasks are presented in Section 5.
We fine-tuned the hyperparameters using a development set consisting of 7% of sentences randomly selected from our dataset, as described in Section 4.1.
We trained the network for 10 epochs using RMSProp as the optimizer with a learning rate of 0.001, $\rho = 0.9$, and $\epsilon = 10^{-7}$. We used binary cross-entropy as the loss function.
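The hyperparameter values above translate directly into an optimizer configuration; the framework-specific code below is our sketch, while the values come from the text.

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.RMSprop(learning_rate=0.001, rho=0.9, epsilon=1e-7)
# model.compile(optimizer=optimizer, loss="binary_crossentropy")
# model.fit(train_x, train_y, epochs=10, validation_data=(dev_x, dev_y))
```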
Bayesian ensemble of MICE models
Because MICE can be used with different embeddings, the information extracted by different embedding models may be complementary. In that case, an ensemble of different embedding models could improve the performance by learning the combination that best suits each idiom. We test this hypothesis using the Multivariate Normal Mixture Conditional Likelihood Model (MM), a Bayesian ensemble model proposed by Pirš and Štrumbelj [29].
Miok et al. [30] showed that an MM ensemble can improve the performance of individual classifiers on the text annotation task. We test this approach on the IE detection task and combine our three best models (MICE with Slovene ELMo, MICE with mBERT, and MICE with CroSloEngual BERT). We transform the predictions of each model using an inverse logistic transformation and concatenate them to obtain a (m − 1)r-variate distribution, where m = 2 is the number of classes and r = 3 is the number of models. As explained below, we model the latent distribution using the multivariate normal mixtures conditional on the labels and predictions obtained from the training set in a similar fashion to linear discriminant analysis. We then generate predictions on the test data using the following formula:
$$p(T^* = t \mid u^*, \theta) = \frac{p(u^* \mid \theta_t)\,(\gamma_t n_t)}{\sum_{i=1}^{m} p(u^* \mid \theta_i)\,(\gamma_i n_i)},$$
where $p$ is the probability density function, $\gamma_t$ is the frequency prior for class $t$, $n_t$ is the number of true labels in class $t$ from the training dataset, $\theta$ are the estimated parameters of the Bayesian model, $\theta_t$ is the subset of the parameters with the true label $t$, $T^* \in \{1, 2, \ldots, m\}$ is the response random variable corresponding to the predicted class, and $u^*$ is the concatenation of the transformed probability vectors of our embedding models. The model is described in detail in Miok et al. [30]. As a baseline ensemble, we use simple (unweighted) voting.
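A simplified sketch of the ensemble prediction step follows. To keep the example short, we replace the full multivariate normal *mixture* of Pirš and Štrumbelj with a single Gaussian per class; everything else mirrors the formula above. `probs_per_model` holds each member's positive-class probability for one example (so, with $m = 2$ and $r = 3$, a length-3 vector), and the variable names are our assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def inv_logit_transform(p, eps=1e-6):
    p = np.clip(p, eps, 1 - eps)
    return np.log(p / (1 - p))  # inverse logistic transformation

def mm_predict(probs_per_model, class_params, gamma, n):
    # class_params[t] = (mean, cov) of the latent distribution for class t
    u = inv_logit_transform(np.asarray(probs_per_model))
    scores = np.array([
        multivariate_normal.pdf(u, mean=m_t, cov=c_t) * gamma[t] * n[t]
        for t, (m_t, c_t) in enumerate(class_params)
    ])
    return scores / scores.sum()  # posterior probability over classes
```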
We evaluate the ensemble models in the same manner as the individual models. The results and discussion of this evaluation are presented in Section 5.
Datasets
Our approach supports two types of tasks: monolingual and multilingual. The monolingual approach requires a reasonably large dataset with a sufficient number of idioms. We analyze the required size of a dataset, in terms of the number of different idioms and examples of their usage, in both monolingual and multilingual settings in Section 5.5. The multilingual approach exploits an existing monolingual dataset to transfer the trained model to languages with fewer resources, i.e., with non-existent or smaller datasets.
In Section 4.1 we describe our monolingual Slovene dataset. In Section 4.2 we describe the well-known PARSEME datasets [2] for detection of multi-word expressions in many languages, which also include idioms.
Monolingual dataset
We evaluate our approach on a new dataset of Slovene IEs, called SloIE, which we make publicly available for further research (http://hdl.handle.net/11356/1335). The dataset consists of 29,400 sentences extracted from the Gigafida corpus [25] and contains 75 different IEs. The 75 IEs were selected from the Slovene Lexical Database [31] and had to meet the condition that they appear in corpus sentences in both idiomatic and literal senses, such as, e.g., break the ice or step on someone's toes. Manual selection of idiomatic examples showed that about two-thirds of the idioms in the Slovene Lexical Database (2,041 in total) that occur in both idiomatic and literal senses occur in 50% or more of the corpus sentences in their idiomatic sense, and about one-third of the idioms occur in 50% or more of the corpus sentences in their literal sense, either because literal use is not possible, or because it is very rare, although possible in terms of syntax and semantics (e.g., get under someone's skin). Although this finding is interesting from a (socio)linguistic point of view, in designing the dataset for our purposes we limited ourselves to idioms that meet the condition of appearing in both the idiomatic and the literal sense in the corpus sentences, assuming that speakers can identify both the literal and the idiomatic interpretation of a term based on the context.

Two annotators, students of linguistics, marked the complete set of 29,400 sentences. They had four possible choices: YES (the expression in a particular sentence is used in the idiomatic sense), NO (the expression is used in the literal sense), DON'T KNOW (not sure whether the expression is used in a literal or idiomatic sense), and VAGUE (literal or idiomatic use cannot be inferred from the sentence). The student annotators were previously briefed with short instructions and provided with a sample of good examples. For the training of classification models, we selected only sentences where both annotators agreed on the annotation. We also disregarded examples that were marked as "vague" or "don't know". The inter-annotator agreement across the entire dataset was 0.952.

Due to the nature of IEs, our dataset is imbalanced. A few expressions occur proportionally in both literal and idiomatic use, while most expressions occur predominately idiomatically. The dataset contains fewer than 100 occurrences for most expressions. Table 1 shows an overview of the data present in our dataset. The distribution of literal and idiomatic uses of each expression is shown in Figure 1.

PARSEME datasets

Many languages lack large datasets with annotated IEs, which makes training a classifier difficult. For that reason, we used the PARSEME dataset to evaluate our cross-lingual model. The model used the pretrained mBERT embeddings from [8], was further trained on our Slovene SloIE dataset, and tested on each of the PARSEME datasets in different languages. The details are reported in Section 5.4. An overview of the data present in the PARSEME datasets is given in Table 3.
Evaluation
We evaluate our MICE approach in five different settings, explained below.
We present the results of these evaluation scenarios in the corresponding subsections below.
1. Classification of IEs that were present in the training set. In Section 5.1, we evaluate whether MICE is capable of detecting IEs that were present in the training set. This task is easier than the detection of IEs not present in the training set, but it is still difficult because idioms in the SloIE dataset can appear both literally and idiomatically. An English example would be the phrase "breaking the ice", which can have both the literal meaning ("The ship was breaking the ice on its way across the Arctic.") and the idiomatic meaning ("I had trouble breaking the ice at the party."). The models for this task have to recognize the meaning of the phrase based on its context. We split this task into two sub-tasks: i) sentence-level classification, where the network makes a single prediction for the entire sentence, predicting whether that sentence contains an expression with an idiomatic meaning, and ii) token-level classification, where we predict whether each token has a literal or idiomatic meaning. The sentence-level classification task is easier, but the token-level task can be more useful, as it also identifies which words in the sentence carry the idiomatic meaning.

2. Classification of IEs that were not present in the training set. In Section 5.2, we evaluate how well MICE generalizes to IEs that do not appear in the training set.

3. Differences in detection of individual IEs. It is possible that success in the detection of different IEs differs significantly, where some IEs are easy and others much more difficult to detect. In Section 5.3 we evaluate how well our model detects each IE in our dataset and present the differences.

4. Cross-lingual transfer on the PARSEME dataset. In Section 5.4 we evaluate whether our approach can be used to detect expressions in different languages.

5. Required size of a dataset. Our dataset is significantly larger than other datasets used for automatic idiom detection, e.g., PARSEME (for a single language). In Section 5.5 we conduct a series of experiments that provide information on how large a dataset (in terms of the number of IEs and the number of examples per IE) is actually needed for successful detection of idioms. This information may be valuable for other languages where similar detection tools will be built.

We compare the proposed MICE approach to different existing approaches.
As a baseline, we use the SVM classifier with the tf-idf weighted vector of a sentence as the input. We compare our approach to MUMULS [13], which uses a similar neural network architecture to our approach but does not use pretrained contextual word embeddings. Unlike our approach, MUMULS uses part-of-speech tags and word lemmas as additional inputs.
For the token-based evaluation, ELMo and BERT models use different tokenization strategies: ELMo uses words as tokens, while BERT splits words into sub-word units. In the case of BERT, we use the prediction of the first sub-token as the prediction for the entire word, following the methodology presented by Devlin et al. [8]. This ensures that the token-level results are comparable between models using different tokenization strategies.
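The sketch below shows one way to implement this alignment: each word inherits the prediction of its first sub-token. The `word_ids` mapping (sub-token position to word index, `None` for special tokens) is what Hugging Face fast tokenizers return; the function name is ours.

```python
def word_level_predictions(subtoken_preds, word_ids):
    preds, seen = [], set()
    for pos, wid in enumerate(word_ids):
        if wid is None or wid in seen:
            continue                       # skip [CLS]/[SEP] and later sub-tokens
        seen.add(wid)
        preds.append(subtoken_preds[pos])  # first sub-token represents the word
    return preds
```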
To test our Bayesian ensemble model MICE-MM, we first train each of the ensemble members (i.e. individual MICE models) the same way as in the individual evaluation. We combine the models' predictions using the MM model.
As a baseline ensemble model, we use voting (MICE-voting).
For all tests, we report the classification accuracy (CA) and F1 score. As many of the tasks are highly imbalanced, CA is not a good measure, and we mostly rely on the F1 scores when interpreting the results.
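For reference, the reported scores can be computed as follows; scikit-learn is our choice of tooling, not necessarily the authors', and the label arrays are placeholders.

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [1, 0, 1, 1]  # placeholder gold labels (1 = idiomatic)
y_pred = [1, 0, 0, 1]  # placeholder model predictions
ca = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred)  # binary F1 on the idiomatic class
```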
IEs from the training set
For the first experiment, detection of IEs present in the training set, we randomly split the SloIE dataset into training, testing, and development sets with the ratio of 63:30:7 (18,522, 8,820, and 2,058 sentences). The network was trained for 10 epochs using RMSProp as the optimizer with a learning rate of 0.001, $\rho = 0.9$, and $\epsilon = 10^{-7}$. Binary cross-entropy was used as the loss function.
The evaluation on the development set showed that training the model for more than 10 epochs led to overfitting, likely due to the size of the dataset. We report two sets of results: recognition of individual tokens in a sentence as idiomatic or non-idiomatic (i.e. token-level classification), and detection of the whole sentence as either containing or not containing idioms (i.e. sentence-level classification).
The results for token-level classification are presented in Table 4. To provide a sensible context for token-based classification, the input of the SVM classifier consists of the target token, the three words before, and the three words after the target word. The SVM classifier obtains a better F1 score than MUMULS, but a lower score compared to the MICE variants. The dataset is highly imbalanced, with 96.7% of all tokens being non-idiomatic. Lacking discriminating information, MUMULS predicts almost every token as non-idiomatic, which results in high classification accuracy but a very low F1 score. Due to the imbalanced nature of the dataset, the F1 score is more reflective of relevant real-world performance, and here the MICE variants are in a class of their own. Of the MICE approaches, the one with the Slovene ELMo model obtains the highest F1 score. The MICE variants with BERT embeddings obtain lower classification accuracies and F1 scores. This is likely due to the fact that our ELMo embeddings were pretrained on a large amount of exclusively Slovene text, while the mBERT model was trained on 104 different languages: only a small amount of Slovene text was included in its training, and it has a small proportion of Slovene words in its vocabulary. The CroSloEngual embeddings were trained on a larger amount of Slovene text and therefore achieve better results. In token-level classification, the MICE-MM ensemble does not outperform the best individual model (Slovene ELMo embeddings); however, a separate MM ensemble model, trained only on the two BERT models, outperforms each of them.

In the evaluation on the sentence level, instead of classifying each token, we classified each sentence based on whether it contains an IE or not. This lowers the importance of different tokenization strategies between ELMo and BERT.
However, sentence-level evaluation does not show whether the approaches are capable of detecting specific words in a sentence as idioms. The results of this evaluation are presented in Table 5. The sentence-level classification task is less difficult, which leads to improved performance for all models. The SVM baseline outperforms the mBERT model. MUMULS also achieves better results, outperforming both the SVM baseline and the mBERT approach. MICE with CroSloEngual BERT is closer to ELMo in this task, though the latter still achieves the best scores. MICE with mBERT likely achieves lower scores because this model was not pretrained on a large enough amount of Slovene text. Unlike for token-level classification, the sentence-level results show that the Bayesian ensemble improves the performance. This indicates that the MICE models with Slovene ELMo and with the two BERT models contain complementary information which can be generalized from the training to the test set. The Bayesian ensemble model learned which combination of the individual models is best suited for each idiom and increased the classification accuracy and F1 score compared to the individual models and the voting baseline.
The results confirm the assumption that different embedding models perform best for different IEs. Further analysis is needed to determine why this occurs in sentence-level but not in token-level classification. In future work, we plan to conduct a comprehensive quantitative and qualitative analysis using a larger number of embedding models to determine the impact of embeddings on different idioms.
IEs outside the training set
In the previous experiment, with the same IEs present in both the training and testing sets, we were able to obtain good results (especially with our contextual embeddings approach). However, many languages lack large annotated datasets, and even when such datasets exist, they are unlikely to contain every possible IE found in the language. Because of this, evaluations with the same IEs in both sets over-estimate the practical performance of the tested methods.

To address this, we tested how well the approaches based on contextual word embeddings generalize to IEs outside the training set. For this experiment, we split our dataset into a training and a testing set so that IEs from the testing set do not appear in the training set. Apart from this change, everything else remained the same as in Section 5.1 above.
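A sketch of such an expression-disjoint split is shown below: whole idioms, not individual sentences, are assigned to the training or test side, so no test IE is seen during training. The field names and split ratio are assumptions for illustration.

```python
import random

def split_by_expression(sentences, test_ratio=0.3, seed=0):
    expressions = sorted({s["expression"] for s in sentences})
    random.Random(seed).shuffle(expressions)
    test_exprs = set(expressions[: int(len(expressions) * test_ratio)])
    train = [s for s in sentences if s["expression"] not in test_exprs]
    test = [s for s in sentences if s["expression"] in test_exprs]
    return train, test
```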
Since IEs in the test set are not present in the training set, the classification models cannot learn how to detect them based on word-data alone. We hypothesize that their detection is possible based on contexts in which they appear. As the meaning of an IE is different from the literal meaning of its constituting words, it should appear in a different context. Neural networks with contextual word embeddings could detect such occurrences. Indeed, our results for token-and sentence-level IE detection, presented in Tables 6 and 7, show that approaches that do not use contextual word embeddings fail to successfully detect IEs that did not occur in the training set, while MICE approaches using contextual embeddings extract useful information.
For token-level results, shown in Table 6, due to the imbalanced class distribution, all approaches except for MICE-voting and MICE-MM lag behind the default classifier concerning CA. For both the SVM baseline and MUMULS this is the case also in terms of the F1 score. The MICE approach with the ELMo and mBERT models manages to correctly classify a number of IEs, though the results are worse than in the scenario where the same IEs are present in both the training and testing sets. MICE with ELMo embeddings is again the best method, while CroSloEngual embeddings are surprisingly unsuccessful. Sentence-level results in Table 7 show improved scores of all models. The SVM baseline and MUMULS still lag behind the default classifier concerning both CA and F1 score. MICE approaches are better, with the Slovene ELMo variant achieving the best scores.
When evaluating our approach on IEs outside the training set, the MICE-MM ensemble model is unable to improve the performance in terms of the F1 score. This can be explained by the fact that the MM approach uses the predictions made on the training-set IEs to learn the latent distributions of IE predictions on the test data, which might not generalize well. The voting ensemble MICE-voting also does not improve performance over the best-performing individual model.
Evaluation of individual IEs
In addition to the cumulative results on the entire test set, we are also interested in individual differences between IEs, as it is possible that some IEs are easy and others hard to detect. As the meanings of IEs can range from similar to very different compared to the literal meanings of their words, we assume that the ability of models based on contextual word embeddings to detect them could vary significantly. Figure 2 shows the distribution of F1 scores across all the IEs in our SloIE dataset. The distribution shows that for the majority of IEs, MICE models achieve high F1 scores above 0.8, while there are a few IEs with a low recognition rate (F1 < 0.6). In Table 8 we elaborate on these results and show the five best and worst recognized IEs. At the moment, we do not have an interpretation of why certain IEs are more or less difficult to detect, and leave this question for further work.
Cross-lingual evaluation of IEs
The results above are encouraging for IE detection in a language with a sufficiently large dataset. As recent research on cross-lingual embeddings shows that reasonably good transfer of trained models can be obtained for many tasks [32,33,34,35], we attempt such a transfer of our models. We use the dataset from the PARSEME shared task on automatic identification of verbal multiword expressions, described in Section 4.2. We evaluated two contextual embeddings discussed in the previous sections: the Slovene ELMo embeddings and the multilingual BERT embeddings. We evaluated the cross-lingual MICE approach in the following manner:

• We evaluated MICE with Slovene ELMo embeddings on Slavic languages similar to Slovene, with datasets present in the PARSEME collection, i.e., Slovene, Croatian, and Polish. As the Slovene ELMo embeddings are not multilingual, they are unlikely to generalize to other languages. In future work, we plan to use these embeddings for prediction in other languages by using cross-lingual mappings (e.g., [36]).
• We evaluated MICE with mBERT embeddings on all languages from the PARSEME collection. The mBERT model was trained on 104 languages, including every language present in the PARSEME dataset.
For both test cases, we constructed balanced datasets which consist of every sentence with IEs from the PARSEME dataset in a given language, and an equal number of sentences without IEs, chosen at random from the same dataset. We performed the evaluation on the sentence-level classification task.

For the Slavic languages test, we trained the prediction model on the whole SloIE dataset, presented in Section 4.1. We did not train the model on any multilingual data, to see whether the contextual embeddings alone are enough to generalize to other languages, at least to similar ones such as Croatian. For all PARSEME languages using MICE with mBERT, we split each dataset into training, testing, and validation sets using a 60:30:10 ratio, trained the model for each language on the training set, and evaluated it on the testing set. For Slovene, Croatian, and Polish we additionally trained MICE mBERT models on the SloIE dataset, as the similarity of those languages means that additional data in the Slovene language could be beneficial. The results are presented in Table 9.

The results of the monolingual evaluation presented in Section 5.2 are also confirmed on the Slovene PARSEME dataset, as MICE with the Slovene ELMo model is capable of detecting idioms in that dataset. The same model generalizes very well to the PARSEME Croatian dataset, likely due to its similarity to Slovene. The generalization to Polish, which is a more distant Slavic language, is not successful. MICE models with mBERT also generalize well for a few languages. They obtain good results on Slovene and Croatian, likely due to the large amount of training data in the SloIE corpus, which also generalizes to Croatian idioms. The MICE mBERT models outperform default classifiers in French, Turkish, Lithuanian, Italian, Hebrew, and Basque, despite small amounts of training data, low numbers of IEs in the training sets, most IEs only appearing once, and IEs in the testing set not appearing in the training set. They perform less well on other languages, even obtaining scores below the default classifier. This is expected, as the PARSEME dataset only contains a small number of IEs, with only one or two sentences for each expression. This means that most of the idioms in the test set did not appear in the training set. Our evaluation on the much larger SloIE dataset shows that achieving good results on IEs outside the training set is difficult even when using a large training dataset. Therefore, the small size of the datasets for individual languages is the main reason that the models performed worse than the default classifier in several languages. MUMULS and the SVM baseline were both unable to detect IEs in other languages, obtaining an F1 score of 0 in all cases. We did not perform cross-lingual evaluation using ensemble models. The results of individual models show that there is only one case where an ensemble might be useful (Croatian with Slovene ELMo and mBERT), which has a small amount of data (3,003 sentences). In order to evaluate how ensemble models perform on cross-lingual classification, a more comprehensive analysis would be required, which we leave for future work.

Effect of the dataset size

Most languages currently do not have IE datasets, and it might be helpful to provide information on how large the datasets need to be. In this section, we analyze the dataset size needed to obtain acceptable performance in Slovene and expect that the findings will generalize to other languages. Further, as our SloIE dataset is larger than existing IE datasets, our results are not directly comparable to existing research, which was evaluated on smaller datasets. Our evaluation will shed light on this question as well.

We approach the analysis by running a number of tests on subsets of the SloIE dataset. We randomly selected subsets of different sizes (100%, 80%, 60%, 40%, 20%, and 10%) and re-ran the evaluations, repeating the tests with IEs from the training set (Section 5.1). We only tested our best model, MICE with Slovene ELMo embeddings. We show the results when classifying IEs from the training set in Table 10. The results show that MICE performs well even when using smaller datasets. The F1 score and CA slowly decrease with lower numbers of training sentences and remain quite high even with smaller training sets. This means that our approach could achieve good real-world performance even for languages that do not have large annotated datasets. When classifying IEs from outside the training set, the results did not change significantly with lower dataset sizes.

Our final evaluation checks whether a balanced dataset improves the results. The SloIE dataset is highly imbalanced (both in the number of examples per IE and in the ratio of idiomatic to literal use cases of each expression), which might make training neural networks difficult. To determine how much the dataset imbalance affects the results, we constructed a smaller, balanced dataset that contains the same number of idiomatic and non-idiomatic sentences for each expression. The balanced version of the dataset contains 5,481 training sentences and 2,349 testing sentences across 75 IEs. The balanced dataset is much smaller than the original dataset, and possibly reduced performance may be due to the smaller amount of training data. For a fairer comparison, we also constructed a smaller, imbalanced dataset by taking a random subset of SloIE sentences for each expression, equal in size to the balanced dataset. The size and number of sentences of the imbalanced dataset were the same as for the balanced version.

We performed sentence-level classification on the two datasets, predicting IEs present in the training set. The results of the classification are shown in Table 11. The results show that training the model on the balanced dataset did not lead to improved classification accuracy or F1 score. This indicates that MICE is insensitive to this sort of imbalance and performs well even when trained on imbalanced datasets.
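The balanced-subset construction described above can be sketched as follows: for each expression we keep the same number of idiomatic and literal sentences (the size of the smaller group). The field names and label values are assumptions for illustration.

```python
from collections import defaultdict

def balance_by_expression(sentences):
    groups = defaultdict(lambda: {"idiomatic": [], "literal": []})
    for s in sentences:
        groups[s["expression"]][s["label"]].append(s)
    balanced = []
    for g in groups.values():
        k = min(len(g["idiomatic"]), len(g["literal"]))
        balanced.extend(g["idiomatic"][:k] + g["literal"][:k])
    return balanced
```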
Conclusion and Future Work
We showed that contextual word embeddings can be used with neural networks to successfully detect IEs in text. When contextual embeddings (ELMo or mBERT) were used as the first layer of a neural network with the same architecture as the existing MUMULS approach, we were able to obtain much better results. While the existing approaches performed relatively well on the sentence-level classification of IEs that were present in the training set, they failed on token-level tasks and when detecting new IEs, not present in the training set.
We showed that using pretrained contextual word embeddings allows the network to perform better on token-level classification and to successfully generalize to IEs that were not present in the training set. This opens an opportunity for the successful treatment of IEs in many downstream applications. We published our code and models under the CC licence (https://github.com/TadejSkvorc/MICE).
We evaluated our MICE approach on the SloIE dataset, a new, large dataset of Slovene idioms, as well as on the existing multilingual PARSEME datasets.
The SloIE dataset, which we made publicly available (http://hdl.handle.net/11356/1335), is larger than most existing datasets and should therefore be useful for further research into automatic idiom detection. Additionally, we evaluated how the size of the dataset affects the results and showed that our approaches perform well even when trained on smaller datasets.
We also showed that contextual word embeddings are capable of generalizing to other languages. When dealing with similar language pairs (e.g., Slovene-Croatian), both the monolingual ELMo embeddings and the multilingual BERT embeddings were capable of detecting idioms in Croatian text when trained only on Slovene. The multilingual BERT model was able to detect idioms even in some more distant languages, though with reduced classification accuracy and F1 scores. Finally, a Bayesian ensemble of our best models further improved the results on sentence-level classification, indicating that the used contextual embeddings contain at least some complementary information about IEs.
Our work could be improved and extended in multiple ways. We only used embeddings that were pretrained on general text and were not fine-tuned for the specific task of detecting idiomatic language. Several authors have shown [37,8] that specializing embeddings for specific tasks can improve results on a variety of NLP tasks. Several such approaches could be applied to our task and would likely further improve the performance. Additionally, we intentionally used a simple network architecture that could be improved in the future. A further examination of Bayesian ensemble models is also required to determine why the MM model performs well on sentence-level classification but is unable to improve performance when used on token-level classification and when the IEs in the test set are not present in the training set. Finally, to put our models into practical use, we intend to apply MICE models to the task of IE lexicon construction.
Figure 1: The number of literal and idiomatic uses for IEs present in the SloIE dataset. The top panel shows IEs that occur more than 35 times with an idiomatic meaning; the bottom panel shows IEs that occur fewer than 35 times with an idiomatic meaning.
Figure 2: The distribution of F1 scores per IE in the sentence-level task with IEs outside the training set, using MICE with Slovene ELMo embeddings.
Table 1: An overview of the data present in the SloIE dataset.

Sentences             29,400
Tokens               695,636
Idiomatic sentences   24,349
Literal sentences      5,051
Idiomatic tokens      67,088
Literal tokens       626,707
Different IEs             75
Table 2 shows ten randomly-chosen idioms from the SloIE dataset. Seven of the chosen idioms appear more often idiomatically, while three appear more often literally. SloIE is much larger than other existing datasets in terms of the number of sentences. For comparison, the closest to SloIE is the VNC-tokens dataset, which contains 2,984 instances of 53 IEs. We do not expect that datasets of such size will appear soon for most other languages. For that reason, we analyze the size and distribution required for successful IE detection models in Section 5.5. The results could be useful guidelines for creators of similar datasets in other languages.
Table 2: An example of 10 idioms from the dataset with their direct/idiomatic English translations. The columns show the percentages of idiomatic ("yes"), literal ("no"), and ambiguous ("dn" ≈ "don't know") sentences for each idiom, as judged by each of the two annotators.

Expression                      A1 (yes/no/dn)   A2 (yes/no/dn)   Direct / idiomatic translation
barvati kaj s črnimi barvami    50/50/0          50/50/0          paint with dark colors / present pessimistically
kdo nosi hlače                  19/75/5          10/70/19         to wear pants / to be in charge
kdo nosi težak križ             41/50/8          41/50/8          to carry a heavy cross / to be under pressure
kdo pade v naročje              77/13/9          63/13/22         to fall into someone's lap / to achieve with ease
kdo si oblizuje prste           84/4/11          51/17/31         to lick one's fingers / to be very satisfied
kislo jabolko                   37/31/31         25/56/18         sour apple / unpleasant matter
kot bi odrezal                  59/31/9          45/3/51          as if cut off / instantly
letati od cveta do cveta        30/60/10         30/40/25         fly from flower to flower / select without a plan
med in mleko                    46/6/46          40/6/53          honey and milk / abundance
oprati si roke                  85/10/3          66/14/19         wash one's hands / to be innocent
Table 3: An overview of the data present in the PARSEME datasets. Of the 20 languages in the PARSEME corpus, we use 18: we omit Arabic, which is not available as an open language, and Farsi, which does not contain IEs. On average, each language contains 586 IEs.

Language   Sentences   Tokens      IEs
BG         6,913       157,647     417
DE         6,261       120,840     1,005
EL         5,244       142,322     515
EN         7,436       124,203     59
ES         2,502       102,090     196
FA         2,736       46,530      0
FR         17,880      450,221     1,786
HE         4,673       99,790      86
HU         3,569       87,777      92
HR         3,003       69,915      131
IT         15,728      387,325     913
LT         12,153      209,636     229
MT         5,965       141,096     261
PL         11,578      191,239     317
PT         19,640      359,345     820
RO         45,469      778,674     524
SL         8,881       183,285     283
SV         200         3,376       9
TR         16,715      334,880     2,911
Total      196,546     3,990,191   10,554
Table 4: Comparison of results when classifying tokens with the same IEs present in the training and testing sets. Each token was classified as either belonging to an IE with the literal meaning, belonging to an IE with the idiomatic meaning, or not belonging to an IE.

Method                        CA       F1
Default classifier            0.903    0.176
SVM baseline                  0.8756   0.3962
MUMULS                        0.975    0.0659
MICE with Slovene ELMo        0.981    0.912
MICE with mBERT               0.974    0.869
MICE with CroSloEngual BERT   0.972    0.872
MICE-voting                   0.979    0.904
MICE-MM                       0.979    0.907
Table 5: Comparison of results when classifying sentences from the SloIE dataset with the same IEs present in the training and testing sets. Each sentence was classified as either containing an expression with the literal meaning or containing an expression with the idiomatic meaning.

Method                        CA      F1
Default classifier            0.828   0.906
SVM baseline                  0.900   0.942
MUMULS                        0.915   0.948
MICE with Slovene ELMo        0.951   0.980
MICE with mBERT               0.897   0.908
MICE with CroSloEngual BERT   0.921   0.954
MICE-voting                   0.964   0.979
MICE-MM                       0.971   0.982
Table 6: Comparison of results when classifying tokens when the test-set IEs are not present in the training set.

Method                        CA      F1
Default classifier            0.903   0.176
SVM baseline                  0.870   0.029
MUMULS                        0.873   0.000
MICE with Slovene ELMo        0.803   0.866
MICE with mBERT               0.733   0.803
MICE with CroSloEngual BERT   0.759   0.176
MICE-voting                   0.917   0.599
MICE-MM                       0.925   0.662
Table 7: Comparison of results when classifying sentences when the test-set IEs are not present in the training set.

Method                        CA      F1
Default classifier            0.828   0.906
SVM baseline                  0.783   0.689
MUMULS                        0.520   0.672
MICE with Slovene ELMo        0.936   0.964
MICE with mBERT               0.888   0.939
MICE with CroSloEngual BERT   0.914   0.952
MICE-voting                   0.915   0.953
MICE-MM                       0.934   0.963
Table 8: Examples of the easiest and most difficult IEs and their direct/idiomatic translations for the MICE model with Slovene ELMo embeddings.

IE (direct / idiomatic translation)                                                     F1      Number of detected IEs
pospraviti v arhive (to archive / to remove from attention)                             1.0     4
kislo jabolko (sour apple / unpleasant matter)                                          1.0     9
pomešati jabolka in hruške (compare apples and pears / compare the incomparable)        1.0     33
pristati v žepih nekoga (to land in someone's pockets / to steal)                       1.0     28
perje začne frčati (the feathers start to flutter / to make a public uproar)            1.0     19
pospraviti kaj v arhiv (to archive something / to end something)                        0.600   12
imeti krompir (to have a potato / to be lucky)                                          0.597   162
gnilo jajce (rotten egg / unpleasant surprise)                                          0.571   11
kdo nosi hlače (to wear trousers / to be in charge)                                     0.525   218
želodec se obrne (to turn the stomach / to be disgusted)                                0.487   10
Table 9: Results of the multilingual evaluation. The MICE models with Slovene ELMo embeddings were evaluated on Slavic languages similar to Slovene, while the variants with mBERT were tested on all languages in the PARSEME dataset which contain IEs. We report F1 scores and include default classifiers as a reference.

Language     Slovene ELMo   mBERT    Default F1
Slovene      0.8163         0.8359   0.667
Croatian     0.9191         0.8970   0.667
Polish       0.2863         0.6987   0.667
English      -              0.650    0.667
French       -              0.814    0.667
German       -              0.622    0.667
Turkish      -              0.682    0.667
Romanian     -              0.625    0.667
Lithuanian   -              0.689    0.667
Italian      -              0.683    0.667
Hungarian    -              0.555    0.667
Hindi        -              0.562    0.667
Hebrew       -              0.693    0.667
Farsi        -              -        -
Basque       -              0.692    0.667
Spanish      -              0.340    0.667
Greek        -              0.484    0.667
Bulgarian    -              0.601    0.667
Table 10: The effect of dataset size on classification accuracy (CA) and F1 score, using the sentence-level classification task with IEs that appear in the training set and MICE with Slovene ELMo embeddings.

Sentences   CA      F1
27,698      0.903   0.938
17,449      0.906   0.942
9,771       0.902   0.938
4,787       0.870   0.934
2,010       0.894   0.934
703         0.874   0.924
Table 11: The effect of using a balanced dataset on classification accuracy and F1 score. The evaluation was conducted as a sentence-level classification task with IEs appearing in the training set, using MICE with Slovene ELMo embeddings.

Dataset      CA       F1      Default CA   Default F1
Balanced     0.8011   0.766   0.500        0.667
Imbalanced   0.812    0.853   0.625        0.767
Acknowledgements

The research was supported by the Slovene Research Agency through research core funding no. P6-0411 and P6-0215, as well as the projects J6-8256 and J6-2581. This paper is supported by the European Union's Horizon 2020 Programme project EMBEDDIA (Cross-Lingual Embeddings for Less-Represented Languages in European News Media, grant no. 825153). The SloIE dataset was annotated by student annotators Kaja Žvanut, Tajda Liplin-Šerbetar, Karolina Zgaga and Tjaša Jelovšek. A part of it was also annotated by a non-native speaker, Danijela Topić-Vizcaya.
References

[1] I. Korkontzelos, T. Zesch, F. M. Zanzotto, C. Biemann, SemEval-2013 task 5: Evaluating phrasal semantics, in: Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), 2013, pp. 39-47.
[2] A. Savary, C. Ramisch, S. Cordeiro, F. Sangati, V. Vincze, B. QasemiZadeh, M. Candito, F. Cap, V. Giouli, I. Stoyanova, A. Doucet, The PARSEME shared task on automatic identification of verbal multiword expressions, in: Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), Association for Computational Linguistics, 2017, pp. 31-47.
[3] P. Cook, A. Fazly, S. Stevenson, The VNC-tokens dataset, in: Proceedings of the LREC Workshop Towards a Shared Task for Multiword Expressions (MWE 2008), 2008, pp. 19-22.
[4] Y. LeCun, Y. Bengio, G. Hinton, Deep learning, Nature 521 (2015) 436-444.
[5] X. Zhang, J. Zhao, Y. LeCun, Character-level convolutional networks for text classification, in: Advances in Neural Information Processing Systems, 2015, pp. 649-657.
[6] Y. Kim, Y. Jernite, D. Sontag, A. M. Rush, Character-aware neural language models, in: AAAI, 2016, pp. 2741-2749.
[7] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, L. Zettlemoyer, Deep contextualized word representations, in: Proceedings of NAACL-HLT, 2018, pp. 2227-2237.
[8] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, 2019, pp. 4171-4186.
[9] T. Mikolov, Q. V. Le, I. Sutskever, Exploiting similarities among languages for machine translation, arXiv preprint arXiv:1309.4168 (2013).
[10] J. Pennington, R. Socher, C. Manning, GloVe: Global vectors for word representation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1532-1543.
[11] P. Bojanowski, E. Grave, A. Joulin, T. Mikolov, Enriching word vectors with subword information, Transactions of the Association for Computational Linguistics 5 (2017) 135-146.
[12] C. Liu, R. Hwa, Representations of context in recognizing the figurative and literal usages of idioms, in: Thirty-First AAAI Conference on Artificial Intelligence, 2017, pp. 3230-3236.
[13] N. Klyueva, A. Doucet, M. Straka, Neural networks for multi-word expression detection, in: Proceedings of the 13th Workshop on Multiword Expressions (MWE 2017), 2017, pp. 60-65.
[14] K. Cho, B. van Merriënboer, C. Gulcehre, D. Bahdanau, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase representations using RNN encoder-decoder for statistical machine translation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2014, pp. 1724-1734.
[15] C. Ramisch, S. R. Cordeiro, A. Savary, V. Vincze, V. Barbu Mititelu, A. Bhatia, M. Buljan, M. Candito, P. Gantar, V. Giouli, T. Güngör, A. Hawwari, U. Iñurrieta, J. Kovalevskaitė, S. Krek, T. Lichte, C. Liebeskind, J. Monti, C. Parra Escartín, B. QasemiZadeh, R. Ramisch, N. Schneider, I. Stoyanova, A. Vaidya, A. Walsh, Edition 1.1 of the PARSEME shared task on automatic identification of verbal multiword expressions, in: Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), 2018, pp. 222-240.
[16] G. Berk, B. Erden, T. Güngör, Deep-BGT at PARSEME shared task 2018: Bidirectional LSTM-CRF model for verbal multiword expression identification, in: Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), 2018, pp. 248-253.
[17] R. Ehren, T. Lichte, Y. Samih, Mumpitz at PARSEME shared task 2018: A bidirectional LSTM for the identification of verbal multiword expressions, in: Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), 2018, pp. 261-267.
[18] T. Boroş, R. Burtica, GBD-NER at PARSEME shared task 2018: Multiword expression detection using bidirectional long-short-term memory networks and graph-based decoding, in: Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018), 2018, pp. 254-260.
[19] C. Sporleder, L. Li, Unsupervised recognition of literal and non-literal use of idiomatic expressions, in: Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), 2009, pp. 754-762.
[20] C. Liu, R. Hwa, Heuristically informed unsupervised idiom usage recognition, in: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, 2018, pp. 1723-1731.
[21] M. Fadaee, A. Bisazza, C. Monz, Examining the tip of the iceberg: A data set for idiom translation, arXiv preprint arXiv:1802.04681 (2018).
[22] M. Ulčar, M. Robnik-Šikonja, High quality ELMo embeddings for seven less-resourced languages, in: Proceedings of the 12th Language Resources and Evaluation Conference, European Language Resources Association, Marseille, France, 2020, pp. 4731-4738.
[23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[24] A. Wang, A. Singh, J. Michael, F. Hill, O. Levy, S. Bowman, GLUE: A multi-task benchmark and analysis platform for natural language understanding, in: Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, 2018, pp. 353-355.
[25] S. Krek, P. Gantar, Š. A. Holdt, V. Gorjanc, Nadgradnja korpusov Gigafida, Kres, ccGigafida in ccKres, in: Proceedings of Language Technologies and Digital Humanities, 2016, pp. 200-202.
[26] M. Ulčar, M. Robnik-Šikonja, FinEst BERT and CroSloEngual BERT: Less is more in multilingual models, in: Proceedings of Text, Speech, and Dialogue, TSD 2020, 2020, pp. 104-111.
[27] N. Ljubešić, T. Erjavec, hrWaC and slWaC: Compiling web corpora for Croatian and Slovene, in: International Conference on Text, Speech and Dialogue, Springer, 2011, pp. 395-402.
[28] D. Ćavar, D. B. Rončević, Riznica: The Croatian language corpus, Prace Filologiczne 63 (2012) 51-65.
[29] G. Pirš, E. Štrumbelj, Bayesian combination of probabilistic classifiers using multivariate normal mixtures, Journal of Machine Learning Research 20 (2019) 1892-1909.
[30] K. Miok, G. Pirš, M. Robnik-Šikonja, Bayesian methods for semi-supervised text annotation, arXiv preprint arXiv:2010.14872 (2020).
[31] P. Gantar, S. Krek, Slovene lexical database, in: Natural Language Processing, Multilinguality: 6th International Conference, 2011, pp. 72-80.
[32] S. Ruder, I. Vulić, A. Søgaard, A survey of cross-lingual word embedding models, Journal of Artificial Intelligence Research 65 (2019) 569-631.
[33] M. Artetxe, H. Schwenk, Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond, Transactions of the Association for Computational Linguistics 7 (2019) 597-610.
[34] M. Robnik-Šikonja, K. Reba, I. Mozetič, Cross-lingual transfer of Twitter sentiment models using a common vector space, 2020. URL: https://arxiv.org/pdf/2005.07456.
[35] E. Linhares Pontes, J. G. Moreno, A. Doucet, Linking named entities across languages using multilingual word embeddings, in: 20th ACM/IEEE Joint Conference on Digital Libraries, JCDL 2020, 2020.
[36] T. Schuster, O. Ram, R. Barzilay, A. Globerson, Cross-lingual alignment of contextual word embeddings, with applications to zero-shot dependency parsing, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp. 1599-1613. doi:10.18653/v1/N19-1162.
[37] X. L. Li, J. Eisner, Specializing word embeddings (for parsing) by information bottleneck, in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2744-2754.
[
"A SELF-SUPERVISED APPROACH FOR SEMANTIC INDEXING IN THE CONTEXT OF COVID-19 PANDEMIC",
"A SELF-SUPERVISED APPROACH FOR SEMANTIC INDEXING IN THE CONTEXT OF COVID-19 PANDEMIC"
] | [
"Nima Ebadi nima.ebadi@utsa.edu \nDepartment of Electrical and Computer Engineering\nDepartment of Information Systems and Security\nUniversity of Texas at San Antonio San Antonio\nUniversity of Texas at San Antonio San Antonio\n78249, 78249SA, TX\n",
"Peyman Najafirad peyman.najafirad@utsa.edu \nDepartment of Electrical and Computer Engineering\nDepartment of Information Systems and Security\nUniversity of Texas at San Antonio San Antonio\nUniversity of Texas at San Antonio San Antonio\n78249, 78249SA, TX\n"
] | [
"Department of Electrical and Computer Engineering\nDepartment of Information Systems and Security\nUniversity of Texas at San Antonio San Antonio\nUniversity of Texas at San Antonio San Antonio\n78249, 78249SA, TX",
"Department of Electrical and Computer Engineering\nDepartment of Information Systems and Security\nUniversity of Texas at San Antonio San Antonio\nUniversity of Texas at San Antonio San Antonio\n78249, 78249SA, TX"
] | [] | The pandemic has accelerated the pace at which COVID-19 scientific papers are published. In addition, the process of manually assigning semantic indexes to these papers by experts is even more time-consuming and overwhelming in the current health crisis. Therefore, there is an urgent need for automatic semantic indexing models which can effectively scale-up to newly introduced concepts and rapidly evolving distributions of the hyperfocused related literature. In this research, we present a novel semantic indexing approach based on the state-of-the-art self-supervised representation learning and transformer encoding exclusively suitable for pandemic crises. We present a case study on a novel dataset that is based on COVID-19 papers published and manually indexed in PubMed. Our study shows that our self-supervised model outperforms the best performing models of BioASQ Task 8a by micro-F1 score of 0.1 and LCA-F score of 0.08 on average. Our model also shows superior performance on detecting the supplementary concepts which is quite important when the focus of the literature has drastically shifted towards specific concepts related to the pandemic. Our study sheds light on the main challenges confronting semantic indexing models during a pandemic, namely new domains and drastic changes of their distributions, and as a superior alternative for such situations, propose a model founded on approaches which have shown auspicious performance in improving generalization and data efficiency in various NLP tasks. We also show the joint indexing of major Medical Subject Headings (MeSH) and supplementary concepts improves the overall performance. | null | [
"https://arxiv.org/pdf/2010.03544v1.pdf"
] | 222,177,088 | 2010.03544 | 2126665a067a9bee613a3581322ece01c67ea522 |
A SELF-SUPERVISED APPROACH FOR SEMANTIC INDEXING IN THE CONTEXT OF COVID-19 PANDEMIC
October 8, 2020
Nima Ebadi nima.ebadi@utsa.edu
Department of Electrical and Computer Engineering
Department of Information Systems and Security
University of Texas at San Antonio San Antonio
University of Texas at San Antonio San Antonio
78249, 78249SA, TX
Peyman Najafirad peyman.najafirad@utsa.edu
Department of Electrical and Computer Engineering
Department of Information Systems and Security
University of Texas at San Antonio San Antonio
University of Texas at San Antonio San Antonio
78249, 78249SA, TX
A SELF-SUPERVISED APPROACH FOR SEMANTIC INDEXING IN THE CONTEXT OF COVID-19 PANDEMIC
October 8, 2020
The pandemic has accelerated the pace at which COVID-19 scientific papers are published. In addition, the process of manually assigning semantic indexes to these papers by experts is even more time-consuming and overwhelming in the current health crisis. Therefore, there is an urgent need for automatic semantic indexing models which can effectively scale-up to newly introduced concepts and rapidly evolving distributions of the hyperfocused related literature. In this research, we present a novel semantic indexing approach based on the state-of-the-art self-supervised representation learning and transformer encoding exclusively suitable for pandemic crises. We present a case study on a novel dataset that is based on COVID-19 papers published and manually indexed in PubMed. Our study shows that our self-supervised model outperforms the best performing models of BioASQ Task 8a by micro-F1 score of 0.1 and LCA-F score of 0.08 on average. Our model also shows superior performance on detecting the supplementary concepts which is quite important when the focus of the literature has drastically shifted towards specific concepts related to the pandemic. Our study sheds light on the main challenges confronting semantic indexing models during a pandemic, namely new domains and drastic changes of their distributions, and as a superior alternative for such situations, propose a model founded on approaches which have shown auspicious performance in improving generalization and data efficiency in various NLP tasks. We also show the joint indexing of major Medical Subject Headings (MeSH) and supplementary concepts improves the overall performance.
1 INTRODUCTION AND BACKGROUND
To facilitate literature search and storage, curators at the National Library of Medicine (NLM) annotate every article with a set of concepts from established categorical semantic terminologies. [1] This annotation process of scientific articles is generally referred to as semantic indexing. Nevertheless, the manual process of biomedical semantic indexing is time-consuming and financially expensive. [2,3] Therefore, several automated semantic indexing models have been proposed in the literature, including NLM's official Medical Text Indexing tool (MTI). [4,5,6,7,8,9] A pandemic, however, is an extreme scenario which highlights the importance of automated semantic indexing, as researchers desperately require a well compartmentalized database to gain insights about recent findings. [10] During the current pandemic, related papers are being published at a much faster pace, [11] and the focus of the literature has drastically shifted towards COVID-19 related topics and subtopics, [12] some of which did not have a standard name until a couple of months ago. [13] Such conditions cause challenges for automatic semantic indexing systems that are based on substantial supervision and hand-coded features. Despite the importance of semantic indexing in the pandemic situation, there is a lack of study on the performance of such automated models on the rapidly evolving corpus of COVID-19 related documents. [10] In this research, we present a case study on the state-of-the-art semantic indexing models in the context of the COVID-19 pandemic. We analyze the key challenges of these models under various evaluation (training and testing) schemes. We find that the key aspects of the pandemic causing challenges for automatic semantic indexing models are the abrupt changes in the distribution of these indexes, the rapid growth of specific topics covering a few indexes from a relatively large set of indexes, and the lack of standard terms for newly introduced topics.
In this research, we attempt to tackle the problem of semantic indexing exclusively in the pandemic situation. We propose a novel semantic indexing methodology suited to the aforementioned challenges, i.e., one that is able to effectively scale up to the COVID-19 literature. Inspired by the state-of-the-art performance of self-supervised learning (SSL) models in various NLP, [14,15] and BioNLP, [16] tasks (specifically their generalization and data efficiency capabilities), as well as by the best performing models in BioASQ Task 8a, [5,6] we design our methodology around transformer encoding and an attention mechanism between the document and candidate indexes. Our experimental results establish our model as a superior alternative to the best-performing models of the BioASQ challenge during health crisis situations like the current one. The main contributions of this study are as follows:
1. We propose a novel semantic indexing approach which can effectively scale up to new distributions, and is therefore suitable for emergency situations like the current pandemic, where the related literature is rapidly evolving.
2. Our study brings attention to the main challenges confronting semantic indexing models in pandemic crises, and attempts to address them by proposing a novel model inspired by the best-performing models of the BioASQ challenge but, unlike them, able to leverage self-supervised representation learning and a transformer language model to improve efficiency and generalization.

3. We present a case study on a novel semantic indexing dataset that is based on the COVID-19 related research articles published and manually indexed in PubMed. We use flat and hierarchical measures to evaluate the performance of our model along with the state-of-the-art benchmarks. Our study demonstrates the superiority of our self-supervised approach in scaling to the novel pandemic situation with the relatively small amount of labeled data available.
4. We also discuss the importance of more fine-grained categorization of documents into supplementary concepts, and show that their indexing can actually improve MeSH indexing performance when performed simultaneously. In addition to major MeSH indexing, we evaluate the performance of simultaneous indexing of both major MeSH and supplementary concepts.
5. This paper aims to offer some aid in the process of semantic indexing of the novel COVID-19 literature so as to lighten the load on NLM indexers.
1.1 Biomedical Semantic Indexing
Biomedical literature has been collected by the National Library of Medicine (NLM) for the last 150 years. As of 2020, the PubMed database contains about 30 Million biomedical journal citations. This number has risen from 12 Million citations in 2004 to 30 Million citations in 2020, a growth rate of about 4% per year. Through a laborious process, NLM curators fully examine every document and annotate it with a set of hierarchically-organized terminologies developed by NLM, called Medical Subject Headings (MeSH 1 ), along with supplementary concepts for more fine-grained categorization. [17] In 2019, more than 900K biomedical citations were added to PubMed and manually indexed into more than 29K MeSH concept categories 2 .
In light of the size and growth rate of such databases, several automated models have been developed to improve the time-consuming and financially expensive process of biomedical semantic indexing, through annual competitions such as BioASQ Task a [18] and the models presented there, [6,5] as well as other BioNLP research venues. [19,9,7] These approaches are based either on i) simple retrieval systems: the SNOKES team, which participated in the 6th BioASQ, uses search engine methods along with the UIMA concept extractor; [20] Iria, another participating team, combines an ensemble of the best performing models from previous years' challenges with k-NN MeSH masking algorithms; [21] Segura et al. utilize ElasticSearch to manage the "scalability" issue of the task together with the enhanced NLM Medical Text Indexing (MTI); [8] and Zavorin et al. combine learning-to-rank with Medical Text Indexing; [9] or on ii) deep learning models with substantial hand-coded features and supervision. DeepMeSH, the best performing model of several editions of the BioASQ challenge, combines document-to-vector models with crafted features from the document and MeSH indexes, along with ensemble models fed by those features. Other deep learning approaches include UIMA concept extractor links, [5] and AUTH, which also uses a document-to-vector approach with an ensemble of machine learning classifiers (SVM) fed with document-MeSH features. [17] Jin et al. and Xun et al. combined retrieval systems with deep recurrent neural networks and attention mechanisms and also provide explainability for MeSH indexing decisions. [6,7] The amount of hand-crafted features and supervision required by these models makes it difficult for them to effectively scale up as the biomedical databases do during pandemic crises. [22]
1.2 Semantic Indexing in Pandemic
These semantic indexing models are designed to perform well in normal situations, when there is no particular interest in specific concepts, and are evaluated based on their overall performance on all major MeSH indexes. [23] In the pandemic situation, however, the focus of the literature has drastically shifted towards the specific concepts and sub-concepts related to the current Coronavirus disease. The number of published documents related to Coronavirus has risen from a few articles per month to more than 10K articles in June 2020; roughly 1 out of every 11.5 citations is about Coronavirus these days. [24] The rapidly growing and evolving literature of COVID-19 causes challenges for automatic semantic indexing models. [10] Previously introduced semantic indexing models are based on supervised learning approaches and heavily hand-coded features; therefore, they require a significant amount of labeled data regarding a specific concept to show decent performance in indexing related documents, and they struggle to effectively scale up to newly introduced terminologies and sub-concepts. They are thereby not suitable for emergency situations like the ongoing health crisis.
On the other hand, self-supervised learning (SSL), where a model is initially trained on a data-rich unsupervised pretext task and then fine-tuned on a downstream task, has recently emerged as an effective technique in almost every deep learning problem, ranging from computer vision, [25,26] NLP, [27,15,28] and Bioinformatics, [29,30] to IoT security. [31,32,33] Self-supervised learning is known to enhance data efficiency, [34] and generalization, [35] because the SSL-based model learns some general auxiliary knowledge from the pretext task that allows it to "understand" the downstream task better. [36] Therefore, SSL-based models are more robust to changing domains and better at scaling up to new distributions. [37] For a deep SSL algorithm to be effective, the pretext learning process should be susceptible to downstream learning. [36] However, for the semantic indexing models proposed in the literature, either the architecture or the representation learning objective of the downstream task does not allow pre-training in an unsupervised manner that is useful for downstream learning. In DeepMeSH, only the TF-IDF vectorization can be updated with unlabeled data, without any word-level representation learning. [5] In AttenMeSH, the word-level encoding of the document can be updated by pre-training the Bi-GRU on a masked language modeling (MLM) pretext task, but these encodings would not be adequate for the downstream task because i) they cannot involve the document-index attention in the encoding, and ii) more sophisticated architectures, such as transformers, have been proven to be more effective in this regard. [38]

2 MATERIALS AND METHODS
2.1 Self-supervised Learning Pre-text Task and Document Representations
For the initial representations of documents, we use the word representations provided by the BioASQ organizers, a pre-trained word embedding trained on a large-scale corpus of biomedical documents 3 . We also use the BioASQ-provided word tokenizer to parse each document's title and abstract into a list of its constituent words 4 . We perform stemming and eliminate stop words, since different variants of an individual word or stop words do not affect the semantic index of a document.
Afterwards, the bag-of-words representation of each document is computed according to the following equation:

$$D = [w_1, w_2, \ldots, w_{|D|}] \quad \text{where } w_j \in \mathbb{R}^{d_{e_1}} \tag{1}$$

where $|D|$ is the number of non-stopwords in the document, and $d_{e_1}$ is the word embedding size.
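As a concrete illustration, the following is a minimal sketch of how the preprocessing and the bag-of-words matrix of Eq. 1 could be assembled. The `tokenize`, `stem`, `embeddings`, and `stop_words` arguments stand in for the BioASQ-provided tokenizer, a stemmer, the pre-trained biomedical word vectors, and a stopword list, none of which are specified here; the embedding size of 200 is only illustrative.

```python
import numpy as np

def document_matrix(text, tokenize, stem, embeddings, stop_words, d_e1=200):
    """Build D = [w_1, ..., w_|D|] of Eq. 1 from a title+abstract string.

    `embeddings` is assumed to be a dict mapping a stemmed word to a
    d_e1-dimensional vector; unknown words fall back to a zero vector.
    """
    tokens = [stem(t.lower()) for t in tokenize(text)
              if t.lower() not in stop_words]
    # one d_e1-dimensional row per surviving (non-stopword) token
    rows = [embeddings.get(t, np.zeros(d_e1)) for t in tokens]
    return np.stack(rows) if rows else np.zeros((0, d_e1))
```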
For the model to get acquainted with the COVID-19 context, we leverage an unlabeled corpus of COVID-19 related documents, called CORD-19, [40] which is significantly larger than our labeled dataset of indexed documents. We pre-train a bi-directional transformer on a masked language modeling (MLM) task with the word-tokenized representation of the documents shown in Eq. 1, along with those of candidate similar indexes 5 . [38,41] Such algorithms have shown promising results in many NLP problems involving deep semantic analysis of scientific documents, such as SciBERT and BioBERT. [42,16] In this regard, the document representation D is masked, fed to the transformer model, and passed through a positional encoding approach following the original implementation of transformers. [38] Similar semantic indexes are also fed to the transformer, and through a joint document-index attention, an index-specific encoding of the document is generated. The retrieval and embedding of candidate indexes is discussed in the next section. Next, the transformer model is trained to predict the masked tokens from the index-specific encoding (similar to Fig. 1(b), but with the softmax applied over the index axis). Unlike BERT and BioBERT, we do not leverage the next sentence prediction (NSP) task, for two reasons: 1) sentence ordering is required for QA-type inference, not for semantic indexing and text classification; and 2) it has been proven to be ineffective. [15,14]
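For reference, the sketch below shows BERT-style input corruption for the MLM pretext task. The 15% masking rate and the -100 ignore-label convention follow common MLM practice rather than values stated in this paper, and the `[MASK]` token id is a hypothetical placeholder.

```python
import numpy as np

MASK_ID = 0  # hypothetical id of the [MASK] token in the vocabulary

def mask_for_mlm(token_ids, mask_prob=0.15, rng=np.random.default_rng()):
    """BERT-style masking for the MLM pretext task.

    Returns (corrupted input, labels); positions that were not masked
    get label -100 so a standard cross-entropy loss can ignore them.
    """
    token_ids = np.asarray(token_ids)
    mask = rng.random(token_ids.shape) < mask_prob
    labels = np.where(mask, token_ids, -100)     # predict only masked slots
    corrupted = np.where(mask, MASK_ID, token_ids)
    return corrupted, labels
```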
2.2 Candidate MeSH Retrieval and Index Representations
Inspired by Jin et al., [6] for major MeSH indexes we initially use a retrieval system to retrieve a subset of related MeSH categories from relevant documents 6 . Note that we only perform this retrieval process for major MeSH indexes, not for supplementary concepts, since there are only 19 of them in the COVID-19 dataset.
In this regard, we translate the target biomedical document into a query to extract the relevant documents from the annotated database. We follow the same pre-processing of parsing, stemming and stopword removal as in Section 2.1. Every document is represented by both the TF-IDF and the BM25 weighted sums of its words, following the weighting schemes of Wang et al. [43] and Paik et al., [44] respectively.
Each document, treated as a query, is represented as follows:

$$d = \frac{\sum_{i=1}^{n} \text{TF-IDF/BM25}(w_i, d) \times v_{w_i}}{\sum_{i=1}^{n} \text{TF-IDF/BM25}(w_i, d)} \tag{2}$$

where $w_i$ is the $i$-th word in document $d$, and $v_{w_i}$ is the word vector from the provided pre-trained embeddings.
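A minimal sketch of Eq. 2 is given below. The `weight` callable stands in for either weighting scheme (in practice a TF-IDF or BM25 function closing over corpus statistics), and `embeddings` is again the pre-trained word-vector table.

```python
import numpy as np

def query_vector(doc_tokens, weight, embeddings):
    """Weighted-average document vector of Eq. 2.

    `weight(w, doc_tokens)` returns the TF-IDF or BM25 weight of word
    w in this document; `embeddings[w]` is the pre-trained vector v_w.
    """
    num = sum(weight(w, doc_tokens) * embeddings[w] for w in doc_tokens)
    den = sum(weight(w, doc_tokens) for w in doc_tokens)
    if den == 0:  # degenerate case: no weighted words survive
        return np.zeros_like(next(iter(embeddings.values())))
    return num / den
```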
Next, using cosine similarity scores between the target document and the others, we find the K most relevant documents. We then use a scoring scheme to re-rank and collect the candidate MeSH indexes: we score every MeSH term by summing its IDF weights over the retrieved documents and rank the terms. The top M terms with the highest scores are considered for indexing and passed to the next stage.
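The sketch below illustrates this retrieval and scoring step. The values K=20 and M=256 are illustrative placeholders, not the tuned hyperparameters of the paper; `doc_mesh[i]` is assumed to be the gold MeSH set of the i-th annotated training document.

```python
import numpy as np

def candidate_mesh(query_vec, doc_vecs, doc_mesh, idf, K=20, M=256):
    """Retrieve the top-M candidate MeSH indexes for one document."""
    # cosine similarity between the query and every annotated document
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec) + 1e-12)
    neighbors = np.argsort(-sims)[:K]
    # score each MeSH term by summing its IDF weight over the neighbors
    scores = {}
    for i in neighbors:
        for mesh in doc_mesh[i]:
            scores[mesh] = scores.get(mesh, 0.0) + idf[mesh]
    return sorted(scores, key=scores.get, reverse=True)[:M]
```

This module acts as a coarse filter, so that the downstream classifier only has to discriminate among a few hundred plausible indexes rather than all 29K MeSH terms.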
Semantic index representations are quite straightforward to extract, as the indexes are single words. The index embeddings are as follows:

$$M = [m_1, m_2, \ldots, m_{|M|}] \quad \text{where } m_j \in \mathbb{R}^{d_{e_2}} \tag{3}$$

where $|M|$ is the number of filtered indexes, and $d_{e_2}$ is the embedding size of every index $m_j$. To simplify the model, we set $d_{e_1} = d_{e_2} = d_{model}$.
2.3 Index Specific Context Vectors
After the candidate indexes are retrieved, the document BoW representation D, along with that of the indexes M, is fed to the bi-directional transformer pre-trained on the self-supervised pretext task. Positional encoding is performed for documents, but not for indexes, to bring in word ordering. Initially, D and M are separately encoded into D' and M' with a self-attention mechanism, allowing words to attend only to other words and indexes to attend only to other indexes (there is no cross-attention between words and indexes). Self-attention over D captures context-aware representations of words, and over M captures correlations and dependencies between indexes, which have been shown to be important. [5,45] Next, cross-attention between the encodings of words and indexes is computed using the scaled dot-product attention function, [38,46] as follows:
$$O = \text{Softmax}\left(\frac{M' D'^{T}}{\sqrt{d_{model}}}\right) D' \in \mathbb{R}^{|M| \times d_{model}} \tag{4}$$

where $M' D'^{T} \in \mathbb{R}^{|M| \times |D|}$ is the dot product between every index and every word, packed together into a matrix multiplication. The softmax is performed over the word axis to obtain attention weights for every index. Finally, $O$ holds the index-specific context vectors, each of which is a weighted sum of the word vectors.
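For concreteness, Eq. 4 amounts to the following few lines; `M_enc` and `D_enc` denote the self-attended encodings written as M' and D' above.

```python
import numpy as np

def index_context_vectors(M_enc, D_enc, d_model):
    """Scaled dot-product cross-attention of Eq. 4.

    M_enc: |M| x d_model self-attended index encodings.
    D_enc: |D| x d_model self-attended word encodings.
    Returns O, one context vector per candidate index.
    """
    logits = M_enc @ D_enc.T / np.sqrt(d_model)      # |M| x |D|
    weights = np.exp(logits - logits.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)    # softmax over the word axis
    return weights @ D_enc                           # |M| x d_model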
2.4 Projection Layer and Final Prediction
To compute the likelihood scores of the indexes, we apply a linear projection layer with a non-linear activation function $\sigma$ to the index-specific context vectors $O$ and the index encodings $M'$, as in the following equation:

$$\hat{Y} = \sigma(U \cdot O^{T} + V \cdot M'^{T} + B) \tag{5}$$

where $\hat{Y} \in \mathbb{R}^{|M| \times 1}$ is the set of likelihood scores for the candidate indexes, and $U, V \in \mathbb{R}^{1 \times d_{model}}$ and $B \in \mathbb{R}^{|M| \times 1}$ are trainable parameters.
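A sketch of Eq. 5 follows. The choice of a sigmoid for σ is an assumption (natural for independent per-index likelihoods); the paper only specifies a non-linear activation.

```python
import numpy as np

def likelihood_scores(O, M_enc, U, V, B):
    """Projection layer of Eq. 5, with a sigmoid assumed as σ.

    O, M_enc: |M| x d_model; U, V: 1 x d_model; B: |M| x 1.
    Returns the |M| x 1 vector of per-index likelihood scores.
    """
    logits = U @ O.T + V @ M_enc.T + B.T              # 1 x |M|
    return (1.0 / (1.0 + np.exp(-logits))).T          # |M| x 1
```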
Finally, the predicted indexes are computed by thresholding every likelihood score. The thresholds are set by maximizing the micro F-measure on the training set, following [47].

3 RESULTS
3.1 Dataset
For the self-supervised representation learning (pre-training) stage of our methodology, we use the CORD-19 dataset, which includes 141K research articles about Coronavirus published in peer-reviewed venues and archival services such as bioRxiv 7 and medRxiv 8 . [40] These articles are crawled from various medical databases including PubMed's PMC (using the query: COVID-19 and coronavirus research), a COVID-19 corpus maintained by WHO, Elsevier, and the aforementioned archival services. A great portion (i.e. 48K) of these articles have been published in 2020, in the context of the pandemic.

For supervised training we use two sets: i) BioASQ Task 8a Test Sets (from 2015-2019): the training set consists of 2,501,982 annotated articles from PubMed. MeSH labels are manually assigned to the articles by National Library of Medicine indexers. The dataset includes the journal in which each article has been published, and the article's title and abstract along with the MeSH indexes for the training sets. On average, each article is indexed with 12.84 MeSH categories. [18] ii) Recently collected COVID-19 related documents from PubMed: we use the 13K latest documents related to COVID-19, published and annotated in 2020, crawled from PubMed using the query: covid-19 AND severe acute respiratory syndrome 2 AND sars-cov-2, [24] and the evaluation results are calculated on this dataset. Table 1 provides information about the statistics of the datasets. We utilize the major MeSH indexes from both sets, as well as the supplementary concepts from our set, to measure performance in detail. As shown in Figure 2, major MeSH indexes and supplementary concepts form a Directed Acyclic Graph (DAG) where there is a hierarchical (parent-child) relation between two major MeSH terms and a mapping relation between MeSH and supplementary concepts.
3.2 Experimental Setup
For the MeSH retrieval part of our methodology, we use BM25/TF-IDF bag-of-words representation methods with a vocabulary size of 90K. The retrieval components, i.e. the global vectorization features as well as the thresholding features, are trained using the training set only, to avoid data bleeding. [48] The bi-directional transformer is implemented using TensorFlow (2.0) Eager, [49,50] and the tensor2tensor 9 library. The hyperparameter values are shown in Table 2. We use the Adam optimizer and early stopping strategies. [51] We apply three versions of our methodology: i) base bi-transformer without self-supervised training, where the model is not pre-trained on the CORD-19 dataset (parts b and c of Figure 1, with random initialization of the parameters); ii) base bi-transformer with masked language modeling (MLM) as the self-supervised pretext task; and iii) large bi-transformer with the MLM self-supervised task.

Table 2: Hyperparameter values. We use bold text for the optimal ones among all tried values. * refers to those for Bi-Trans Large.
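The early stopping strategy used with Adam can be sketched as the generic loop below. The patience value is illustrative rather than the paper's setting, and the Keras-style `get_weights`/`set_weights` checkpointing is an assumption about the interface.

```python
def train_with_early_stopping(model, train_epoch, eval_score,
                              patience=5, max_epochs=100):
    """Generic early-stopping training loop.

    `train_epoch(model)` runs one epoch of optimization and
    `eval_score(model)` returns a validation metric (e.g. micro-F).
    """
    best, wait, best_weights = -float("inf"), 0, None
    for _ in range(max_epochs):
        train_epoch(model)
        score = eval_score(model)
        if score > best:                      # new best: checkpoint and reset
            best, wait, best_weights = score, 0, model.get_weights()
        else:
            wait += 1
            if wait >= patience:              # no improvement for too long
                break
    model.set_weights(best_weights)           # restore the best checkpoint
    return model
```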
3.3 Evaluation Metrics
Following the BioASQ challenge, we evaluate the performance of the semantic indexing models based on two sets of evaluation measures: i) flat: accuracy, micro and macro F-measures; and ii) hierarchical: Lowest Common Ancestor F-measure (LCA-F).
Accuracy is the fraction of correct predictions. However, in multi-label classification problems the true and predicted classes can each be a set of labels for every example; therefore, there is an additional notion of partially correct predictions. To capture this, precision and recall measures are computed for every class separately. Then, the results are aggregated using micro-averaging and macro-averaging strategies to compute micro (MiP/MiR) and macro (MaP/MaR) precision/recall, respectively. Micro-averaging evaluates the average difference between the predicted labels and the actual labels globally for each test example, and then averages over all examples in the test set. The second strategy is macro-averaging, in which each label is evaluated separately and the results are then averaged over all labels. Finally, the micro and macro F-measures (MiF and MaF) are computed as the harmonic mean of the corresponding precision and recall. MiF is more affected by the performance of frequent indexes, while MaF treats every index equally. [52] Following BioASQ, MiF is the major flat measure in our presented case study.
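A minimal sketch of the two flat averaging strategies, under the standard global-count definition of micro-averaging, is given below for binary multi-label matrices.

```python
import numpy as np

def micro_macro_f1(y_true, y_pred):
    """Micro- and macro-averaged F1 for multi-label indexing.

    y_true, y_pred: 0/1 numpy matrices of shape (n_docs, n_indexes).
    """
    tp = (y_true * y_pred).sum(axis=0).astype(float)
    fp = ((1 - y_true) * y_pred).sum(axis=0).astype(float)
    fn = (y_true * (1 - y_pred)).sum(axis=0).astype(float)
    # micro: aggregate counts globally, then compute a single F1
    mi_p = tp.sum() / max(tp.sum() + fp.sum(), 1e-12)
    mi_r = tp.sum() / max(tp.sum() + fn.sum(), 1e-12)
    micro = 2 * mi_p * mi_r / max(mi_p + mi_r, 1e-12)
    # macro: per-index F1, then an unweighted mean over indexes
    p = tp / np.maximum(tp + fp, 1e-12)
    r = tp / np.maximum(tp + fn, 1e-12)
    macro = (2 * p * r / np.maximum(p + r, 1e-12)).mean()
    return micro, macro
```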
As shown in Figure 2, semantic indexes have hierarchical relations to one another. Therefore, in addition to flat measures, hierarchical measures are also used to evaluate the hierarchical classification performance of the semantic indexing models. In this regard, we leverage the Lowest Common Ancestor F-measure (LCA-F) algorithm provided by Kosmopoulos et al., [53] which is the same algorithm used in the BioASQ challenge 10 . In the LCA-F measure, the sets of true and predicted classes are compared based on the union of their corresponding augmented graphs, which encompass all the lowest common ancestors between every pair. The algorithm has shown desirable results in various hierarchical text classification tasks.

3.4 Major MeSH Indexing

Table 3 shows the performance of the semantic indexing models when indexing only major subject headings (i.e. major MeSH). The models are trained on BioASQ data from 2015-2019 (excluding the recent COVID-19 documents) as well as the COVID training set. They are tested on the COVID testing set. As shown in Table 3, semantic indexing models which are based on deep representation learning algorithms demonstrate better performance in scaling. The higher performance of the self-supervised models reveals that the models learn some sort of common sense (acquire general knowledge) about the pandemic and the new distribution of major MeSH indexes. The large transformer model also shows better capacity to scale up to new concepts.
3.5 Efficiency w.r.t. COVID-19 Training Data
To evaluate how efficiently the semantic indexing models scale up to the novel Coronavirus related literature, we chronologically sort the COVID-19 training dataset, and train each model with the following proportions of the data to evaluate their zero- and few-shot performance along with their data efficiency: 0.0 (zero-shot evaluation), 0.05, 0.1, 0.2, 0.5 and 1 (the whole data). Figure 3 shows the MeSH indexing performance of the top performing models from Table 3 as a function of the size of the exclusive COVID-19 training data. The beginning performance is their zero-shot performance, when the models have only been trained on the BioASQ dataset and have not yet seen a COVID-19 paper. The SSL-based versions of our BioTrans model achieve substantially superior performance until almost half of the training data is fed, especially at the very beginning. They reach 0.95 of their optimum performance with only 0.2 of the data. Other models reach this point once half of the training data is fed, which implies a delay of a couple of months, an essential issue during a pandemic crisis. Our BioTrans model which is not pre-trained on the COVID-19 SSL dataset does not learn effectively with the COVID-19 supervised data, and simply follows a learning speed similar to those of AttenMeSH and DeepMeSH, as it has been inspired by these techniques. This shows that the major strength of using such an architecture, bi-directional transformer encoding with attention between documents and indexes, emerges when it undergoes a self-supervised learning process.

Figure 3: MeSH indexing performance w.r.t. the size of COVID-19 training data, based on their Micro-F score. COVID-19 related data is chronologically ordered and then divided; therefore, the horizontal axis is directly related to the date these papers were published.
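The chronological subsetting behind this experiment can be sketched as follows; `docs` is assumed to be a list of (publication date, document) pairs.

```python
def chronological_subsets(docs, fractions=(0.0, 0.05, 0.1, 0.2, 0.5, 1.0)):
    """Chronologically ordered training subsets used for Figure 3.

    Each returned subset contains the earliest fraction f of the
    COVID-19 training data, so f = 0.0 corresponds to zero-shot
    evaluation on a model trained only on BioASQ data.
    """
    ordered = [d for _, d in sorted(docs, key=lambda x: x[0])]
    return {f: ordered[: int(f * len(ordered))] for f in fractions}
```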
3.6 Indexing of Major MeSH and Supplementary Concepts
As the literature becomes hyper-focused on specific topics in the context of the pandemic, classification into more fine-grained indexes becomes critical. Therefore, we also present an evaluation of simultaneous indexing of major MeSH and supplementary concepts. In this regard, the models trained on BioASQ are simply fine-tuned to detect the supplementary concepts of the COVID training set in addition to the major MeSH indexes. Supplementary concepts are added as new classes to the potential indexes.
As demonstrated in Table 4, the performance of the baselines is improved by fine-tuning them to detect supplementary concepts as well. This shows the importance of more fine-grained indexing. Compared to the baselines, our model improved even more with the aid of supplementary concepts.
4 CONCLUSION AND DISCUSSION
In this research, we propose a novel semantic indexing approach based on self-supervised deep representation learning models to tackle this task in the current health crisis. We present a case study on COVID-19 literature collected from recently indexed documents in PubMed. We compare the performance of our model with the state-of-the-art baselines based on flat and hierarchical measures. Our study shows that the presented self-supervised model outperforms the baselines with the small amount of labeled data available. We further evaluate the indexing of supplementary concepts along with the major MeSH indexes, demonstrating state-of-the-art performance. We also show that the indexing of supplementary concepts improves the MeSH indexing performance of our model, underlining the importance of more fine-grained categorization of documents in the current pandemic situation. In future work, we will continue our case study as COVID-19 documents are published and indexed in PubMed. We will mainly focus on improving the data efficiency and generalization of semantic indexing models, as the COVID-19 literature is rapidly evolving. We will also try sophisticated few- and zero-shot learning techniques to better handle newly introduced concepts.
Figure 1: Our proposed semantic indexing methodology consists of a self-supervised learning stage (Fig. a) and a fine-tuning stage (Figs. b and c). Fig. a) shows a bi-directional transformer model trained on unlabeled data (i.e. CORD-19) via masked language modeling and next sentence prediction tasks. Fig. b) Every PubMed document, along with a set of candidate indexes, is encoded via bi-directional transformer self-attention, and cross-attention between every input token/word and index is computed. Fig. c) The index-specific context vectors (computed by a weighted sum) are passed through a linear projection layer and a thresholding process to detect the final semantic indexes which should be assigned to the document. (Example abstract is from Yang et al. (2020). [39])
Figure 2: Semantic indexes of Coronavirus. The major MeSH and supplementary concepts form a Directed Acyclic Graph (DAG). In this figure, supplementary concepts have been added only for two of the nodes.
Table 1: Descriptive statistics for the self-supervised representation learning and supervised semantic indexing (training and evaluation) datasets.

Dataset      | No. of Documents | No. of MeSH Indexes  | No. of Supp. Concepts
COVID SSL    | 141,764          | -                    | -
BioASQ       | 2,501,982        | 27,114 (12.84/doc.)  | -
COVID Train  | 10,210           | 14,335 (16.6/doc.)   | 18 (1.97/doc.)
COVID Test   | 3,463            | 10,922 (17.9/doc.)   | 15 (1.92/doc.)
Table 3: MeSH indexing performance of our deep transformer and self-supervised learning based models along with the state-of-the-art ones, in terms of the LCA F-measure (hierarchical measure) as well as accuracy and micro and macro F-measure (flat measures).

Model                                | LCA-F  | Micro F1 | Macro-F | Accuracy
Medical Text Indexer (MTI) (Default) | 0.5083 | 0.6521   | 0.4553  | 0.4267
MTI (First Line Index)               | 0.5011 | 0.6152   | 0.5038  | 0.4419
Deep MeSH                            | 0.5732 | 0.6974   | 0.5024  | 0.4612
Attention MeSH                       | 0.5417 | 0.6571   | 0.4928  | 0.5147
iria                                 | 0.4542 | 0.4908   | 0.3491  | 0.2179
xgx                                  | 0.4266 | 0.6368   | 0.4934  | 0.5018
MeSHmallow                           | 0.3751 | 0.5172   | 0.3570  | 0.3916
BioTrans (Base) (w/o SSL)            | 0.4899 | 0.6013   | 0.3725  | 0.3309
BioTrans (Base) (w/ MLM SSL)         | 0.5211 | 0.6588   | 0.4733  | 0.5096
BioTrans (Large) (w/ MLM SSL)        | 0.5683 | 0.7071   | 0.5044  | 0.5319

10 https://github.com/BioASQ/Evaluation-Measures/tree/master/hierarchical
Table 4: Performance of simultaneous indexing of major MeSH and supplementary concepts in the context of COVID-19. The micro F-measure has been calculated for supplementary concepts and major MeSH separately and jointly.
1 https://www.nlm.nih.gov/mesh/meshhome.html
2 https://www.nlm.nih.gov/pubs/techbull/mj18/brief/mj18_updates_2018_baseline_stats.html
3 http://participants-area.bioasq.org/tools/BioASQword2vec
4 http://participants-area.bioasq.org/tools/
5 Note: in the pre-text task we also feed the potentially related indexes for every document, to learn latent representations based on the index-specific encodings and to reinforce the consistency between the pretext and downstream tasks.
6 We can regard this module as a weak classifier which filters out the negative data, which far outnumber the positive ones (29K total MeSH terms vs. 12.6 terms per document on average). Jin et al. show that doing so enhances the efficiency and performance of the indexing models, as the classifier only focuses on detecting the correct MeSH indexes within a subset of plausible ones.
7 https://www.biorxiv.org
8 https://www.medrxiv.org
9 https://github.com/tensorflow/tensor2tensor
FUNDING

The authors gratefully acknowledge the use of the services of the Jetstream cloud, funded by National Science Foundation (NSF) award 1445604, and the Cloud Technology Endowed Professorship.
References

George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, et al. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC bioinformatics, 16(1):138, 2015.

Minlie Huang, Aurélie Névéol, and Zhiyong Lu. Recommending mesh terms for annotating biomedical articles. Journal of the American Medical Informatics Association, 18(5):660-667, 2011.

James G Mork, Antonio Jimeno-Yepes, and Alan R Aronson. The nlm medical text indexer system for indexing biomedical literature. In BioASQ@ CLEF, 2013.

Qiao Jin, Haoyang Ding, Linfeng Li, Haitao Huang, Lei Wang, and Jun Yan. Tackling mesh indexing dataset shift with time-aware concept embedding learning. In International Conference on Database Systems for Advanced Applications, pages 474-488. Springer, 2020.

Shengwen Peng, Ronghui You, Hongning Wang, Chengxiang Zhai, Hiroshi Mamitsuka, and Shanfeng Zhu. Deepmesh: deep semantic representation for improving large-scale mesh indexing. Bioinformatics, 32(12):i70-i79, 2016.

Qiao Jin, Bhuwan Dhingra, William Cohen, and Xinghua Lu. Attentionmesh: Simple, effective and interpretable automatic mesh indexer. In Proceedings of the 6th BioASQ Workshop: A challenge on large-scale biomedical semantic indexing and question answering, pages 47-56, 2018.

Guangxu Xun, Kishlay Jha, Ye Yuan, Yaqing Wang, and Aidong Zhang. Meshprobenet: a self-attentive probe net for mesh indexing. Bioinformatics, 35(19):3794-3802, 2019.

Isabel Segura-Bedmar, Adrián Carruana, and Paloma Martínez. Labda at the 2016 bioasq challenge task 4a: Semantic indexing by using elasticsearch. In Proceedings of the Fourth BioASQ workshop, pages 16-22, 2016.

Ilya Zavorin, James Mork, and Dina Demner-Fushman. Using learning-to-rank to enhance nlm medical text indexer results. In Proceedings of the Fourth BioASQ workshop, pages 8-15, 2016.

Farhad Shokraneh and Tony Russell-Rose. Lessons from covid-19 to future evidence synthesis efforts: first living search strategy and out of date scientific publishing and indexing industry (submitted). Journal of Clinical Epidemiology, 2020.

Andre Esteva, Anuprit Kale, Romain Paulus, Kazuma Hashimoto, Wenpeng Yin, Dragomir Radev, and Richard Socher. Co-search: Covid-19 information retrieval with semantic search, question answering, and abstractive summarization. arXiv preprint arXiv:2006.09595, 2020.

Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen Voorhees, Lucy Lu Wang, and William R Hersh. Trec-covid: Rationale and structure of an information retrieval shared task for covid-19. Journal of the American Medical Informatics Association, 2020.

H Raghav Rao, Naga Vemprala, Patricia Akello, and Rohit Valecha. Retweets of officials' alarming vs reassuring messages during the covid-19 pandemic: Implications for crisis management. International Journal of Information Management, page 102187, 2020.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. Albert: A lite bert for self-supervised learning of language representations. In International Conference on Learning Representations, 2019.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240, 2020.

Eirini Papagiannopoulou, Yiannis Papanikolaou, Dimitris Dimitriadis, Sakis Lagopoulos, Grigorios Tsoumakas, Manos Laliotis, Nikos Markantonatos, and Ioannis Vlahavas. Large-scale semantic indexing and question answering in biomedicine. In Proceedings of the Fourth BioASQ workshop, pages 50-54, 2016.

George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artieres, Axel Ngonga, Norman Heino, Eric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. An overview of the bioasq large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16:138, 2015.

Bernd Müller, Christoph Poley, Jana Pössel, Alexandra Hagelstein, and Thomas Gübitz. Livivo-the vertical search engine for life sciences. Datenbank-Spektrum, 17(1):29-34, 2017.

Anastasios Nentidis, Konstantinos Bougiatiotis, Anastasia Krithara, Georgios Paliouras, and Ioannis Kakadiaris. Results of the fifth edition of the bioasq challenge. In BioNLP 2017, pages 48-57, 2017.

Francisco J Ribadas, Luis M De Campos, Víctor M Darriba, and Alfonso E Romero. Cole and utai at bioasq 2015: experiments with similarity based descriptor assignment. In Working Notes of CLEF 2015 - Conference and Labs of the Evaluation forum, Toulouse, France, September 8-11, 2015, 2015.

Ali Foroughi Pour, Maciej Pietrzak, Lori A Dalton, and Grzegorz A Rempała. High dimensional model representation of log-likelihood ratio: binary classification with expression data. BMC bioinformatics, 21:1-27, 2020.

Anastasios Nentidis, Konstantinos Bougiatiotis, Anastasia Krithara, and Georgios Paliouras. Results of the seventh edition of the bioasq challenge. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 553-568. Springer, 2019.

Q. Chen, A. Allot, and Z. Lu. Keep up with the latest coronavirus research. Nature, 579(7798):193, 2020.

Hsiao-Yu Tung, Hsiao-Wei Tung, Ersin Yumer, and Katerina Fragkiadaki. Self-supervised learning of motion capture. In Advances in Neural Information Processing Systems, pages 5236-5246, 2017.

Ishan Misra and Laurens van der Maaten. Self-supervised learning of pretext-invariant representations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6707-6717, 2020.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. Xlnet: Generalized autoregressive pretraining for language understanding. In Advances in neural information processing systems, pages 5753-5763, 2019.

Mirco Ravanelli, Jianyuan Zhong, Santiago Pascual, Pawel Swietojanski, Joao Monteiro, Jan Trmal, and Yoshua Bengio. Multi-task self-supervised learning for robust speech recognition. In ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6989-6993. IEEE, 2020.

Ammara Masood, Adel Al-Jumaily, and Khairul Anam. Self-supervised learning model for skin cancer diagnosis. In 2015 7th International IEEE/EMBS Conference on Neural Engineering (NER), pages 1012-1015. IEEE, 2015.

Javad Noorbakhsh, Saman Farahmand, Sandeep Namburi, Dennis Caruana, David Rimm, Mohammad Soltanieh-ha, Kourosh Zarringhalam, Jeffrey H Chuang, et al. Deep learning-based cross-classifications reveal conserved spatial behaviors within tumor histological images. bioRxiv, page 715656, 2020.

Morteza Safaei Pour, Antonio Mangino, Kurt Friday, Matthias Rathbun, Elias Bou-Harb, Farkhund Iqbal, Sagar Samtani, Jorge Crichigno, and Nasir Ghani. On data-driven curation, learning, and analysis for inferring evolving internet-of-things (iot) botnets in the wild. Computers & Security, 91:101707, 2020.

Antonio Mangino, Morteza Safaei Pour, and Elias Bou-Harb. Internet-scale insecurity of consumer internet of things: An empirical measurements perspective. ACM Transactions on Management Information Systems (TMIS).

Gonzalo De La Torre Parra, Paul Rad, Kim-Kwang Raymond Choo, and Nicole Beebe. Detecting internet of things attacks using distributed deep learning. Journal of Network and Computer Applications, page 102662, 2020.

Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations. arXiv preprint arXiv:2002.05709, 2020.

Tianlong Chen, Sijia Liu, Shiyu Chang, Yu Cheng, Lisa Amini, and Zhangyang Wang. Adversarial robustness: From self-supervised pre-training to fine-tuning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 699-708, 2020.

Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683, 2019.

Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. OpenAI Blog, 1(8):9, 2019.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in neural information processing systems, pages 5998-6008, 2017.

Xuemei Yang, Ning Dong, Edward Wai-Chi Chan, and Sheng Chen. Genetic cluster analysis of sars-cov-2 and the identification of those responsible for the major outbreaks in various countries. Emerging Microbes & Infections, 9(1):1287-1299, 2020.

Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Michael Kinney, Ziyang Liu, William Merrill, Paul Mooney, Dewey A. Murdick, Devvret Rishi, Jerry Sheehan, Zhihong Shen, Brandon Stilson, Alex D. Wade, Kuansan Wang, Christopher Wilhelm, Boya Xie, Douglas M. Raymond, Daniel S. Weld, Oren Etzioni, and Sebastian Kohlmeier. Cord-19: The covid-19 open research dataset. ArXiv, 2020.

Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. Mass: Masked sequence to sequence pre-training for language generation. In International Conference on Machine Learning, pages 5926-5936, 2019.

Iz Beltagy, Kyle Lo, and Arman Cohan. Scibert: A pretrained language model for scientific text. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3606-3611, 2019.

Xingheng Wang, Jun Cao, Yao Liu, Shi Gao, and Xue Deng. Text clustering based on the improved tfidf by the iterative algorithm. In Electrical & Electronics Engineering (EEESYM), 2012 IEEE Symposium on, pages 140-143. IEEE, 2012.

Jiaul H Paik. A novel tf-idf weighting scheme for effective ranking. In Proceedings of the 36th international ACM SIGIR conference on Research and development in information retrieval, pages 343-352. ACM, 2013.

Ramin Sahba, Nima Ebadi, Mo Jamshidi, and Paul Rad. Automatic text summarization using customizable fuzzy features and attention on the context and vocabulary. In 2018 World Automation Congress (WAC), pages 1-5. IEEE, 2018.

Nihar Bendre, Nima Ebadi, John J Prevost, and Peyman Najafirad. Human action performance using deep neuro-fuzzy recurrent attention model. IEEE Access, 8:57749-57761, 2020.

Ignazio Pillai, Giorgio Fumera, and Fabio Roli. Threshold optimisation for multi-label classifiers. Pattern Recognition, 46(7):2055-2065, 2013.

Benjamin Riedel, Isabelle Augenstein, Georgios P Spithourakis, and Sebastian Riedel. A simple but tough-to-beat baseline for the fake news challenge stance detection task. arXiv preprint arXiv:1707.03264, 2017.

Akshay Agrawal, Akshay Naresh Modi, Alexandre Passos, Allen Lavoie, Ashish Agarwal, Asim Shankar, Igor Ganichev, Josh Levenberg, Mingsheng Hong, Rajat Monga, et al. Tensorflow eager: A multi-stage, python-embedded dsl for machine learning. arXiv preprint arXiv:1903.01855, 2019.

Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

Yuan Yao, Lorenzo Rosasco, and Andrea Caponnetto. On early stopping in gradient descent learning. Constructive Approximation, 26(2):289-315, 2007.

Nima Ebadi, Brandon Lwowski, Mehrad Jaloli, and Paul Rad. Implicit life event discovery from call transcripts using temporal input transformation network. IEEE Access, 7:172178-172189, 2019.

Aris Kosmopoulos, Ioannis Partalas, Eric Gaussier, Georgios Paliouras, and Ion Androutsopoulos. Evaluation measures for hierarchical classification: a unified view and novel approaches. Data Mining and Knowledge Discovery, 29(3):820-865, 2015.
| [
"https://github.com/BioASQ/Evaluation-Measures/tree/master/hierarchical",
"https://github.com/tensorflow/tensor2tensor"
] |
[
"A Syntactic Neural Model for General-Purpose Code Generation",
"A Syntactic Neural Model for General-Purpose Code Generation"
] | [
"Pengcheng Yin pcyin@cs.cmu.edu \nLanguage Technologies Institute\nLanguage Technologies Institute Carnegie Mellon University\nCarnegie Mellon University\n\n",
"Graham Neubig gneubig@cs.cmu.edu \nLanguage Technologies Institute\nLanguage Technologies Institute Carnegie Mellon University\nCarnegie Mellon University\n\n"
] | [
"Language Technologies Institute\nLanguage Technologies Institute Carnegie Mellon University\nCarnegie Mellon University\n",
"Language Technologies Institute\nLanguage Technologies Institute Carnegie Mellon University\nCarnegie Mellon University\n"
] | [] | We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing datadriven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches. | 10.18653/v1/p17-1041 | [
"https://arxiv.org/pdf/1704.01696v1.pdf"
] | 12,718,048 | 1704.01696 | c8d0e13de2eaa09a928eff36b99d63f494c2f5ec |
A Syntactic Neural Model for General-Purpose Code Generation
Pengcheng Yin pcyin@cs.cmu.edu
Language Technologies Institute
Carnegie Mellon University
Graham Neubig gneubig@cs.cmu.edu
Language Technologies Institute
Carnegie Mellon University
We consider the problem of parsing natural language descriptions into source code written in a general-purpose programming language like Python. Existing datadriven methods treat this problem as a language generation task without considering the underlying syntax of the target programming language. Informed by previous work in semantic parsing, in this paper we propose a novel neural architecture powered by a grammar model to explicitly capture the target syntax as prior knowledge. Experiments find this an effective way to scale up to generation of complex programs from natural language descriptions, achieving state-of-the-art results that well outperform previous code generation and semantic parsing approaches.
Introduction
Every programmer has experienced the situation where they know what they want to do, but do not have the ability to turn it into a concrete implementation. For example, a Python programmer may want to "sort my list in descending order," but not be able to come up with the proper syntax sorted(my list, reverse=True) to realize his intention. To resolve this impasse, it is common for programmers to search the web in natural language (NL), find an answer, and modify it into the desired form (Brandt et al., 2009, 2010). However, this is time-consuming, and thus the software engineering literature is rife with methods to directly generate code from NL descriptions, mostly with hand-engineered methods highly tailored to specific programming languages (Balzer, 1985; Little and Miller, 2009; Gvero and Kuncak, 2015).
In parallel, the NLP community has developed methods for data-driven semantic parsing, which attempt to map NL to structured logical forms executable by computers. These logical forms can be general-purpose meaning representations (Clark and Curran, 2007; Banarescu et al., 2013), formalisms for querying knowledge bases (Tang and Mooney, 2001; Zettlemoyer and Collins, 2005; Berant et al., 2013), and instructions for robots or personal assistants (Quirk et al., 2015), among others. While these methods have the advantage of being learnable from data, compared to the programming languages (PLs) in use by programmers, the domain-specific languages targeted by these works have a schema and syntax that is relatively simple.
Recently, Ling et al. (2016) have proposed a data-driven code generation method for high-level, general-purpose PLs like Python and Java. This work treats code generation as a sequence-to-sequence modeling problem, and introduces methods to generate words from character-level models and to copy variable names from input descriptions. However, unlike most work in semantic parsing, it does not consider the fact that code must consist of well-defined programs in the target syntax.
In this work, we propose a data-driven syntax-based neural network model tailored for generation of general-purpose PLs like Python. In order to capture the strong underlying syntax of the PL, we define a model that transduces an NL statement into an Abstract Syntax Tree (AST; Fig. 1(a), § 2) for the target PL. ASTs can be deterministically generated for all well-formed programs using standard parsers provided by the PL, and thus give us a way to obtain syntax information with minimal engineering. Once we generate an AST, we can use deterministic generation tools to convert the AST into surface code. We hypothesize that such a structured approach has two benefits. First, we hypothesize that structure can be used to constrain our search space, ensuring generation of well-formed code. To this end, we propose a syntax-driven neural code generation model. The backbone of our approach is a grammar model ( § 3) which formalizes the generation story of a derivation AST as a sequential application of actions that either apply production rules ( § 3.1) or emit terminal tokens ( § 3.2). The underlying syntax of the PL is therefore encoded in the grammar model a priori as the set of possible actions. Our approach frees the model from recovering the underlying grammar from limited training data, and instead enables the system to focus on learning the compositionality among existing grammar rules. Xiao et al. (2016) have noted that this imposition of structure on neural models is useful for semantic parsing, and we expect this to be even more important for general-purpose PLs where the syntax trees are larger and more complex.
Second, we hypothesize that structural information helps to model information flow within the neural network, which naturally reflects the recursive structure of PLs. To test this, we extend a standard recurrent neural network (RNN) decoder to allow for additional neural connections which reflect the recursive structure of an AST ( § 4.2). As an example, when expanding the node in Fig. 1(a), we make use of the information from both its parent and left sibling (the dashed rectangle). This enables us to locally pass information of relevant code segments via neural network connections, resulting in more confident predictions.
Experiments ( § 5) on two Python code generation tasks show 11.7% and 9.3% absolute improvements in accuracy against the state-of-the-art system (Ling et al., 2016). Our model also gives competitive performance on a standard semantic parsing benchmark.
The Code Generation Problem
Given an NL description x, our task is to generate the code snippet c in a modern PL based on the intent of x. We attack this problem by first generating the underlying AST. We define a probabilistic grammar model of generating an AST y given x: p(y|x). The best-possible AST ŷ is then given by

    ŷ = arg max_y p(y|x).   (1)
ŷ is then deterministically converted to the corresponding surface code c (we use the astor library to convert ASTs into Python code). While this paper uses examples from Python code, our method is PL-agnostic. Before detailing our approach, we first present a brief introduction to the Python AST and its underlying grammar. The Python abstract grammar contains a set of production rules, and an AST is generated by applying several production rules, each composed of a head node and multiple child nodes. For instance, the first rule in Tab. 1 is used to generate the function call sorted(·) in Fig. 1(a). It consists of a head node of type Call, and three child nodes of type expr, expr* and keyword*, respectively. Labels of each node are noted within brackets. In an AST, non-terminal nodes sketch the general structure of the target code, while terminal nodes can be categorized into two types: operation terminals and variable terminals. Operation terminals correspond to basic arithmetic operations like AddOp. Variable terminal nodes store values for variables and constants of built-in data types. For instance, all terminal nodes in Fig. 1(a) are variable terminal nodes.
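For readers unfamiliar with Python ASTs, a minimal sketch using the standard ast module (our own illustration; the paper's pipeline uses astor only for the reverse AST-to-code direction) shows the Call structure described above:

    import ast

    # Parse the running example into a Python Abstract Syntax Tree.
    tree = ast.parse("sorted(my_list, reverse=True)", mode="eval")

    # ast.dump renders the tree: a Call head node whose children hold the
    # function (Name), the positional args, and the keyword args.
    print(ast.dump(tree.body))
    # Roughly (on Python 3.8+): Call(func=Name(id='sorted', ctx=Load()),
    #   args=[Name(id='my_list', ctx=Load())],
    #   keywords=[keyword(arg='reverse', value=Constant(value=True))])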
Grammar Model
Before detailing our neural code generation method, we first introduce the grammar model at its core. Our probabilistic grammar model defines the generative story of a derivation AST. We factorize the generation process of an AST into sequential application of actions of two types:
• APPLYRULE[r] applies a production rule r to the current derivation tree;
• GENTOKEN[v] populates a variable terminal node by appending a terminal token v.
Fig. 1(b) shows the generation process of the target AST in Fig. 1(a). Each node in Fig. 1(b) indicates an action. Action nodes are connected by solid arrows which depict the chronological order of the action flow. The generation proceeds in depth-first, left-to-right order (dotted arrows represent parent feeding, explained in § 4.2.1).
Formally, under our grammar model, the probability of generating an AST y is factorized as:
    p(y|x) = ∏_{t=1}^{T} p(a_t | x, a_{<t}),   (2)
where a_t is the action taken at time step t, and a_{<t} is the sequence of actions before t. We will explain how to compute Eq. (2) in § 4. Put simply, the generation process begins from a root node at t_0, and proceeds by the model choosing APPLYRULE actions to generate the overall program structure from a closed set of grammar rules; then, at leaves of the tree corresponding to variable terminals, the model switches to GENTOKEN actions to generate variables or constants from the open set. We describe this process in detail below.
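To make Eq. (2) concrete, here is a minimal sketch (our own illustration, not the paper's code) that scores a candidate action sequence in log space; p_fn is a hypothetical callback onto the trained model:

    import math

    def sequence_log_prob(actions, p_fn):
        """Score an action sequence under Eq. (2), in log space.

        `actions` is the derivation a_1..a_T; `p_fn(a_t, history)` is a
        hypothetical callback returning p(a_t | x, a_<t) from the model.
        """
        log_p, history = 0.0, []
        for a_t in actions:
            log_p += math.log(p_fn(a_t, history))
            history.append(a_t)
        return log_p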
APPLYRULE Actions
APPLYRULE actions generate program structure, expanding the current node (the frontier node at time step t, n_{f_t}) in a depth-first, left-to-right traversal of the tree. Given a fixed set of production rules, APPLYRULE chooses a rule r from the subset whose head matches the type of n_{f_t}, and uses r to expand n_{f_t} by appending all child nodes specified by the selected production. As an example, in Fig. 1(b), the rule Call → expr . . . expands the frontier node Call at time step t_4, and its three child nodes expr, expr* and keyword* are added to the derivation. APPLYRULE actions grow the derivation AST by appending nodes. When a variable terminal node (e.g., str) is added to the derivation and becomes the frontier node, the grammar model switches to GENTOKEN actions to populate the variable terminal with tokens.
Unary Closure: Sometimes, generating an AST requires applying a chain of unary productions. For instance, it takes three time steps (t_9 to t_11) to generate the sub-structure expr* → expr → Name → str in Fig. 1(a). This can be effectively reduced to one APPLYRULE action by taking the closure of the chain of unary productions and merging them into a single rule: expr* → str. Unary closures reduce the number of actions needed, but potentially increase the size of the grammar. In our experiments we tested our model both with and without unary closures ( § 5).
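A minimal sketch of the closure operation, assuming a toy rule representation of our own (a dict from head symbol to child symbols) rather than the paper's data structures:

    def unary_closure(rules):
        """Collapse chains of unary productions into single rules.

        `rules` is a hypothetical dict mapping a head symbol to its list
        of child symbols. Chains like expr* -> expr -> Name -> str are
        merged into one rule expr* -> str.
        """
        closed = {}
        for head, children in rules.items():
            # Follow the chain while each step is a unary rule.
            tail = children
            while len(tail) == 1 and tail[0] in rules and len(rules[tail[0]]) == 1:
                tail = rules[tail[0]]
            closed[head] = tail
        return closed

    print(unary_closure({"expr*": ["expr"], "expr": ["Name"], "Name": ["str"]}))
    # {'expr*': ['str'], 'expr': ['str'], 'Name': ['str']}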
GENTOKEN Actions
Once we reach a frontier node n_{f_t} that corresponds to a variable type (e.g., str), GENTOKEN actions are used to fill this node with values. For general-purpose PLs like Python, variables and constants have values with one or multiple tokens. For instance, a node that stores the name of a function (e.g., sorted) has a single token, while a node that denotes a string constant (e.g., a='hello world') could have multiple tokens. Our model copes with both scenarios by firing GENTOKEN actions at one or more time steps. At each time step, GENTOKEN appends one terminal token to the current frontier variable node. A special </n> token is used to "close" the node. The grammar model then proceeds to the new frontier node.
Terminal tokens can be generated from a predefined vocabulary, or be directly copied from the input NL. This is motivated by the observation that the input description often contains out-of-vocabulary (OOV) variable names or literal values that are directly used in the target code. For instance, in our running example the variable name my_list can be directly copied from the input at t_12. We give implementation details in § 4.2.2.
Estimating Action Probabilities
We estimate action probabilities in Eq. (2) using attentional neural encoder-decoder models with an information flow structured by the syntax trees.
Encoder
For an NL description x consisting of n words {w_i}_{i=1}^{n}, the encoder computes a context-sensitive embedding h_i for each w_i using a bidirectional Long Short-Term Memory (LSTM) network (Hochreiter and Schmidhuber, 1997), similar to the setting in Bahdanau et al. (2014). See the supplementary materials for detailed equations.
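A minimal encoder sketch in PyTorch (our own illustration; the layer sizes are placeholders, not the paper's configuration):

    import torch
    import torch.nn as nn

    # Sizes below are illustrative placeholders.
    emb = nn.Embedding(num_embeddings=5000, embedding_dim=128)
    bilstm = nn.LSTM(input_size=128, hidden_size=128,
                     bidirectional=True, batch_first=True)

    word_ids = torch.tensor([[4, 17, 23, 8]])   # one tokenized NL description x
    h, _ = bilstm(emb(word_ids))                # h[0, i] is the embedding h_i of w_i
    print(h.shape)                              # torch.Size([1, 4, 256])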
Decoder
The decoder uses an RNN to model the sequential generation process of an AST defined in Eq. (2). Each action step in the grammar model naturally grounds to a time step in the decoder RNN. Therefore, the action sequence in Fig. 1(b) can be interpreted as unrolled RNN time steps, with solid arrows indicating RNN connections. The RNN maintains an internal state to track the generation process ( § 4.2.1), which is then used to compute action probabilities p(a_t | x, a_{<t}) ( § 4.2.2).
Tracking Generation States
Our implementation of the decoder resembles a vanilla LSTM, with additional neural connections (parent feeding, Fig. 1(b)) to reflect the topological structure of an AST. The decoder's internal hidden state at time step t, s_t, is given by:

    s_t = f_LSTM([a_{t−1} : c_t : p_t : n_{f_t}], s_{t−1}),   (3)
where f_LSTM(·) is the LSTM update function and [:] denotes vector concatenation. s_t will then be used to compute action probabilities p(a_t | x, a_{<t}) in Eq. (2). Here, a_{t−1} is the embedding of the previous action; c_t is a context vector retrieved from the input encodings {h_i} via soft attention; p_t is a vector that encodes the information of the parent action; and n_{f_t} denotes the node type embedding of the current frontier node n_{f_t} (we maintain an embedding for each node type). Intuitively, feeding the decoder the information of n_{f_t} helps the model keep track of the frontier node to expand.

Action Embedding a_t: We maintain two action embedding matrices, W_R and W_G. Each row in W_R (W_G) corresponds to an embedding vector for an action APPLYRULE[r] (GENTOKEN[v]).

Context Vector c_t: The decoder RNN uses soft attention to retrieve a context vector c_t from the input encodings {h_i} pertaining to the prediction of the current action. We follow Bahdanau et al. (2014) and use a Deep Neural Network (DNN) with a single hidden layer to compute attention weights.

Parent Feeding p_t: Our decoder RNN uses additional neural connections to directly pass information from parent actions. For instance, when computing s_9, the information from its parent action step t_4 will be used. Formally, we define the parent action step p_t as the time step at which the frontier node n_{f_t} is generated. As an example, for t_9, its parent action step p_9 is t_4, since n_{f_9} is the node generated at t_4 by the APPLYRULE[Call → . . .] action.
We model parent information p_t from two sources: (1) the hidden state of the parent action, s_{p_t}, and (2) the embedding of the parent action, a_{p_t}; p_t is their concatenation. The parent feeding scheme enables the model to utilize the information of parent code segments to make more confident predictions. Similar approaches to injecting parent information were also explored in the SEQ2TREE model of Dong and Lapata (2016), which generates tree-structured outputs by conditioning on the hidden states of parent non-terminals, while our parent feeding uses the states of parent actions.
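A sketch of the parent feeding computation, assuming hypothetical per-step caches of decoder states and action embeddings (our own illustration):

    import numpy as np

    def parent_feeding(t, parent_step, states, action_embs):
        """Assemble p_t for Eq. (3) as [s_{p_t} : a_{p_t}].

        `parent_step[t]` records the time step that generated the current
        frontier node (e.g. p_9 = 4 in the running example); `states` and
        `action_embs` are hypothetical per-step caches.
        """
        p = parent_step[t]
        return np.concatenate([states[p], action_embs[p]])

    states = {4: np.zeros(256)}
    action_embs = {4: np.ones(128)}
    print(parent_feeding(9, {9: 4}, states, action_embs).shape)   # (384,)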
Calculating Action Probabilities
In this section we explain how action probabilities p(a_t | x, a_{<t}) are computed based on s_t.

APPLYRULE: The probability of applying rule r as the current action a_t is given by a softmax (bias terms are omitted in all softmax equations):

    p(a_t = APPLYRULE[r] | x, a_{<t}) = softmax(W_R · g(s_t)) · e(r),   (4)
where g(·) is the non-linearity tanh(W · s_t + b), and e(r) is the one-hot vector for rule r.

GENTOKEN: As in § 3.2, a token v can be generated from a predefined vocabulary or copied from the input, defined as the marginal probability:

    p(a_t = GENTOKEN[v] | x, a_{<t}) = p(gen | x, a_{<t}) p(v | gen, x, a_{<t}) + p(copy | x, a_{<t}) p(v | copy, x, a_{<t}).

The selection probabilities p(gen|·) and p(copy|·) are given by softmax(W_S · s_t). The probability of generating v from the vocabulary, p(v | gen, x, a_{<t}), is defined similarly to Eq. (4), except that we use the GENTOKEN embedding matrix W_G and concatenate the context vector c_t with s_t as input. To model the copy probability, we follow recent advances in modeling the copying mechanism in neural networks (Gu et al., 2016; Jia and Liang, 2016; Ling et al., 2016), and use a pointer network to compute the probability of copying the i-th word from the input by attending to input representations {h_i}:
    p(w_i | copy, x, a_{<t}) = exp(ω(h_i, s_t, c_t)) / Σ_{i′=1}^{n} exp(ω(h_{i′}, s_t, c_t)),

where ω(·) is a DNN with a single hidden layer. Specifically, if w_i is an OOV word (e.g., my_list, which is represented by a special <unk> token in encoding), we directly copy the actual word w_i to the derivation.
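The two probability computations above can be sketched in NumPy as follows; all weight matrices are randomly initialized placeholders, and p(v|gen) / p(v|copy) are stubbed with constants rather than computed from real embeddings:

    import numpy as np

    def softmax(z):
        z = z - z.max()
        e = np.exp(z)
        return e / e.sum()

    rng = np.random.default_rng(0)
    s_t = rng.standard_normal(256)                 # decoder state
    W, b = rng.standard_normal((256, 256)), np.zeros(256)
    W_R = rng.standard_normal((100, 256))          # one row per production rule

    # Eq. (4): distribution over APPLYRULE actions.
    g = np.tanh(W @ s_t + b)
    p_rules = softmax(W_R @ g)                     # p_rules[r] = p(APPLYRULE[r] | x, a_<t)

    # GENTOKEN: marginal over generating from the vocabulary vs. copying.
    W_S = rng.standard_normal((2, 256))
    p_gen, p_copy = softmax(W_S @ s_t)
    p_v_gen, p_v_copy = 0.10, 0.25                 # placeholders for p(v|gen), p(v|copy)
    p_token = p_gen * p_v_gen + p_copy * p_v_copy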
Training and Inference
Given a dataset of pairs of NL descriptions x_i and code snippets c_i, we parse c_i into its AST y_i and decompose y_i into a sequence of oracle actions under the grammar model. The model is then optimized by maximizing the log-likelihood of the oracle action sequence. At inference time, we use beam search to approximate the best AST ŷ in Eq. (1). See the supplementary materials for the pseudo-code of the inference algorithm.
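A hedged sketch of the beam-search inference (the actual Algorithm 1 is in the supplementary materials); next_actions, apply and is_complete are hypothetical hooks onto the grammar model, not the paper's API:

    import heapq

    def beam_search(init_ast, next_actions, apply, is_complete, K=15, max_steps=300):
        """Approximate arg max_y p(y|x) with a beam of K hypotheses.

        `next_actions(ast)` yields (action, log_prob) pairs for the frontier
        node, `apply(ast, action)` returns the extended AST, and
        `is_complete(ast)` is true when no frontier node remains.
        """
        beam, completed = [(0.0, init_ast)], []
        for _ in range(max_steps):
            candidates = []
            for score, hyp in beam:
                if is_complete(hyp):
                    completed.append((score, hyp))
                    continue
                for action, log_p in next_actions(hyp):
                    candidates.append((score + log_p, apply(hyp, action)))
            if not candidates:
                break
            # Keep the top-K scored new hypotheses.
            beam = heapq.nlargest(K, candidates, key=lambda c: c[0])
        completed.extend(c for c in beam if is_complete(c[1]))
        return max(completed, key=lambda c: c[0])[1] if completed else None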
IFTTT dataset (Quirk et al., 2015) is a domain-specific benchmark that provides an interesting side comparison. Different from HS and DJANGO, which are in a general-purpose PL, programs in IFTTT are written in a domain-specific language used by the IFTTT task automation App. Users of the App write simple instructions (e.g., If Instagram.AnyNewPhotoByYou Then Dropbox.AddFileFromURL) with NL descriptions (e.g., "Autosave your Instagram photos to Dropbox"). Each statement inside the If or Then clause consists of a channel (e.g., Dropbox) and a function (e.g., AddFileFromURL); like Beltagy and Quirk (2016), we strip function parameters since they are mostly specific to users. This simple structure results in much more concise ASTs (7 nodes on average). Because all examples are created by ordinary App users, the dataset is highly noisy, with input NL very loosely connected to target ASTs. The authors thus provide a high-quality filtered test set, where each example is verified by at least three annotators. We use this set for evaluation. Also note that IFTTT's grammar has more productions (Tab. 2), but this does not imply that its grammar is more complex: for HS and DJANGO, terminal tokens are generated by GENTOKEN actions, whereas for IFTTT all the code is generated directly by APPLYRULE actions.
Setup
Preprocessing: All input descriptions are tokenized using NLTK. We perform simple canonicalization for DJANGO, such as replacing quoted strings in the inputs with placeholders. See the supplementary materials for details. We extract unary closures whose frequency is larger than a threshold k (k = 30 for HS and 50 for DJANGO).
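A minimal sketch of the quoted-string canonicalization for DJANGO; the STR_k placeholder format is our own choice, not necessarily the paper's:

    import re

    def canonicalize(nl):
        """Replace quoted string literals with indexed placeholders."""
        slots = {}
        def repl(m):
            key = f"STR_{len(slots)}"
            slots[key] = m.group(0)   # remember the literal for post-processing
            return key
        nl = re.sub(r"'[^']*'|\"[^\"]*\"", repl, nl)
        return nl, slots

    print(canonicalize("verbose name is a string 'cache entry'"))
    # ('verbose name is a string STR_0', {'STR_0': "'cache entry'"})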
Configuration: The size of all embeddings is 128, except for node type embeddings, whose size is 64. The dimensions of the RNN states and hidden layers are 256 and 50, respectively. Since our datasets are relatively small for a data-hungry neural model, we impose strong regularization using recurrent dropout (Gal and Ghahramani, 2016), together with standard dropout layers added to the inputs and outputs of the decoder RNN. We validate the dropout probability from {0, 0.2, 0.3, 0.4}. For decoding, we use a beam size of 15.
Results
Evaluation results for Python code generation tasks are listed in Tab. 3. Numbers for our systems are averaged over three runs. (A note on the two evaluation metrics: accuracy only measures exact match and thus lacks the ability to give credit to semantically correct code that is different from the reference, while it is not clear whether BLEU provides an appropriate proxy for measuring semantics in the code generation task. A more intriguing metric would be directly measuring semantic/functional code equivalence, for which we present a pilot study at the end of this section (cf. Error Analysis). We leave exploring more sophisticated metrics, e.g. based on static code analysis, as future work.)

We compare primarily with two approaches: (1) the Latent Predictor Network (LPN), a state-of-the-art sequence-to-sequence code generation model (Ling et al., 2016), and (2) SEQ2TREE, a neural semantic parsing model (Dong and Lapata, 2016). SEQ2TREE generates trees one node at a time, and the target grammar is not explicitly modeled a priori, but implicitly learned from data. We test both the original SEQ2TREE model released by the authors and our revised one (SEQ2TREE-UNK) that uses unknown word replacement to handle rare words (Luong et al., 2015). For completeness, we also compare with a strong neural machine translation (NMT) system (Neubig, 2015) using a standard encoder-decoder architecture with attention and unknown word replacement (for NMT, we also attempted to find the best-scoring syntactically correct predictions in the size-5 beam, but this did not yield a significant improvement over the NMT results in Tab. 3), and include numbers from other baselines used in Ling et al. (2016). On the HS dataset, which has relatively large ASTs, we use unary closure for our model and SEQ2TREE; for DJANGO we do not.

System Comparison
As shown in Tab. 3, our model registers 11.7% and 9.3% absolute improvements over LPN in accuracy on HS and DJANGO. This boost in performance strongly indicates the importance of modeling grammar in code generation. Among the baselines, we find LPN outperforms the others in most cases. We also note that SEQ2TREE achieves a decent accuracy of 13.6% on HS, which is due to the effect of unknown word replacement, since we only achieved 1.5% without it. A closer comparison with SEQ2TREE is insightful for understanding the advantage of our syntax-driven approach, since both SEQ2TREE and our system output ASTs: (1) SEQ2TREE predicts one node at each time step, and requires additional "dummy" nodes to mark the boundary of a subtree. The sheer number of nodes in target ASTs makes the prediction process error-prone. In contrast, the APPLYRULE actions of our grammar model allow for generating multiple nodes at a single time step. Empirically, we found that on HS, SEQ2TREE takes more than 300 time steps on average to generate a target AST, while our model takes only 170 steps.
(2) SEQ2TREE does not directly use productions in the grammar, which possibly leads to grammatically incorrect ASTs and thus empty code outputs. We observe that the ratios of grammatically incorrect ASTs predicted by SEQ2TREE on HS and DJANGO are 21.2% and 10.9%, respectively, while our system guarantees grammaticality.

Ablation Study
We also ablated our best-performing models to analyze the contribution of each component. "-frontier embed." removes the frontier node embedding n_{f_t} from the decoder RNN inputs (Eq. (3)). This yields worse results on DJANGO while giving slight improvements in accuracy on HS. This is probably because the grammar of HS has fewer node types, and thus the RNN is able to keep track of n_{f_t} without depending on its embedding. Next, "-parent feed." removes the parent feeding mechanism. The accuracy drops significantly on HS, with a marginal deterioration on DJANGO. This result is interesting because it suggests that parent feeding is more important when the ASTs are larger, which will be the case when handling more complicated code generation tasks like HS. Finally, "-copy terminals" removes the pointer network used for copying in GENTOKEN actions.
Table 4: Results on IFTTT (accuracies at the channel and full parse tree levels).

Method | Channel | Full Tree
-- Classical Methods --
posclass (Quirk et al., 2015) | 81.4 | 71.0
LR (Beltagy and Quirk, 2016) | 88.8 | 82.5
-- Neural Network Methods --
NMT | 87.7 | 77.7
NN (Beltagy and Quirk, 2016) | 88.0 | 74.3
SEQ2TREE (Dong and Lapata, 2016) | 89 |

The results with and without unary closure demonstrate that, interestingly, it is effective on HS but not on DJANGO. We conjecture that this is because on HS it significantly reduces the number of actions from 173 to 142 (cf. Tab. 2), with the number of productions in the grammar remaining unchanged. In contrast, DJANGO has a broader domain, and thus unary closure results in more productions in the grammar (237 for DJANGO vs. 100 for HS), increasing sparsity.

Performance by the size of AST
We further investigate our model's performance w.r.t. the size of the gold-standard ASTs in Figs. 3 and 4. Not surprisingly, the performance drops when the size of the reference ASTs increases. Additionally, on the HS dataset the BLEU score still remains at around 50 even when the size of ASTs grows to 200, indicating that our proposed syntax-driven approach is robust for long code segments.

Domain Specific Code Generation
Although this is not the focus of our work, evaluation on IFTTT brings us closer to a standard semantic parsing setting, which helps to investigate similarities and differences between the generation of more complicated general-purpose code and of more limited-domain, simpler code. Tab. 4 shows the results, following the evaluation protocol of Beltagy and Quirk (2016) for accuracies at both the channel and full parse tree (channel + function) levels. Our full model performs on par with existing neural network-based methods, while outperforming other neural models in full tree accuracy (82.0%). This score is close to the best classical method (LR), which is based on a logistic regression model with rich hand-engineered features (e.g., brown clusters and paraphrase). Also note that the performance gap between NMT and the other neural models is much smaller than in the results of Tab. 3. This suggests that general-purpose code generation is more challenging than the simpler IFTTT setting, and therefore modeling structural information is more helpful.

Case Studies
We present output examples in Tab. 5. On HS, we observe that most of the time our model gives correct predictions by filling learned code templates from training data with arguments (e.g., cost) copied from input. However, we do find interesting examples indicating that the model learns to generalize beyond trivial copying. For instance, the first example is one that our model predicted wrong: it generated code block A instead of the gold B (it also missed a function definition not shown here). However, we find that block A actually conveys part of the input intent by destroying all, not some, of the minions. Since we are unable to find code block A in the training data, it is clear that the model has learned to generalize to some extent from multiple training card examples with similar semantics or structure.
The next two examples are from DJANGO. The first one shows that the model learns the usage of common API calls (e.g., os.path.join), and how to populate the arguments by copying from inputs. The second example (reproduced below from Tab. 5) illustrates the difficulty of generating code with complex nested structures like lambda functions, a scenario worth further investigation in future studies:

input: self.plural is a lambda function with an argument n, which returns the result of the boolean expression n not equal to integer 1
pred.: self.plural = lambda n: len(n)
ref.: self.plural = lambda n: int(n != 1)

More examples are attached in the supplementary materials.

Error Analysis
To understand the sources of errors and how good our evaluation metric (exact match) is, we randomly sampled and labeled 100 and 50 failed examples (with accuracy = 0) from DJANGO and HS, respectively. We found that around 2% of these examples in the two datasets are actually semantically equivalent. These examples include:
(1) using different parameter names when defining a function; (2) omitting (or adding) default values of parameters in function calls. While the rarity of such examples suggests that our exact match metric is reasonable, more advanced evaluation metrics based on statistical code analysis are definitely intriguing future work.
For DJANGO, we found that 30% of failed cases were due to errors where the pointer network failed to appropriately copy a variable name into the correct position. 25% were because the generated code only partially implemented the required functionality. 10% and 5% of errors were due to malformed English inputs and preprocessing errors, respectively. The remaining 30% of examples were errors stemming from multiple sources, or errors that could not be easily categorized into the above. For HS, we found that all failed card examples were due to partial implementation errors, such as the one shown in Table 5.
Related Work
Code Generation and Analysis
Most existing work on code generation focuses on generating code for domain-specific languages (DSLs) (Kushman and Barzilay, 2013; Raza et al., 2015; Manshadi et al., 2013), with neural network-based approaches recently explored (Parisotto et al., 2016; Balog et al., 2016). For general-purpose code generation, besides the general framework of Ling et al. (2016), existing methods often use language- and task-specific rules and strategies (Lei et al., 2013; Raghothaman et al., 2016). A similar line is to use NL queries for code retrieval (Allamanis et al., 2015). The reverse task of generating NL summaries from source code has also been explored (Oda et al., 2015; Iyer et al., 2016). Finally, there are probabilistic models of source code (Maddison and Tarlow, 2014; Nguyen et al., 2013). The most relevant work is Allamanis et al. (2015), which uses a factorized model to measure semantic relatedness between NL and ASTs for code retrieval, while our model tackles the more challenging generation task.

Semantic Parsing
Our work is related to the general topic of semantic parsing, where the target logical forms can be viewed as DSLs. The parsing process is often guided by grammatical formalisms like combinatory categorial grammars (Kwiatkowski et al., 2013; Artzi et al., 2015), dependency-based syntax (Liang et al., 2011; Pasupat and Liang, 2015), or task-specific formalisms (Clarke et al., 2010; Yih et al., 2015; Krishnamurthy et al., 2016; Misra et al., 2015; Mei et al., 2016). Recently, there have been efforts in designing neural network-based semantic parsers (Misra and Artzi, 2016; Dong and Lapata, 2016; Neelakantan et al., 2016; Yin et al., 2016). Several approaches have been proposed to utilize grammar knowledge in a neural parser, such as augmenting the training data by generating examples guided by the grammar (Kociský et al., 2016; Jia and Liang, 2016). Another line of work used a neural decoder which constrains the space of next valid tokens in the query language for question answering. Finally, the structured prediction approach proposed by Xiao et al. (2016) is closely related to our model in using the underlying grammar as prior knowledge to constrain the generation process of derivation trees, while our method is based on a unified grammar model which jointly captures production rule application and terminal symbol generation, and scales to general-purpose code generation tasks.
Conclusion
This paper proposes a syntax-driven neural code generation approach that generates an abstract syntax tree by sequentially applying actions from a grammar model. Experiments on both code generation and semantic parsing tasks demonstrate the effectiveness of our proposed approach.
Figure 1: (a) The Abstract Syntax Tree (AST) for the given example code. Dashed nodes denote terminals. Nodes are labeled with the time steps during which they are generated. (b) The action sequence (up to t_14) used to generate the AST in (a).
Figure 2: Illustration of a decoder time step (t = 9).
Figures 3 and 4: Performance w.r.t. reference AST size on HS.
Table 1: Example production rules for common Python statements (Python Software Foundation, 2016).

Production Rule | Role | Explanation
Call → expr[func] expr*[args] keyword*[keywords] | Function Call | func: the function to be invoked; args: arguments list; keywords: keyword arguments list
If → expr[test] stmt*[body] stmt*[orelse] | If Statement | test: condition expression; body: statements inside the If clause; orelse: elif or else statements
For → expr[target] expr*[iter] stmt*[body] stmt*[orelse] | For Loop | target: iteration variable; iter: enumerable to iterate over; body: loop body; orelse: else statements
FunctionDef → identifier[name] arguments*[args] stmt*[body] | Function Def. | name: function name; args: function arguments; body: function body
Table 2: Statistics of datasets and associated grammars.

Experimental Evaluation

Datasets and Metrics
HEARTHSTONE (HS) dataset (Ling et al., 2016) is a collection of Python classes that implement cards for the card game HearthStone. Each card comes with a set of fields (e.g., name, cost, and description), which we concatenate to create the input sequence. This dataset is relatively difficult: input descriptions are short, while the target code is in complex class structures, with each AST having 137 nodes on average.

DJANGO dataset (Oda et al., 2015) is a collection of lines of code from the Django web framework, each with a manually annotated NL description. Compared with the HS dataset, where card implementations are somewhat homogeneous, examples in DJANGO are more diverse, spanning a wide variety of real-world use cases like string manipulation, IO operations, and exception handling.
Table 3: Results on two Python code generation tasks. † Results previously reported in Ling et al. (2016).
Table 5: Predicted examples from HS (1st) and DJANGO. Copied contents (copy probability > 0.9) are highlighted.
Acknowledgment

We are grateful to Wang Ling for his generous help with LPN and setting up the benchmark. We also thank Li Dong for helping with SEQ2TREE and for insightful discussions.

Supplementary Materials

A Encoder LSTM Equations

Suppose the input natural language description x consists of n words {w_i}_{i=1}^{n}. Let w_i denote the embedding of w_i. We use two LSTMs to process x in forward and backward order, and obtain the sequences of hidden states {→h_i}_{i=1}^{n} and {←h_i}_{i=1}^{n} in the two directions; the two directional states are combined (e.g., by concatenation) into the context-sensitive embedding h_i of each word.

B Inference Algorithm

Given an NL description, we approximate the best AST ŷ in Eq. (1) using beam search. The inference procedure is listed in Algorithm 1. We maintain a beam of size K. The beam is initialized with one hypothesis AST with a single root node (line 2). At each time step, the decoder enumerates over all hypotheses in the beam. For each hypothesis AST, we first find its frontier node n_{f_t} (line 6). If n_{f_t} is a non-terminal node, we collect all syntax rules r with n_{f_t} as the head node into the actions set (line 10). If n_{f_t} is a variable terminal node, we add all terminal tokens in the vocabulary and the input description as candidate actions (line 13). We apply each candidate action on the current hypothesis AST to generate a new hypothesis (line 15). We then rank all newly generated hypotheses and keep the top-K scored ones in the beam. A complete hypothesis AST is generated when it has no frontier node. We then convert the top-scored complete AST into the surface code (lines 18-19). We remark that our inference algorithm can be implemented efficiently by expanding multiple hypotheses (lines 5-16) simultaneously using mini-batching on GPU.

C Dataset Preprocessing

Infrequent Words: We replace word types whose frequency is lower than d with a special <unk> token (d = 3 for DJANGO, 3 for HS and 2 for IFTTT).

Canonicalization: We perform simple canonicalization for the DJANGO dataset: (1) We observe that input descriptions often come with quoted string literals (e.g., verbose name is a string 'cache entry'). We therefore replace quoted strings with indexed placeholders using regular expressions. After decoding, we run a post-processing step to replace all placeholders with their actual values. (2) For descriptions with cascading variable references (e.g., call method self.makekey), we append, after the whole variable name, its tokens split on '.' (e.g., append self and makekey after self.makekey). This gives the pointer network the flexibility to copy either partial or whole variable names.

Generate Oracle Action Sequence: To train our model, we generate the gold-standard action sequence from reference code. For IFTTT, we simply parse the officially provided ASTs into sequences of APPLYRULE actions. For HS and DJANGO, we first convert the Python code into ASTs using the standard ast module. Values inside variable terminal nodes are tokenized by space and camel case (e.g., ClassName is tokenized to Class and Name). We then traverse the AST in pre-order to generate the reference action sequence according to the grammar model.

D Additional Decoding Examples

We provide extra decoding examples from the DJANGO and HS datasets, listed in Table 6 and Table 7, respectively. The model heavily relies on the pointer network to copy variable names and constants from input descriptions. We find the source of errors in DJANGO is more diverse, with most incorrect examples resulting from missing arguments and incorrect words copied by the pointer network.
Errors in HS are mostly due to partially or incorrectly implemented effects. Also note that the first example in Table 6 is semantically correct, although it was considered incorrect under our exact-match metric. This suggests the need for more advanced evaluation metrics that take execution results into account in future studies.

Example inputs and failure reasons from Tables 6 and 7:

input: call the function get language, split the result by '-', substitute the first element of the result for base lang.

input: <name> Maexxna </name> <cost> 6 </cost> <attack> 2 </attack> <defense> 8 </defense> <desc> Destroy any minion damaged by this minion. </desc> <rarity> Legendary </rarity> ...

input: <name> Hellfire </name> <cost> 4 </cost> <attack> -1 </attack> <defense> -1 </defense> <desc> Deal 3 damage to ALL characters. </desc> ...
reason: Partially implemented effect: only deals 3 damage to the opponent's characters.

input: <name> Darkscale Healer </name> <cost> 5 </cost> <attack> 4 </attack> <defense> 5 </defense> <desc> Battlecry: Restore 2 Health to all friendly characters. </desc> <rarity> Common </rarity> ...
reason: Incorrect effect: damages 2 health instead of restoring, and casts the effect on all players instead of friendly players only.
References

Miltiadis Allamanis, Daniel Tarlow, Andrew D. Gordon, and Yi Wei. 2015. Bimodal modelling of source code and natural language. In Proceedings of ICML, volume 37.
David Alvarez-Melis and Tommi S. Jaakkola. 2017. Tree-structured decoding with doubly recurrent neural networks. In Proceedings of ICLR.
Yoav Artzi, Kenton Lee, and Luke Zettlemoyer. 2015. Broad-coverage CCG semantic parsing with AMR. In Proceedings of EMNLP.
Yoav Artzi and Luke Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of ACL 1(1).
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR abs/1409.0473.
Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. 2016. DeepCoder: Learning to write programs. CoRR abs/1611.01989.
Robert Balzer. 1985. A 15 year perspective on automatic programming. IEEE Trans. Software Eng. 11(11).
Laura Banarescu, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning representation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, LAW-ID@ACL.
I. Beltagy and Chris Quirk. 2016. Improved semantic parsers for if-then statements. In Proceedings of ACL.
Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP.
Joel Brandt, Mira Dontcheva, Marcos Weskamp, and Scott R. Klemmer. 2010. Example-centric programming: integrating web search into the development environment. In Proceedings of CHI.
Joel Brandt, Philip J. Guo, Joel Lewenstein, Mira Dontcheva, and Scott R. Klemmer. 2009. Two studies of opportunistic programming: interleaving web foraging, learning, and writing code. In Proceedings of CHI.
Stephen Clark and James R. Curran. 2007. Wide-coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics 33(4).
James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In Proceedings of CoNLL.
Li Dong and Mirella Lapata. 2016. Language to logical form with neural attention. In Proceedings of ACL.
Yarin Gal and Zoubin Ghahramani. 2016. A theoretically grounded application of dropout in recurrent neural networks. In Proceedings of NIPS.
Jiatao Gu, Zhengdong Lu, Hang Li, and Victor O. K. Li. 2016. Incorporating copying mechanism in sequence-to-sequence learning. In Proceedings of ACL.
Tihomir Gvero and Viktor Kuncak. 2015. Interactive synthesis using free-form queries. In Proceedings of ICSE.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8).
Srinivasan Iyer, Ioannis Konstas, Alvin Cheung, and Luke Zettlemoyer. 2016. Summarizing source code using a neural attention model. In Proceedings of ACL.
Robin Jia and Percy Liang. 2016. Data recombination for neural semantic parsing. In Proceedings of ACL.
Tomás Kociský, Gábor Melis, Edward Grefenstette, Chris Dyer, Wang Ling, Phil Blunsom, and Karl Moritz Hermann. 2016. Semantic parsing with semi-supervised sequential autoencoders. In Proceedings of EMNLP.
Jayant Krishnamurthy, Oyvind Tafjord, and Aniruddha Kembhavi. 2016. Semantic parsing to probabilistic programs for situated question answering. In Proceedings of EMNLP.
Nate Kushman and Regina Barzilay. 2013. Using semantic unification to generate regular expressions from natural language. In Proceedings of NAACL.
Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke S. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of EMNLP.
Tao Lei, Fan Long, Regina Barzilay, and Martin C. Rinard. 2013. From natural language specifications to program input parsers. In Proceedings of ACL.
Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. 2016. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. CoRR abs/1611.00020.
Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional semantics. In Proceedings of ACL.
Wang Ling, Phil Blunsom, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Fumin Wang, and Andrew Senior. 2016. Latent predictor networks for code generation. In Proceedings of ACL.
Greg Little and Robert C. Miller. 2009. Keyword programming in Java. Autom. Softw. Eng. 16(1).
Thang Luong, Ilya Sutskever, Quoc V. Le, Oriol Vinyals, and Wojciech Zaremba. 2015. Addressing the rare word problem in neural machine translation. In Proceedings of ACL.
Chris J. Maddison and Daniel Tarlow. 2014. Structured generative models of natural source code. In Proceedings of ICML, volume 32.
Mehdi Hafezi Manshadi, Daniel Gildea, and James F. Allen. 2013. Integrating programming by example and natural language programming. In Proceedings of AAAI.
Hongyuan Mei, Mohit Bansal, and Matthew R. Walter. 2016. Listen, attend, and walk: Neural mapping of navigational instructions to action sequences. In Proceedings of AAAI.
Dipendra K. Misra and Yoav Artzi. 2016. Neural shift-reduce CCG semantic parsing. In Proceedings of EMNLP.
Dipendra Kumar Misra, Kejia Tao, Percy Liang, and Ashutosh Saxena. 2015. Environment-driven lexicon induction for high-level instructions. In Proceedings of ACL.
Arvind Neelakantan, Quoc V. Le, and Ilya Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In Proceedings of ICLR.
Graham Neubig. 2015. lamtram: A toolkit for language and translation modeling using neural networks. http://www.github.com/neubig/lamtram.
Tung Thanh Nguyen, Anh Tuan Nguyen, Hoan Anh Nguyen, and Tien N. Nguyen. 2013. A statistical semantic language model for source code. In Proceedings of ACM SIGSOFT.
Yusuke Oda, Hiroyuki Fudaba, Graham Neubig, Hideaki Hata, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Learning to generate pseudo-code from source code using statistical machine translation (T). In Proceedings of ASE.
Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, and Pushmeet Kohli. 2016. Neuro-symbolic program synthesis. CoRR abs/1611.01855.
Panupong Pasupat and Percy Liang. 2015. Compositional semantic parsing on semi-structured tables. In Proceedings of ACL.
Python Software Foundation. 2016. Python abstract grammar. https://docs.python.org/2/library/ast.html.
Chris Quirk, Raymond J. Mooney, and Michel Galley. 2015. Language to code: Learning semantic parsers for if-this-then-that recipes. In Proceedings of ACL.
Mukund Raghothaman, Yi Wei, and Youssef Hamadi. 2016. SWIM: Synthesizing what I mean: code search and idiomatic snippet synthesis. In Proceedings of ICSE.
Mohammad Raza, Sumit Gulwani, and Natasa Milic-Frayling. 2015. Compositional program synthesis from natural language and examples. In Proceedings of IJCAI.
Lappoon R. Tang and Raymond J. Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Proceedings of ECML.
Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proceedings of NIPS.
Yi Wei, Nirupama Chandrasekaran, Sumit Gulwani, and Youssef Hamadi. 2015. Building Bing Developer Assistant. Technical report. https://www.microsoft.com/en-us/research/publication/building-bing-developer-assistant/.
Chunyang Xiao, Marc Dymetman, and Claire Gardent. 2016. Sequence-based structured prediction for semantic parsing. In Proceedings of ACL.
Wen-tau Yih, Ming-Wei Chang, Xiaodong He, and Jianfeng Gao. 2015. Semantic parsing via staged query graph generation: Question answering with knowledge base. In Proceedings of ACL.
Pengcheng Yin, Zhengdong Lu, Hang Li, and Ben Kao. 2016. Neural enquirer: Learning to query tables in natural language. In Proceedings of IJCAI.
Luke Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: structured classification with probabilistic categorial grammars. In Proceedings of UAI.
| [
"http://www.github.com/neubig/lamtram."
] |
[
"Choice of Mel Filter Bank in Computing MFCC of a Resampled Speech",
"Choice of Mel Filter Bank in Computing MFCC of a Resampled Speech"
] | [
"Laxmi Narayana laxmi.narayana@tcs.com \nTCS Innovation Lab -Mumbai\nTata Consultancy Services\nYantra ParkThaneWest), MaharastraIndia\n",
"M \nTCS Innovation Lab -Mumbai\nTata Consultancy Services\nYantra ParkThaneWest), MaharastraIndia\n",
"Sunil Kumar Kopparapu sunilkumar.kopparapu@tcs.com \nTCS Innovation Lab -Mumbai\nTata Consultancy Services\nYantra ParkThaneWest), MaharastraIndia\n"
] | [
"TCS Innovation Lab -Mumbai\nTata Consultancy Services\nYantra ParkThaneWest), MaharastraIndia",
"TCS Innovation Lab -Mumbai\nTata Consultancy Services\nYantra ParkThaneWest), MaharastraIndia",
"TCS Innovation Lab -Mumbai\nTata Consultancy Services\nYantra ParkThaneWest), MaharastraIndia"
] | [] | Mel Frequency Cepstral Coefficients (MFCCs) are the most popularly used speech features in most speech and speaker recognition applications. In this paper, we study the effect of resampling a speech signal on these speech features. We first derive a relationship between the MFCC parameters of the resampled speech and the MFCC parameters of the original speech. We propose six methods of calculating the MFCC parameters of downsampled speech by transforming the Mel filter bank used to compute MFCC of the original speech. We then experimentally compute the MFCC parameters of the down sampled speech using the proposed methods and compute the Pearson coefficient between the MFCC parameters of the downsampled speech and that of the original speech to identify the most effective choice of Mel-filter band that enables the computed MFCC of the resampled speech to be as close as possible to the original speech sample MFCC. | 10.1109/isspa.2010.5605491 | [
"https://arxiv.org/pdf/1410.6903v1.pdf"
] | 2,625,050 | 1410.6903 | 70f0cb2383e675c9f9f1e7479038b551adcd14eb |
Choice of Mel Filter Bank in Computing MFCC of a Resampled Speech
October 28, 2014
Laxmi Narayana laxmi.narayana@tcs.com
TCS Innovation Lab -Mumbai
Tata Consultancy Services
Yantra Park, Thane (West), Maharashtra, India
M
TCS Innovation Lab -Mumbai
Tata Consultancy Services
Yantra Park, Thane (West), Maharashtra, India
Sunil Kumar Kopparapu sunilkumar.kopparapu@tcs.com
TCS Innovation Lab -Mumbai
Tata Consultancy Services
Yantra Park, Thane (West), Maharashtra, India
Choice of Mel Filter Bank in Computing MFCC of a Resampled Speech
October 28, 2014

Index Terms: MFCC, time scale modification, time compression, time expansion
Mel Frequency Cepstral Coefficients (MFCCs) are the most popularly used speech features in most speech and speaker recognition applications. In this paper, we study the effect of resampling a speech signal on these speech features. We first derive a relationship between the MFCC parameters of the resampled speech and the MFCC parameters of the original speech. We propose six methods of calculating the MFCC parameters of downsampled speech by transforming the Mel filter bank used to compute MFCC of the original speech. We then experimentally compute the MFCC parameters of the down sampled speech using the proposed methods and compute the Pearson coefficient between the MFCC parameters of the downsampled speech and that of the original speech to identify the most effective choice of Mel-filter band that enables the computed MFCC of the resampled speech to be as close as possible to the original speech sample MFCC.
Introduction
Time scale modification (TSM) is a class of algorithms that change the playback time of speech/audio signals. By increasing or decreasing the apparent rate of articulation, TSM is, on one hand, useful to make degraded speech more intelligible and, on the other hand, reduces the time needed for a listener to listen to a message. Reducing the playback time of speech, or time compression of the speech signal, has a variety of applications that include teaching aids for the disabled and human-computer interfaces. Time-compressed speech is also referred to as accelerated, compressed, time-scale modified, sped-up, rate-converted, or time-altered speech. Studies have indicated that listening twice to teaching materials that have been sped up by a factor of two is more effective than listening to them once at normal speed [1]. Time compression techniques have also been used in speech recognition systems to time-normalize input utterances to a standard length. One potential application is that TSM is often used to adjust radio commercials and the audio of television advertisements to fit exactly into a 30- or 60-second slot. Time compression of speech also saves storage space and transmission bandwidth for speech messages. Time-compressed speech has been used to speed up message presentation in voice mail systems [2].
In general, time scale modification of a speech signal is associated with a parameter called the time scale modification (TSM) factor or scaling factor. In this paper we denote the TSM factor by $\alpha$. There are a variety of techniques for time scaling of speech, of which resampling is one of the simplest. Resampling of digital signals is basically a process of decimation (for time compression, $\alpha > 1$), interpolation (for time expansion, $\alpha < 1$), or a combination of both. Usually, for decimation, the input signal is sub-sampled; for interpolation, zeros are inserted between samples of the original input signal. For a discrete time signal $x[n]$, the restriction on the TSM factor $\alpha$ needed to obtain $x[\alpha n]$ is that $\alpha$ be a rational number. For any $\alpha = \frac{p}{q}$, where $p$ and $q$ are integers, the signal $x[\alpha n]$ is constructed by first interpolating $x[n]$ by a factor of $p$, say $x_p = x[n \uparrow p]$, and then decimating $x_p$ by a factor of $q$, namely $x_q = x_p[n \downarrow q]$.
It should be noted that interpolation is usually carried out before decimation, to avoid the information loss that would otherwise be caused by the pre-filtering step of decimation.
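As an illustration, the interpolate-then-decimate scheme can be sketched in a few lines of Python. This is a minimal example using scipy (one possible backend, not necessarily what the authors used); the signal and factor values below are toy choices:

```python
import numpy as np
from scipy.signal import resample_poly

fs = 16000                          # toy sampling rate
t = np.arange(fs) / fs              # 1 second of signal
x = np.sin(2 * np.pi * 440 * t)     # a 440 Hz tone

# Time compression by alpha = 2: resample_poly interpolates by `up`,
# low-pass filters, then decimates by `down`; with up=1, down=2 this
# approximates x[2n] (pure subsampling would skip the filtering).
y = resample_poly(x, up=1, down=2)
print(len(x), len(y))               # 16000 8000
```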
Most often, cepstral features are the speech features of choice for many speaker and speech recognition systems. For example, the Mel-frequency cepstral coefficient (MFCC) [3] representation of speech is probably the most commonly used representation in speaker recognition and speech recognition applications [4,5,6]. In general, cepstral features are more compact, discriminable, and, most importantly, nearly decorrelated, so that they allow diagonal covariances to be used effectively by hidden Markov models (HMMs). Therefore, they usually provide higher baseline performance than filter bank features [7].
In this paper we study the effect of resampling of speech on the MFCC parameters. We derive and show mathematically how the resampling of speech affects the extracted MFCC parameters and establish a relationship between the MFCC parameters of resampled speech and those of the original speech. We focus our experiments primarily on speech downsampled by a factor of 2 and propose six methods of computing the MFCC parameters of the downsampled speech through an appropriate choice of the Mel filter bank. We compute the Pearson correlation between the MFCC of the original speech signal and the computed MFCC of the downsampled speech to identify the best choice of Mel filter bank.
In Section 3 we derive a relationship between the MFCC parameters computed for the original speech and the time scaled speech, and discuss six different choices of Mel filter bank for computing the MFCC parameters of the downsampled speech. Section 4 gives the details of the experiments conducted to substantiate the derivation. We conclude in Section 5.
Computing the MFCC parameters
The outline of the computation of Mel frequency cepstral coefficients (MFCC) is shown in Figure 1. In general, the MFCCs are computed as follows. Let $x[n]$ be a speech signal with a sampling frequency of $f_s$, divided into $P$ frames $\{x_1[n], x_2[n], \cdots, x_p[n], \cdots, x_P[n]\}$, each of length $N$ samples with an overlap of $N/2$ samples, where $x_p[n]$ denotes the $p$-th frame of the speech signal $x[n]$:
$$x_p[n] = \left\{ x\!\left[p \tfrac{N}{2} - 1 + i\right] \right\}_{i=0}^{N-1}.$$
The speech signal $x[n]$ can then be represented in matrix notation as $\mathcal{X} \stackrel{\text{def}}{=} [x_1, x_2, \cdots, x_p, \cdots, x_P]$. Note that the size of the matrix $\mathcal{X}$ is $N \times P$. The MFCC features are computed for each frame of the speech sample (namely, for all $x_p$).
Windowing, DFT and Magnitude Spectrum
In speech signal processing, in order to compute the MFCCs of the $p$-th frame, $x_p$ is multiplied with a Hamming window $w[n] = 0.54 - 0.46 \cos\left(\frac{2\pi n}{N}\right)$, followed by the discrete Fourier transform (DFT) as shown in (1):
$$X_p(k) = \sum_{n=0}^{N-1} x_p[n]\, w[n] \exp\left(-j \frac{2\pi k n}{N}\right) \quad (1)$$
for $k = 0, 1, \cdots, N-1$. If $f_s$ is the sampling rate of the speech signal $x[n]$, then $k$ corresponds to the frequency $l_f(k) = k f_s / N$. Let $X_p = [X_p(0), X_p(1), \cdots, X_p(N-1)]^T$ represent the DFT of the windowed $p$-th frame of the speech signal $x[n]$, namely $x_p$. Accordingly, let $X = [X_1, X_2, \cdots, X_p, \cdots, X_P]$ represent the DFT of the matrix $\mathcal{X}$. Note that the size of $X$ is $N \times P$; it is known as the STFT (short time Fourier transform) matrix.
Mel Frequency Filter Bank
The modulus of the Fourier transform is extracted and the magnitude spectrum is obtained as $|X|$, which is a matrix of size $N \times P$. The magnitude spectrum is warped according to the Mel scale in order to adapt the frequency resolution to the properties of the human ear [8]. Note that the Mel frequency $\varphi_f$ and the linear frequency $l_f$ [9] are related by $\varphi_f = 2595 \log_{10}\left(1 + \frac{l_f}{700}\right)$. The Mel filter bank $M(m, k)$ is defined as
$$M(m, k) = \begin{cases} 0 & \text{for } l_f(k) < l_{fc}(m-1) \\ \dfrac{l_f(k) - l_{fc}(m-1)}{l_{fc}(m) - l_{fc}(m-1)} & \text{for } l_{fc}(m-1) \le l_f(k) < l_{fc}(m) \\ \dfrac{l_f(k) - l_{fc}(m+1)}{l_{fc}(m) - l_{fc}(m+1)} & \text{for } l_{fc}(m) \le l_f(k) < l_{fc}(m+1) \\ 0 & \text{for } l_f(k) \ge l_{fc}(m+1) \end{cases}$$
The Mel filter bank $M(m, k)$ is an $F \times N$ matrix.
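The construction above can be sketched as follows. This is a minimal NumPy example; the parameter values are illustrative, and measuring the center frequencies from $l_{f\min}$ is an assumption, since the text defines $\varphi_{fc}(m) = m \cdot \delta\varphi_f$ without stating the offset explicitly:

```python
import numpy as np

def mel_filter_bank(F=30, N=512, fs=16000, lf_min=100.0, lf_max=6800.0):
    """Triangular Mel filter bank M of shape (F, N), as defined above."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)

    # F + 2 equally spaced points on the Mel scale -> center frequencies
    # (assumed to be offset from lf_min; see the lead-in note).
    d_phi = (mel(lf_max) - mel(lf_min)) / (F + 1)
    centers = inv_mel(mel(lf_min) + d_phi * np.arange(F + 2))  # in Hz

    lf = np.arange(N) * fs / N          # linear frequency of DFT bin k
    M = np.zeros((F, N))
    for m in range(1, F + 1):
        left, mid, right = centers[m - 1], centers[m], centers[m + 1]
        rise = (lf >= left) & (lf < mid)
        fall = (lf >= mid) & (lf < right)
        M[m - 1, rise] = (lf[rise] - left) / (mid - left)
        M[m - 1, fall] = (lf[fall] - right) / (mid - right)
    return M

M = mel_filter_bank()
print(M.shape)  # (30, 512)
```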
Mel Frequency Cepstrum
The logarithm of the filter bank outputs (Mel spectrum) is given in (2):
$$L_p(m) = \ln\left( \sum_{k=0}^{N-1} M(m, k)\, |X_p(k)| \right) \quad (2)$$
where $m = 1, 2, \cdots, F$ and $p = 1, 2, \cdots, P$. The filter bank output, which is the product of the Mel filter bank $M$ and the magnitude spectrum $|X|$, is an $F \times P$ matrix. A discrete cosine transform of $L_p(m)$ yields the MFCC parameters:
$$\Phi_p^r\{x[n]\} = \sum_{m=1}^{F} L_p(m) \cos\left( \frac{r(2m-1)\pi}{2F} \right) \quad (3)$$
where $r = 1, 2, \cdots, F$ and $\Phi_p^r\{x[n]\}$ represents the $r$-th MFCC of the $p$-th frame of the speech signal $x[n]$. The MFCCs of all $P$ frames of the speech signal are obtained as a matrix $\Phi$:
$$\Phi\{\mathcal{X}\} = [\Phi_1, \Phi_2, \cdots, \Phi_p, \cdots, \Phi_P] \quad (4)$$
Note that the $p$-th column of the matrix $\Phi$, namely $\Phi_p$, represents the MFCC of the speech signal $x[n]$ corresponding to the $p$-th frame $x_p[n]$.
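Putting equations (1)-(4) together, a frame-by-frame MFCC computation can be sketched as follows. This uses the hypothetical `mel_filter_bank` helper from the previous sketch; the small epsilon inside the logarithm is an implementation detail added for numerical stability, not part of the paper:

```python
import numpy as np

def mfcc(x, M, N=512):
    """MFCC matrix Phi of shape (F, P), following equations (1)-(4).
    A sketch, not the authors' exact implementation."""
    F = M.shape[0]
    hop = N // 2
    P = (len(x) - N) // hop + 1
    w = 0.54 - 0.46 * np.cos(2 * np.pi * np.arange(N) / N)   # Hamming window
    r = np.arange(1, F + 1)[:, None]
    m = np.arange(1, F + 1)[None, :]
    C = np.cos(r * (2 * m - 1) * np.pi / (2 * F))            # DCT basis of eq. (3)
    Phi = np.zeros((F, P))
    for p in range(P):
        frame = x[p * hop : p * hop + N] * w
        mag = np.abs(np.fft.fft(frame))                      # |X_p(k)|, eq. (1)
        L = np.log(M @ mag + 1e-12)                          # Mel spectrum, eq. (2)
        Phi[:, p] = C @ L                                    # eq. (3)
    return Phi                                               # eq. (4)
```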
MFCC of Resampled Speech
In this section, we show how the resampling of the speech signal in time affects the computation of the MFCC parameters. Let $y[s]$ denote the time scaled speech signal given by
$$y[s] = x[\alpha n] = x \downarrow \alpha \quad (5)$$
where $\alpha$ is the time scale modification (TSM) factor or scaling factor (see footnote 1). Let $y_p[s] = x_p[\alpha n] = x_p \downarrow \alpha$ denote the $p$-th frame of the time scaled speech, where $s = 0, 1, \cdots, S-1$, $S$ being the number of samples in the time scaled speech frame, given by $S = \frac{N}{\alpha}$. If $\alpha < 1$ the signal is expanded in time, while $\alpha > 1$ means the signal is compressed in time. Note that if $\alpha = 1$ the signal remains unchanged.
The DFT of the windowed $y_p[n]$ is calculated from the DFT of $x_p[n]$ (see footnote 2). Assuming that $\alpha$ is an integer and using the scaling property of the DFT [12], we have
$$Y_p(k') = \frac{1}{\alpha} \sum_{l=0}^{\alpha-1} X_p(k' + lS) \quad (6)$$
where $k' = 1, 2, \cdots, S$. The MFCCs of the time scaled speech are given by
$$\Phi_p^r\{y[n]\} = \Phi_p^r\{x \downarrow \alpha\} = \sum_{m=1}^{F} L'_p(m) \cos\left( \frac{r(2m-1)\pi}{2F} \right) \quad (7)$$
where $r = 1, 2, \cdots, F$ and
$$L'_p(m) = \ln\left( \sum_{k'=0}^{S-1} M'(m, k')\, \left| \frac{1}{\alpha} \sum_{l=0}^{\alpha-1} X_p(k' + lS) \right| \right). \quad (8)$$
Note that $L'_p$ and $M'$ are the log Mel spectrum and the Mel filter bank of the resampled speech. We consider various forms of the Mel filter bank $M'(m, k')$ used in the calculation of the MFCC of the resampled speech. The best choice of Mel filter bank is the one which gives the best Pearson correlation between the MFCC of the original speech and the MFCC of the resampled speech.
Computation of MFCC of Resampled Speech
The major step in the computation of the MFCC of the resampled speech lies in the construction of the Mel filter bank. The Mel filter bank used to calculate the MFCC of the resampled speech is given by
$$M'(m, k') = \begin{cases} 0 & \text{for } l_f(k') < l_{fc}(m-1) \\ \dfrac{l_f(k') - l_{fc}(m-1)}{l_{fc}(m) - l_{fc}(m-1)} & \text{for } l_{fc}(m-1) \le l_f(k') < l_{fc}(m) \\ \dfrac{l_f(k') - l_{fc}(m+1)}{l_{fc}(m) - l_{fc}(m+1)} & \text{for } l_{fc}(m) \le l_f(k') < l_{fc}(m+1) \\ 0 & \text{for } l_f(k') \ge l_{fc}(m+1) \end{cases}$$
where $l_f(k') = \frac{k' (f_s/2)}{N/2}$.

Figure 2: Type E and F - Reversing, Adding and Averaging.

DCT of the logarithm of the resulting Mel spectrum vectors gives the MFCC of the downsampled speech.
Type E and Type F: Reversing, Adding and Averaging
In this case, the filter bank outputs of the downsampled Mel filter bank, namely $M'_A(m, k')$, are computed. Then the downsampled Mel filter bank is mirrored/reversed such that the filter with the highest bandwidth comes first and the one with the lowest bandwidth comes last. The spectrum of the downsampled signal is passed through this reversed filter bank and the filter bank outputs are reversed again. These reversed filter bank outputs are added to the former (downsampled bank) outputs and their average is taken as the Mel spectrum. DCT of the logarithm of the Mel spectrum gives the MFCC of the downsampled speech. This method has two cases: Type E, where the center frequencies are chosen as in Type A, and Type F, where the center frequencies are chosen as in Type B. This process is depicted in Figure 2.
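A sketch of the Type E/F computation for a single frame follows; the variable names are illustrative, with `M_down` standing for the downsampled Mel filter bank (the $M'_A$ of the text):

```python
import numpy as np

def type_ef_mel_spectrum(mag, M_down):
    """Type E/F Mel spectrum for one frame of downsampled speech.

    mag:    magnitude spectrum of the downsampled frame, shape (N2,)
    M_down: downsampled Mel filter bank, shape (F, N2)
    """
    g = M_down @ mag                # downsampled filter bank outputs
    M_rev = M_down[::-1, :]         # mirror the bank: widest filter first
    g_rev = (M_rev @ mag)[::-1]     # pass the spectrum through the reversed
                                    # bank, then reverse the outputs again
    return 0.5 * (g + g_rev)        # average -> Mel spectrum
```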
Experimental Results
In all our experiments we considered speech signals sampled at 16 kHz and represented with 16 bits. The speech signal is divided into frames of duration 32 ms ($N = 512$ samples) with 16 ms overlap (256 samples). The MFCC parameters of the original and the downsampled speech, namely $\Phi\{\mathcal{X}\} = [\Phi_1, \Phi_2, \cdots, \Phi_m, \cdots, \Phi_F]$ (see footnote 3), are calculated using the six methods discussed in Sections 3.1.1 to 3.1.4. The Pearson correlation coefficient, denoted by $r$ (see footnote 4), is computed between the MFCC parameters of the downsampled speech (using the different Mel filter bank constructs) and the MFCC of the original speech in two different ways. Case I: the Pearson correlation coefficient $r$ between the individual MFCCs of the original and the downsampled speech signals, namely $\Phi_m$ and $\Phi'_m$, $m = 1, 2, \cdots, F$, is calculated. The variation of the squared Pearson correlation coefficient $r^2$ over the individual MFCCs ($F = 30$) for the 6 types of Mel filter bank constructs is shown in Figure 3.

Footnote 3: $\Phi_m$ is the vector formed by the $m$-th MFCC of all the speech frames.
Case II: the $F$ MFCC vectors are concatenated to form a single vector, and $r$ between the two vectors corresponding to the original speech and the downsampled speech is computed. The Pearson correlation coefficient $r$ for the 6 methods is shown in Table 1 for three different 16 kHz, 16 bit speech samples. As observed from Figure 3 and Table 1, the Type A construction of the Mel filter bank for the downsampled speech gives the best correlation between the MFCC parameters of the original speech and those of the downsampled speech.
Conclusion
The effect of resampling of speech on the MFCC parameters has been presented. We have demonstrated that it is possible to extract MFCCs from downsampled speech by constructing an appropriate Mel filter bank. We presented six methods of computing the MFCC of a downsampled speech signal by transforming the Mel filter bank used to compute the MFCC parameters. The choice of the various transformations of the Mel filter bank was based on the relationship between the spectrum of the original and the resampled signal (Equation 6). We have shown that the Pearson correlation coefficient between the MFCC parameters of the original speech and the downsampled speech shows a good fit with a downsampled version of the Mel filter bank (Type A). We believe the results presented in this paper will enable us to experiment with and measure the performance of a speech recognition engine (statistical phoneme models derived from original speech) on subsampled (time compressed) speech.

Footnote 4: The Pearson correlation coefficient between two vectors $X$ and $Y$, each of length $n$, is given by
$$r = \frac{\sum XY - \frac{1}{n} \sum X \sum Y}{\sqrt{\left( \sum X^2 - \frac{1}{n} \left(\sum X\right)^2 \right)\left( \sum Y^2 - \frac{1}{n} \left(\sum Y\right)^2 \right)}}.$$
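The footnote formula translates directly into code (a minimal NumPy version; `x` and `y` are the two MFCC vectors being compared):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient r between vectors x and y,
    following the formula in footnote 4."""
    n = len(x)
    num = np.sum(x * y) - np.sum(x) * np.sum(y) / n
    den = np.sqrt((np.sum(x**2) - np.sum(x)**2 / n) *
                  (np.sum(y**2) - np.sum(y)**2 / n))
    return num / den

# Case II: concatenate the F MFCC vectors and correlate, e.g.
# r = pearson(Phi_orig.ravel(), Phi_down.ravel())
```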
Figure 1: Computation of Mel Frequency Cepstral Coefficients
where $\varphi_f$ is the Mel frequency and $l_f$ is the linear frequency. The magnitude spectrum $|X|$ is segmented into a number of critical bands by means of a Mel filter bank, which typically consists of a series of overlapping triangular filters defined by their center frequencies $l_{fc}(m)$. The parameters that define a Mel filter bank are (a) the number of Mel filters $F$, (b) the minimum frequency $l_{f\min}$ and (c) the maximum frequency $l_{f\max}$. For speech, it is generally suggested in [10] that $l_{f\min} > 100$ Hz. Furthermore, by setting $l_{f\min}$ above 50/60 Hz, we get rid of the hum resulting from AC power, if present. [10] also suggests that $l_{f\max}$ be less than the Nyquist frequency; furthermore, there is not much information above 6800 Hz. A fixed frequency resolution on the Mel scale is then computed as $\delta\varphi_f = (\varphi_{f\max} - \varphi_{f\min})/(F+1)$, where $\varphi_{f\max}$ and $\varphi_{f\min}$ are the frequencies on the Mel scale corresponding to the linear frequencies $l_{f\max}$ and $l_{f\min}$ respectively. The center frequencies on the Mel scale are given by $\varphi_{fc}(m) = m \cdot \delta\varphi_f$, where $m = 1, 2, \cdots, F$. To obtain the center frequencies of the triangular Mel filter bank in Hertz, we use the inverse relationship between $l_f$ and $\varphi_f$, given by $l_{fc}(m) = 700\left(10^{\varphi_{fc}(m)/2595} - 1\right)$. The Mel filter bank $M(m, k)$ [11] is then constructed from these center frequencies as given above.
Figure 3: Pearson correlation ($r^2$) between the MFCC of original speech and downsampled speech (for speech sample 3).
Table 1: Pearson correlation (r) between the MFCC of original speech and the downsampled speech

Speech     A      B      C      D      E      F
Sample 1   0.978  0.945  0.941  0.908  0.844  0.821
Sample 2   0.976  0.947  0.943  0.914  0.889  0.877
Sample 3   0.973  0.947  0.944  0.916  0.895  0.878
Footnote 1: We use $x[\alpha n]$ and $x \downarrow \alpha$ interchangeably. If $x = [1, 2, 3, \ldots, 2^n]$ is a $1 \times 2^n$ vector, then $x \downarrow 2 = [1, 3, 5, \ldots, 2^n - 1]$ is $1 \times 2^{n-1}$.
Footnote 2: For convenience, we ignore the effect of the window $w[n]$ on $y_p[n]$, or assume that $w[n]$ is also scaled by $\alpha$.
As mentioned, we consider different forms of Mel filter banks and identify the Mel filter bank that results in MFCC values of the resampled speech signal that best match the MFCC of the original speech signal. This is done by computing the Pearson coefficient between the MFCC of the resampled speech and the MFCC of the original speech. The variations in the Mel filter banks are a result of the way in which the center frequencies and the amplitudes of the filter coefficients are chosen. In all the cases discussed below, we assume (a) $\alpha = 2$, (b) the number of Mel filters used for the feature extraction of the original speech and of the resampled speech is the same, and (c) the window length reduces by half, namely to $N/2$.

Type D: Interpolating. Here, alternate center frequencies of the original Mel bank are halved and filters are constructed with the resultant center frequencies. This reduces the bandwidth of the Mel bank and the number of Mel filters by a factor of 2. The outputs of these $F/2$ Mel filters are denoted $[g_1, g_2, \cdots, g_m, \cdots, g_{F/2}]$, and the Mel spectrum is computed as
$$\left[ g_1,\ \frac{g_1 + g_2}{2},\ g_2,\ \frac{g_2 + g_3}{2},\ \cdots,\ g_m,\ \cdots,\ g_{F/2},\ \frac{g_{F/2} + g_1}{2} \right].$$
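One possible reading of the Type D interleaving can be sketched as follows; the wrap-around of the last averaged term follows the printed vector, which ends with $(g_{F/2} + g_1)/2$:

```python
import numpy as np

def type_d_mel_spectrum(g):
    """Type D Mel spectrum from the F/2 filter outputs g = [g_1 .. g_{F/2}],
    interleaving each output with the average of neighbouring outputs.
    A sketch of one reading of the printed formula."""
    F2 = len(g)
    out = []
    for i in range(F2):
        out.append(g[i])
        nxt = g[(i + 1) % F2]       # wraps to g_1 after g_{F/2}, as printed
        out.append(0.5 * (g[i] + nxt))
    return np.asarray(out)          # length F
```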
[1] B. Arons, "Techniques, perception, and applications of time-compressed speech," Proceedings of 1992 Conference, American Voice I/O Society, pp. 169-177, Sep. 1992.
[2] D. J. Hejna, "Real-time time-scale modification of speech via the synchronized overlap-add algorithm," M.I.T. Masters Thesis, Department of Electrical Engineering and Computer Science, February 1990.
[3] S. B. Davis and P. Mermelstein, "Comparison of parametric representations for monosyllabic word recognition in continuously spoken sentences," IEEE Trans. Acoust. Speech Signal Processing, vol. 28, no. 4, pp. 357-366, 1980.
[4] D. A. Reynolds and R. C. Rose, "Robust text-independent speaker identification using Gaussian mixture speaker models," IEEE Transactions on Speech and Audio Processing, vol. 3, no. 1, January 1995.
[5] M. R. Hasan, M. Jamil, M. G. Rabbani, and M. S. Rahman, "Speaker identification using Mel frequency cepstral coefficients," 3rd International Conference on Electrical & Computer Engineering (ICECE 2004), 28-30 December 2004, Dhaka, Bangladesh.
[6] H. Seddik, A. Rahmouni, and M. Sayadi, "Text independent speaker recognition using the Mel frequency cepstral coefficients and a neural network classifier," First International Symposium on Control, Communications and Signal Processing, pp. 631-634, 2004.
[7] Z. Jun, S. Kwong, W. Gang, and Q. Hong, "Using Mel-frequency cepstral coefficients in missing data technique," EURASIP Journal on Applied Signal Processing, vol. 2004, no. 3, pp. 340-346, 2004.
[8] S. Molau, M. Pitz, R. Schlüter, and H. Ney, "Computing Mel-frequency cepstral coefficients on the power spectrum," Proc. Int. Conf. on Acoustics, Speech and Signal Processing, pp. 73-76, 2001.
[9] T. F. Quatieri, "Discrete-time speech signal processing: Principles and practice," Pearson Education, vol. II, pp. 686, 713, 1989.
[10] S. Sigurdsson, K. B. Petersen, and T. Lehn-Schiøler, "Mel frequency cepstral coefficients: An evaluation of robustness of MP3 encoded music," Proceedings of the Seventh International Conference on Music Information Retrieval (ISMIR), Victoria, Canada, 2006.
[11] A. V. Oppenheim and R. W. Schafer, "Discrete time signal processing," Prentice-Hall, 1989.
| [] |
[
"Semantic Frame Parsing for Information Extraction : the CALOR corpus",
"Semantic Frame Parsing for Information Extraction : the CALOR corpus"
] | [
"Gabriel Marzinotto gabriel.marzinotto@orange.com \nOrange Labs\nLannionFrance (\n\nAix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance\n",
"Jeremy Auguste jeremy.auguste@lis-lab.fr \nAix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance\n",
"Frederic Bechet frederic.bechet@lis-lab.fr \nAix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance\n",
"Geraldine Damnati geraldine.damnati@orange.com \nOrange Labs\nLannionFrance (\n",
"Alexis Nasr alexis.nasr@lis-lab.fr \nAix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance\n"
] | [
"Orange Labs\nLannionFrance (",
"Aix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance",
"Aix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance",
"Aix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance",
"Orange Labs\nLannionFrance (",
"Aix Marseille Univ\nUniversité de Toulon\nCNRS\nMarseilleLISFrance"
] | [] This paper presents a publicly available corpus of French encyclopedic history texts annotated according to the Berkeley FrameNet formalism. The main difference in our approach compared to previous work on semantic parsing with FrameNet is that we are not interested here in full text parsing but rather in partial parsing. The goal is to select from the FrameNet resources the minimal set of frames that will be useful for the targeted applicative framework, in our case Information Extraction from encyclopedic documents. Such an approach leverages the manual annotation of larger corpora than those obtained through full text parsing and therefore opens the door to alternative methods for Frame parsing than those used so far on the FrameNet 1.5 benchmark corpus. The approaches compared in this study rely on an integrated sequence labeling model which jointly optimizes frame identification and semantic role segmentation and identification. The models compared are CRFs and multi-task bi-LSTMs. | null | [
"https://www.aclweb.org/anthology/L18-1159.pdf"
] | 21,725,691 | 1812.08039 | be491f76d1da8bfbca0941462687f03807a48a5b |
Semantic Frame Parsing for Information Extraction : the CALOR corpus
Gabriel Marzinotto gabriel.marzinotto@orange.com
Orange Labs
Lannion, France
Aix Marseille Univ
Université de Toulon
CNRS
LIS, Marseille, France
Jeremy Auguste jeremy.auguste@lis-lab.fr
Aix Marseille Univ
Université de Toulon
CNRS
LIS, Marseille, France
Frederic Bechet frederic.bechet@lis-lab.fr
Aix Marseille Univ
Université de Toulon
CNRS
LIS, Marseille, France
Geraldine Damnati geraldine.damnati@orange.com
Orange Labs
Lannion, France
Alexis Nasr alexis.nasr@lis-lab.fr
Aix Marseille Univ
Université de Toulon
CNRS
LIS, Marseille, France
Semantic Frame Parsing for Information Extraction : the CALOR corpus
Frame Semantic Parsing, LSTM, CRF
This paper presents a publicly available corpus of French encyclopedic history texts annotated according to the Berkeley FrameNet formalism. The main difference in our approach compared to previous work on semantic parsing with FrameNet is that we are not interested here in full text parsing but rather in partial parsing. The goal is to select from the FrameNet resources the minimal set of frames that will be useful for the targeted applicative framework, in our case Information Extraction from encyclopedic documents. Such an approach leverages the manual annotation of larger corpora than those obtained through full text parsing and therefore opens the door to alternative methods for Frame parsing than those used so far on the FrameNet 1.5 benchmark corpus. The approaches compared in this study rely on an integrated sequence labeling model which jointly optimizes frame identification and semantic role segmentation and identification. The models compared are CRFs and multi-task bi-LSTMs.
Introduction
Semantic Frame parsing is a Natural Language Understanding task that involves detecting in a sentence an event or a scenario, called a Frame, as well as all the elements or roles that can be associated with this event in the sentence, called Frame Elements. One of the most popular semantic frame models is the Berkeley FrameNet project developed by ICSI Berkeley (Baker et al., 1998). This model is composed of an inventory of Frames with, for each of them, a list of words, called Lexical Units (LUs), that can trigger a frame in a sentence. Besides, for each frame, a list of Frame Elements (FEs), core or optional, is defined. LUs are pairings of a word with a sense; Frame Elements are the components of a frame, represented by sequences of words in a sentence. Two kinds of parsing can be done with a Semantic Frame model: full text parsing, where each word in a sentence is analyzed to check if it can trigger a frame; and partial parsing, where only a subset of frames and LUs is considered, according to their relevance for a given applicative framework. Annotating all the possible LUs and frames in a sentence is a very difficult (and expensive) task for human annotators; therefore, there are very few corpora annotated this way, and not many languages have such resources. Most previous work in semantic frame parsing with a full text parsing approach has used the benchmark corpus FrameNet 1.5. Although the size of this benchmark is relatively large, a lot of frames have a very small number of occurrences in the corpus. This is due to the very large number of frames considered in the semantic model, and it makes this corpus particularly challenging for machine learning methods. By contrast, partial parsing can be performed on large corpora at a reasonable cost: because the number of frames and LUs is limited, the annotators can focus on only a few words in each sentence, making the task much easier than full parsing. Corpora obtained this way contain many more examples for each frame, opening the door to more machine learning methods than is the case with full parsing.
From an applicative point of view, partial parsing is also the more realistic option. Although the different senses from the FrameNet model are generic, models trained on the FrameNet 1.5 corpus are not. Assuming that a full text annotation will be available for each new applicative domain is not an option. Moreover, many applicative frameworks using semantic models such as frames, like the Information Extraction framework considered in this study, are not interested in full parses but, on the contrary, only in some specific senses related to the targeted domain. An example of this kind of annotation scheme is given in Figure 1. As we can see, only two words are considered as lexical units in the sentence: decide, which triggers the frame Deciding, and order, triggering the frame Request. This paper presents a study on the use of sequence labeling models such as CRFs and LSTMs for Semantic Frame parsing. Unlike previous studies developing a multi-step approach involving first the frame identification task, then the frame element detection and labeling tasks, we propose an approach that detects LUs and Frame Elements simultaneously, making frame identification and argument selection an integrated process. A simple heuristic filter is used in order to maintain coherence in the hypotheses produced. We compare two popular sequence labeling methods: conditional random fields and recurrent deep neural networks using Long Short-Term Memory.
The CALOR-Frame corpus
We introduce in this paper the CALOR corpus, a collection of documents in French that were hand-annotated with frame semantics. This 1.3M-word corpus contains documents from 4 different sources: Wikipedia's Archeology portal (WA, 201 documents), Wikipedia's World War 1 portal (WGM, 335 documents), Vikidia's portals of Prehistory and Antiquity (VKH, 183 documents) and ClioTexte's (see footnote 1) resources about World War One (WW1) (CTGM, 16 documents). These sources were chosen to guarantee both writing style and domain diversity. By having documents from Vikidia (an encyclopedia addressed to children aged 8 to 13) and Wikipedia presenting subjects of ancient history and archeology, we can compare the influence of the writing style on the complexity of the task. The same analysis is possible on the WW1 documents, as ClioTexte (a collection of historical documents such as letters, essays and speeches from WW1) and Wikipedia share a common domain with a completely different writing style. This document selection makes it possible to study the importance of the nature of the training data on the performance of the system on a test set on the same subject. Moreover, having data from two different portals of Wikipedia allows us to study the domain dependency problem, for example to evaluate whether a model for a frame F trained on data from the archeology domain can successfully be applied to data from the WW1 domain.

Figure 1: Example of Frame multilabel annotation
In contrast to full text parsing corpora, the frame semantic annotations of CALOR are limited to a small subset of frames from FrameNet (Baker et al., 1998). As described in the introduction, the goal of this partial parsing process is to obtain, at a relatively low cost, a large corpus annotated with frames relevant to a given applicative context. In our case this applicative context is Information Extraction (IE) from encyclopedic texts, mainly historical texts.
To this purpose we extracted from our 1.3M-word text corpus the top 100 most frequent verbs, then kept those that were most likely to correspond to an action or a situation relevant in our IE context. For example, verbs such as discover or build are very relevant for exploring archaeological documents. We looked for the corresponding frames of the selected verbs in the Berkeley FrameNet lexicon and kept a set of 53 different frames. If a verb could trigger several frames, we only kept those which were relevant in our corpus.
By adding noun triggers to the list of selected verb triggers we obtain a list of 145 Lexical Units (LU), with 30,950 occurrences in the training corpus. Selecting the most frequent verbs and nouns from the 1.3M words CALOR corpus as frame triggers is a guarantee that the average number of occurrences per frame is high. Therefore, even if the list of frames annotated in the CALOR corpus is small compared to the Framenet corpus, we have a large variety of occurrences for each of them, allowing us to build robust parsers for encyclopedic texts, which is the goal of the CALOR corpus. The list of Frames in CALOR is provided in Table 1.
The annotation process
Once the corpus, the lexical units and the frame set were chosen, we developed an iterative process for the manual annotation of the CALOR corpus. Prior to this annotation process, the documents were automatically processed by the Macaon (Nasr et al., 2011) tool suite (sentence segmentation, tokenization, POS tagging, lemmatization, and dependency parsing). Every word within the documents whose lemma belongs to the set of selected LUs generates an example to be annotated. One sentence can generate several examples to annotate if it contains several LUs. We obtained a set of 30,950 examples to annotate, corresponding to all the LU occurrences in the CALOR corpus. Three annotators were hired for this project. Their goal was to process these 30,950 examples: decide for each of them whether its LU triggers one of the 53 selected frames and, if a frame was triggered, annotate all its Frame Elements (FEs) occurring in the sentence. In order to reduce the manual annotation time and perform quality control on the produced corpus, we designed an iterative process based on three principles:
• an automatic pre-annotation scheme based on the frame parser that will be presented in section 4.1.;
• a batch selection process that selects from the unlabeled corpus sets of examples to annotate that correspond to the same LU in very similar syntactic and lexical contexts;
• an automatic quality control estimator that regularly retrains the frame parser and evaluates its performance through k-fold experiments on the part of the corpus already manually annotated.
The iterative process based on these principles can be implemented as follows:
1. Frame pre-annotation parsing: an automatic frame parsing process is applied to each example to annotate. It predicts the frame label and the possible FEs for the LU contained in the example. All these automatic annotations are manually checked and, when needed, corrected by our annotators. At each iteration the frame parser (see Section 4.1.) is trained on the subset of the CALOR corpus that is already annotated. For the first iteration, since there is no data to train the parser, all the LUs are labeled with a "no frame" label. Each iteration brings more data to train the frame parser.

2. Batch selection: examples that have not yet been manually processed are sorted according to their LUs, then, using a similarity measure taking into account the lexical and syntactic context in which the LUs occur, they are grouped into batches that are sent to the annotators for manual validation. The goal here is to reduce the cognitive load of the annotators by splitting the corpus to annotate into small batches sharing very similar properties, likely to be annotated the same way.
3. Manual correction: a GUI allows annotators to work on the batches of examples produced in the previous step. Annotation is done on text only; no syntactic annotation is provided to the annotators. The frame pre-annotations produced by the frame parser are displayed; annotators can correct them and add what is missing.
4. Model training and quality control validation: at each iteration, the frame parsing models are re-trained on the corpus of manually processed examples. A k-fold evaluation is also performed to monitor the evolution of the parsing performance of the model as more validated data is added to the training corpus. If the frame parsing performance improves, it is a good indication that the added data is coherent with the annotations processed in the previous iterations. This can be seen as a quality control measure of the annotation process on the whole dataset, in addition to inter-annotator agreement measures that can also be estimated on small subsets of the corpus.
Corpus Statistics
Comparison with other corpora
The comparison between the CALOR-Frame corpus and other corpora with semantic frame annotations is given in Table 3. The column Documents shows the main sources of documents annotated; # Sent counts the number of sentences in each corpus; % Sent. w/ Frame shows the percentage of sentences that have a frame annotation; Word Lexicon displays the size of the lexicon of each corpus; Frame lexicon, LU lexicon and FE lexicon correspond to the number of frames, LUs and FEs considered in the annotation model; finally, # Frame occurrences shows the number of frames annotated in the whole corpus.
As we can see, the CALOR-Frame corpus is the only corpus that is not oriented towards journalism, news or current events; it is also the corpus with the largest lexicon. When we compare it to the existing datasets for Semantic Frame Parsing in English (SemEval07) and French (ASFALDA) (Candito et al., 2014), we observe that the CALOR corpus has the smallest frame lexicon but the largest number of annotations.
Frame parsing as a sequence labeling task
As mentioned in the introduction, semantic frame parsing is a structured prediction task where a word can belong to several structures. For example, in Figure 1, the word general can belong to both the frame Request, as the frame element Speaker, and the frame Deciding, as the Cognizer.
In order to treat the frame parsing task as a word sequence labeling task, we need to flatten the frame structures by labeling each word with both semantic and structural information. An example of such a representation for the sentence of Figure 1 is given in Table 4. Column 3 corresponds to the frame Request and column 4 to Deciding. The LUs triggering the frames are order, at index 11, for the frame Request and decide, at index 5, for Deciding. To each frame element is attached the index of the frame it belongs to, through a link to the LU that triggered it. Using such a representation with sequence labeling models like Conditional Random Fields (CRFs) or Recurrent Neural Network (RNN) models is challenging for two reasons:
1. Multi-labels: each word can receive more than one label, according to the number of frames occurring in the sentence. Each label contains the frame as well as the frame element identifiers; there are therefore too many labels to consider building complex labels combining all of them.
2. Linking: an explicit link to the LU triggering a frame is added to each word label of its FEs, as can be seen in Table 4. This information is necessary since several expressions of the same frame can occur in a sentence, triggered by several LUs. The absolute values of these links are not meaningful and cannot be predicted in the same way as the semantic labels.
In this study we compare two different strategies to deal with the multi-label issue: one based on CRFs with a multi-model approach (each LU has its own prediction model) and one based on a bi-LSTM model following a multi-task approach. They are described in the next section.
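To make the multi-label representation concrete, the following sketch builds one label column per frame occurrence, as in Table 4. The input structure and field names are illustrative, not from the paper:

```python
def frame_columns(tokens, frames):
    """Build one B/I/O label column per frame occurrence, as in Table 4.
    `frames` is a list of dicts with a 0-based trigger index, a frame
    name and FE spans (start, end, role); field names are illustrative."""
    columns = []
    for fr in frames:
        tag = fr["frame"][:3]                    # e.g. "Req", "Dec"
        link = fr["trigger"] + 1                 # 1-based index of the LU
        col = ["O"] * len(tokens)
        col[fr["trigger"]] = "LU:" + fr["frame"]
        for start, end, role in fr["elements"]:  # end is exclusive
            col[start] = f"B:{tag}:{role}:{link}"
            for i in range(start + 1, end):
                col[i] = f"I:{tag}:{role}:{link}"
        columns.append(col)
    return columns
```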
Sequence labeling models

4.1. Multi-model CRF approach
CRF-based approaches have been used in many NLP tasks involving sequence labeling, such as POS tagging, chunking or named entity recognition (McCallum and Li, 2003). In order to apply CRFs to frame parsing as described in Section 3, we need to address the multi-label issue. Since we want to perform frame disambiguation and semantic role detection in one step, and because each word in a sentence cannot trigger more than one frame, we chose a multi-model approach where a CRF model is trained for each word belonging to the LU lexicon. This approach is described in Figure 3. At training time, the corpus is split according to the LU lexicon: to each word $W_i$ belonging to this lexicon is attached a sub-corpus $C_{W_i}$ containing all the sentences where $W_i$ occurs. For each sentence $s \in C_{W_i}$, $W_i$ can trigger a frame $F$ among all the possible frames for this word in the LU lexicon, or nothing. For example, the sentence shown in Table 4 will be duplicated into two sub-corpora, $C_{order}$ with column 3 and $C_{decide}$ with column 4. A CRF model is trained on each $C_{W_i}$ sub-corpus. At decoding time, when processing a sentence $S$, the same process is applied: first $S$ is duplicated for each word $w_i$ of $S$ belonging to the LU lexicon. Then the CRF model corresponding to each $w_i$ is applied and the different predictions made by the CRF models are merged. This approach has the advantage of keeping the number of possible labels to predict for each CRF relatively small, limited to the frames that can be triggered by the word considered. The ambiguity is therefore limited and CRFs can be trained efficiently even with a large number of features. However, the drawback is that the training data is split across the words of the LU lexicon, so similarities among LUs are not exploited. This situation is acceptable if enough training examples are provided for each LU, which is the case for the CALOR corpus.
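A minimal sketch of the multi-model CRF strategy follows. sklearn-crfsuite is one possible CRF backend, since the paper does not specify its toolkit, and the feature extraction is left abstract:

```python
import sklearn_crfsuite

def train_crf_per_lu(corpora):
    """One CRF per word of the LU lexicon. `corpora` maps an LU word to
    (X, y): lists of per-token feature dicts and label sequences."""
    models = {}
    for lu_word, (X, y) in corpora.items():
        crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
        crf.fit(X, y)
        models[lu_word] = crf
    return models

def parse(sentence_feats, lu_positions, models):
    """Duplicate the sentence per candidate LU, tag it with the matching
    model, then merge the per-LU predictions."""
    predictions = {}
    for pos, word in lu_positions:
        if word in models:
            predictions[pos] = models[word].predict_single(sentence_feats)
    return predictions
```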
Multi-task LSTM approach
Deep Neural Networks (DNNs) with word embeddings are the state of the art approach for semantic frame parsing (Hermann et al., 2014). More recently, recurrent neural networks (RNNs) with Long Short-Term Memory (LSTM) cells have been applied to several semantic tagging tasks such as slot filling (Mesnil et al., 2015) or frame parsing (Hakkani-Tür et al., 2016; Tafforeau et al., 2016) for Spoken Language Understanding. Following these previous works, we propose in this study a single-layered bidirectional LSTM sequence-to-sequence architecture to perform frame tagging. To deal with the multi-label issue we could train a biLSTM model per LU, using the same approach as for the CRF; however, the number of examples per LU is small, and neural networks do not perform well on small datasets. We would face the same problem if we instead trained one biLSTM per frame. The remaining possibility is to build a single biLSTM to predict all possible frames, in order to automatically learn a feature representation meaningful to all frames. We chose a multi-task approach (biLSTM-MT), similar to the one proposed by (Tafforeau et al., 2016), which models each frame as an isolated task. In this model two LSTMs, one forward and one backward, are concatenated and shared among all tasks. Then a task-specific fully-connected output layer is added for each task.
In this work we consider each frame of our FrameNet model as a different task. This approach is described in Figure 3. At decoding time each sentence is processed by the network and a probability distribution over the labels of each task is produced for each word. We only keep the labels above a certain threshold.
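The shared-encoder architecture can be sketched in PyTorch as follows. Layer sizes are illustrative, and the actual model described above also embeds POS, syntactic function and LU-lexicon features, which are omitted here for brevity:

```python
import torch
import torch.nn as nn

class MultiTaskBiLSTM(nn.Module):
    """Shared bi-LSTM encoder with one output layer per frame (task)."""
    def __init__(self, vocab_size, n_labels_per_frame, emb_dim=200, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)     # forward + backward LSTM
        # One task-specific fully-connected head per frame.
        self.heads = nn.ModuleList(
            [nn.Linear(2 * hidden, n) for n in n_labels_per_frame])

    def forward(self, token_ids):
        h, _ = self.lstm(self.emb(token_ids))       # (batch, seq, 2*hidden)
        return [head(h) for head in self.heads]     # per-task label scores
```

At decoding time, a softmax over each head gives the per-task label distribution, and only labels above the acceptance threshold are kept.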
Coherence filter
Once the sequence tagging process is performed, each word is labeled with frame element and position labels, or a null label (O), as presented in Table 4. Because the labels given at the word level might not be coherent at the frame level, we apply a coherence filter to the output of the tagging process. This filter is in charge of removing incoherences (FEs not starting with a B label; FEs without a frame) and linking the FEs to the LU that triggered the corresponding frame. The filter implements a very simple strategy: in a given sentence, if a word $w_i$ of index $i$ is labeled as an LU trigger for frame $F$, we link all the FEs detected in the sentence with the same frame label $F$ to the LU $w_i$. At the end of this process, all FEs that have not been linked to an LU are removed.
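A simplified sketch of the coherence filter for one frame column follows (the real output has one column per frame; the tag encoding matches Table 4 and the repair/linking steps follow the description above):

```python
def coherence_filter(column):
    """Post-process one frame column: drop all FEs if no LU of that
    frame was detected, repair FE segments that do not start with B,
    and link each FE to the trigger. A simplified sketch."""
    lu = [i for i, tag in enumerate(column) if tag.startswith("LU:")]
    if not lu:
        return ["O"] * len(column)           # frame not triggered: drop FEs
    trigger = lu[0]
    out = list(column)
    for i, tag in enumerate(out):
        if tag[0] == "I" and (i == 0 or out[i - 1][0] not in "BI"):
            tag = out[i] = "B" + tag[1:]     # a FE must start with a B label
        if tag[0] in "BI":
            out[i] = f"{tag}:{trigger + 1}"  # link the FE to its LU (1-based)
    return out
```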
Feature selection
The feature sets used for the CRF and the biLSTM approaches differ. For the CRF model, each training sample contains only one trigger word, which is clearly identified; we can therefore use this information to add global constraints on the feature set of each word to process. By contrast, the multi-task biLSTM model can process several triggers in the same sentence, so the feature set cannot be biased toward a specific trigger and only local features are considered. For the CRF models we consider 3 features: word lemma, part-of-speech (POS) tag and the syntactic dependency path between the word to process and the potential frame trigger in the sentence. This dependency path is built by concatenating the syntactic functions between the word and the trigger. In the general case, a trigger is not necessarily at the root of the syntactic tree; for this reason, the dependency paths are composed of both links from child to parent (ascending links) and from parent to child (descending links), and we distinguish the two types of links in the way we encode the dependency path.
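A sketch of the dependency-path feature follows. The head/label encoding of the parse and the "^"/"v" markers for ascending and descending links are illustrative conventions, not necessarily those of the authors:

```python
def dependency_path(heads, deprels, word, trigger):
    """Concatenate syntactic functions on the path between `word` and
    the trigger. `heads[i]` is the parent of token i (-1 for the root)
    and `deprels[i]` its syntactic function."""
    def to_root(i):
        path = [i]
        while heads[path[-1]] != -1:
            path.append(heads[path[-1]])
        return path
    up, down = to_root(word), to_root(trigger)
    common = next(n for n in up if n in down)        # lowest common ancestor
    asc = ["^" + deprels[n] for n in up[:up.index(common)]]
    desc = ["v" + deprels[n] for n in reversed(down[:down.index(common)])]
    return "/".join(asc + desc)
```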
For the biLSTM-MT we consider 4 features: word embeddings (GloVe embeddings of 200 dimensions trained on French Wikipedia), POS tags, syntactic function (without a link) and a boolean indicating whether the word belongs to the LU lexicon or not. All the features are encoded as trainable embeddings; we allow the network to adapt them during the training of the frame parsing task.
For both systems, in order to extract lemmas, POS tags and syntactic dependency trees, we processed the frame-annotated corpus with MACAON (Nasr et al., 2010), trained with a set of POS tags and dependencies similar to the one proposed in the Paris French TreeBank (Abeillé et al., 2003; Abeillé and Barrier, 2004). The main differences between the multi-model CRF and multi-task biLSTM models are summarized in Table 5.
Evaluation
Experimental setup
When a sentence is processed, there are 4 steps or subtasks that take place, either explicitly or implicitly, in the frame parsing process. Even though our approaches perform frame detection and semantic role labeling in one step, we detail the scores for each subtask, because they are relevant indicators of the performance of a parser and serve as points of comparison between our models. These 4 subtasks are:
1. trigger identification (TI) which decides whether a word in a sentence can trigger a frame or not;
2. trigger classification (TC), which assigns the frame label to the detected trigger word to form a LU;

3. role filler identification (RI), which detects potential semantic role fillers in the sentence for the detected frame;

4. role filler classification (RC), which assigns a label to each detected role filler in order to obtain the Frame Elements (FEs) of the detected frame.
In this study we consider these 4 tasks as a cascaded process: an error in task 1 will lead to several errors in task 4, since no correct FEs will be detected if the frame is not triggered. The CALOR corpus is annotated with a small number of frames compared to the FrameNet 1.5 corpus, so the ambiguity in subtasks 1 and 2 is rather small. The most complex task is of course task 4, role filler classification, since every detection and classification of LUs and FEs has to be correct. That is why we pay special attention to this task when comparing the models in the following experiments.
To carry out our experiments we split the CALOR corpus, assigning 80% of the frame occurrences to the train set and 20% to the test set. This split is done in such a way that the frame distribution remains as similar as possible between train and test, while considering a document as an atomic unit that cannot be subdivided and should be entirely in either the train or the test set. This split does not take into account the LU distribution; a LU can therefore appear both in train and test, only in train, or only in test.
Similarly to previous work on semantic frame parsing, we use precision, recall and F-measure on the 4 subtasks presented above to evaluate our approaches. We also compute precision/recall curves by using different acceptance thresholds on the frame and semantic role hypotheses output by our models. In this study we set the operating point for comparing our models to the Equal Error Rate (EER) between the precision and recall measures.
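The EER operating point can be located by sweeping the acceptance threshold, as sketched below. The score/correctness encoding of the system hypotheses is illustrative:

```python
import numpy as np

def precision_recall(scores, correct, n_gold, t):
    """P/R when keeping hypotheses with score >= t. `scores` and the
    boolean array `correct` describe system hypotheses; `n_gold` is
    the number of reference items."""
    keep = scores >= t
    tp = np.sum(correct & keep)
    return tp / max(keep.sum(), 1), tp / n_gold

def eer_operating_point(scores, correct, n_gold):
    """Sweep acceptance thresholds; return the one where P ~ R."""
    thresholds = np.sort(np.unique(scores))
    pr = [precision_recall(scores, correct, n_gold, t) for t in thresholds]
    i = int(np.argmin([abs(p - r) for p, r in pr]))
    return thresholds[i], pr[i]
```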
Overall results
The comparison between the multi-model CRF (CRF-MM) and multi-task biLSTM (biLSTM-MT) performance over the set of 4 subtasks presented in the previous section is given in Table 6. Both systems use their full feature set, as presented in Table 5, with the EER between precision and recall as the chosen operating point. As expected, performance on subtasks 1 and 2 is very good, much better than reported on the FrameNet 1.5 corpus (Hermann et al., 2014). This is due to the partial parsing approach used here, where only a subset of the FrameNet model is used. The CRF-MM model performs better on the trigger identification and classification tasks (subtasks 1 and 2), while the biLSTM-MT model is better on role identification and classification (subtasks 3 and 4). The reason for this behavior is that it is easier for a CRF model to identify the proper frame for each trigger word, as each word can only trigger a few frames, while for the biLSTM-MT all frames are in competition. This reduction of ambiguity for the CRF-MM models has a cost: splitting the training corpus according to the LU lexicon reduces the training data for each model. While this is not an issue for the two low-ambiguity subtasks 1 and 2, the situation is different for subtasks 3 and 4. Indeed, biLSTM-MT is a single model that learns from all training data; it is able to automatically learn relevant features and capture semantic aspects of the text using a shared layer, and then use these features to classify tokens into the roles of the different frames, each learned as a separate task. This ability to use the whole corpus leads the biLSTM-MT model to outperform CRF-MM on tasks 3 and 4. This result is confirmed by Figure 4, which displays the precision/recall curves of both methods on subtask 4. As we can see, biLSTM-MT is better in terms of maximal F-measure. It is interesting to observe that the two models do not show the same precision-recall trade-off. CRF-MM is able to parse text at high precision with low recall; this is not the case for biLSTM-MT, which handles a much larger set of frames and is more prone to precision errors. On the other hand, biLSTM-MT achieves a better recall at fairly good precision; this happens because it is trained on the full dataset and is able to learn syntactic patterns from different frames and extrapolate them from one frame to another.
Conclusion
Table 6: Comparison of our two models on their full feature set, operating at EER

In this paper we introduced the CALOR corpus as well as a comparison between two different semantic frame parsing models that consider the task as a sequence labeling task. The main contribution of this work is to propose a new, publicly available corpus containing semantic frame annotations that differ from previous corpora such as FrameNet and SemEval: in our case only partial annotation is considered, which allows much larger corpora to be annotated at a lower cost than full text annotation. Only a small subset of the FrameNet lexicon is used; however, the amount of data annotated for each frame is much larger than in other corpora, allowing different parsing methods to be developed and tested.
In this study two frame parsing models are compared, one based on CRFs following a multi-model approach and the other based on a biLSTM with a multi-task approach. Experiments show that the biLSTM-MT model achieves a better recall, while CRF-MM achieves better precision. This is due to the architecture of each model: in CRF-MM we divide frame parsing into small subtasks, one per LU, reducing the number of possible labels in each decision and thus increasing precision. On the other hand, biLSTM-MT is able to share data across LUs, boosting its capacity to deal with complex syntactic patterns and allowing it to retrieve more frame elements during parsing.
Figure 2: Distribution of the frame occurrences in the CALOR-Frame corpus

Figure 4: Precision/Recall curves for CRF-LU and biLSTM-MT on the full feature set
Table 2 presents the distribution of the CALOR corpus among its different sources. We observe that the first two rows of this table, corresponding to Wikipedia, represent most of the corpus. After the annotation process, 30,950 LUs have been annotated, leading to 26,725 LUs associated with a frame and 4,225 (13%) labeled as OTHER. In Table 2, the columns # Sentences, # Words, # Frames and # FE display the number of sentences, words, frames and Frame Elements; % Sentences with Frame displays the percentage of sentences with at least one frame, and Lexicon corresponds to the size of the vocabulary of each document source. The CALOR corpus contains 57,688 FE annotations, which averages to 2.2 FEs per frame occurrence. Figure 2 displays the distribution of the number of annotated examples for each frame in the CALOR corpus. We observe that the corpus has a large number of examples per frame: half of the frames in CALOR have more than 400 annotated examples and the 10 most frequent frames have more than 900 examples. The most common frames are Attack (triggers: attaquer, attaque, offensive, bombardement, contre-attaque), Leadership (triggers: commander, diriger, commandement), Activity Start (triggers: commencer, débuter, commencement, début), Locating (triggers: retrouver, trouver, localisation) and Building (triggers: construire, fabriquer, élever, construction, fabrication).
In this study we have decided to use the simple B,I,O encoding for word segments, where each word label starts with B if it starts a segment, with I if the word is inside a segment, and O if it does not belong to any segment. Links between segments are represented by word indices.

Document Source                        # Sentences  # Words    # Frames  # FE    Lexicon  % Sentence with Frame
WGM (Wikipedia WW1)                    30994        686355     14227     32708   42635    34.2%
WA (Wikipedia Archeology)              27023        540653     9943      19892   41418    28.0%
CTGM (Cliotexte WW1)                   3523         67736      938       1842    10844    21.4%
VKH (Vikidia Prehistory & Antiquity)   5841         85034      1617      3246    11649    21.9%
All                                    67381        1379778    26725     57688   72127    30.0%

Table 2: Description of the CALOR corpus
Table 3: Semantic Frame corpus comparison

Figure 3: Two different strategies (CRF and bi-LSTM multitask) for Frame parsing. Left: CRF-based strategy with multiple models (one per word in the LU lexicon); right: bi-LSTM multi-task strategy (each frame is a task).
1   The        B:Req:Speaker:11   B:Dec:Cogn:5
2   general    I:Req:Speaker:11   I:Dec:Cogn:5
3   has        O                  O
4   to         O                  O
5   decide     O                  LU:Deciding
6   if         O                  B:Dec:Decis:5
7   it         O                  I:Dec:Decis:5
8   is         O                  I:Dec:Decis:5
9   necessary  O                  I:Dec:Decis:5
10  to         O                  I:Dec:Decis:5
11  order      LU:Request         I:Dec:Decis:5
12  the        B:Req:Addres:11    I:Dec:Decis:5
13  enemy      I:Req:Addres:11    I:Dec:Decis:5
14  the        B:Req:Message:11   I:Dec:Decis:5
15  immediate  I:Req:Message:11   I:Dec:Decis:5
16  surrender  I:Req:Message:11   I:Dec:Decis:5
17  of         I:Req:Message:11   I:Dec:Decis:5
18  Belfort    I:Req:Message:11   I:Dec:Decis:5

Table 4: Example of corpus with B,I,O format
Table 5: Comparative overview of CRF-LU and biLSTM-MT models
https://clio-texte.clionautes.org/
Bibliographical References
Abeillé, A. and Barrier, N. (2004). Enriching a French treebank. In LREC.
Abeillé, A., Clément, L., and Toussenel, F. (2003). Building a treebank for French. Treebanks, pages 165-187.
Baker, C. F., Fillmore, C. J., and Lowe, J. B. (1998). The Berkeley FrameNet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics - Volume 1, ACL '98, pages 86-90, Stroudsburg, PA, USA. Association for Computational Linguistics.
Candito, M., Amsili, P., Barque, L., Benamara, F., de Chalendar, G., Djemaa, M., Haas, P., Huyghe, R., Mathieu, Y., Muller, P., Sagot, B., and Vieu, L. (2014). Developing a French FrameNet: Methodology and first results. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC).
Hakkani-Tür, D., Tur, G., Celikyilmaz, A., Chen, Y.-N., Gao, J., Deng, L., and Wang, Y.-Y. (2016). Multi-domain joint semantic frame parsing using bi-directional RNN-LSTM. In Proceedings of the 17th Annual Meeting of the International Speech Communication Association.
Hermann, K. M., Das, D., Weston, J., and Ganchev, K. (2014). Semantic frame identification with distributed word representations. In ACL (1), pages 1448-1458.
McCallum, A. and Li, W. (2003). Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, CONLL '03, pages 188-191, Stroudsburg, PA, USA. Association for Computational Linguistics.
Mesnil, G., Dauphin, Y., Yao, K., Bengio, Y., Deng, L., Hakkani-Tur, D., He, X., Heck, L., Tur, G., Yu, D., et al. (2015). Using recurrent neural networks for slot filling in spoken language understanding. IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP), 23(3):530-539.
Nasr, A., Bechet, F., and Rey, J.-F. (2010). Macaon : une chaîne linguistique pour le traitement de graphes de mots. In Traitement Automatique des Langues Naturelles - session de démonstrations, Montreal.
Nasr, A., Béchet, F., Rey, J.-F., Favre, B., and Le Roux, J. (2011). Macaon: An NLP tool suite for processing word lattices. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Systems Demonstrations, HLT '11, pages 86-91, Stroudsburg, PA, USA. Association for Computational Linguistics.
Tafforeau, J., Bechet, F., Artiere, T., and Favre, B. (2016). Joint syntactic and semantic analysis with a multitask deep learning framework for spoken language understanding. Interspeech 2016, pages 3260-3264.
| [] |
[
"Deep Active Learning for Text Classification with Diverse Interpretations",
"Deep Active Learning for Text Classification with Diverse Interpretations"
] | [
"Qiang Liu \nCenter for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences 3 RealAI\n\n",
"Yanqiao Zhu \nCenter for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences 3 RealAI\n\n",
"Zhaocheng Liu ",
"Yufeng Zhang yufeng.zhang@cripac.ia.ac.cn \nCenter for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n\n",
"Shu Wu shu.wu@nlpr.ia.ac.cn \nCenter for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n\n\nSchool of Artificial Intelligence\nUniversity of Chinese Academy of Sciences 3 RealAI\n\n",
"Qiang Liu ",
"Yanqiao Zhu ",
"Zhaocheng Liu ",
"Yufeng Zhang ",
"Shu Wu "
] | [
"Center for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences 3 RealAI\n",
"Center for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences 3 RealAI\n",
"Center for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n",
"Center for Research on Intelligent Perception and Computing\nInstitute of Automation\nChinese Academy of Sciences\n",
"School of Artificial Intelligence\nUniversity of Chinese Academy of Sciences 3 RealAI\n"
] | [
"Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM '21)"
] | Recently, Deep Neural Networks (DNNs) have made remarkable progress for text classification, which, however, still require a large number of labeled data. To train high-performing models with the minimal annotation cost, active learning is proposed to select and label the most informative samples, yet it is still challenging to measure informativeness of samples used in DNNs. In this paper, inspired by piece-wise linear interpretability of DNNs, we propose a novel Active Learning with DivErse iNterpretations (ALDEN) approach. With local interpretations in DNNs, ALDEN identifies linearly separable regions of samples. Then, it selects samples according to their diversity of local interpretations and queries their labels. To tackle the text classification problem, we choose the word with the most diverse interpretations to represent the whole sentence. Extensive experiments demonstrate that ALDEN consistently outperforms several state-of-the-art deep active learning methods. | 10.1145/3459637.3482080 | [
"https://arxiv.org/pdf/2108.10687v1.pdf"
] | 237,278,203 | 2108.10687 | b3afd948d82e5ada6c036e9bd017b8df7ff74e80 |
Deep Active Learning for Text Classification with Diverse Interpretations
ACM. Copyright ACM 2021. November 1-5, 2021
Qiang Liu
Center for Research on Intelligent Perception and Computing
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences 3 RealAI
Yanqiao Zhu
Center for Research on Intelligent Perception and Computing
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences 3 RealAI
Zhaocheng Liu
Yufeng Zhang yufeng.zhang@cripac.ia.ac.cn
Center for Research on Intelligent Perception and Computing
Institute of Automation
Chinese Academy of Sciences
Shu Wu shu.wu@nlpr.ia.ac.cn
Center for Research on Intelligent Perception and Computing
Institute of Automation
Chinese Academy of Sciences
School of Artificial Intelligence
University of Chinese Academy of Sciences 3 RealAI
Qiang Liu
Yanqiao Zhu
Zhaocheng Liu
Yufeng Zhang
Shu Wu
Deep Active Learning for Text Classification with Diverse Interpretations
Proceedings of the 30th ACM International Conference on Information and Knowledge Management (CIKM '21)
the 30th ACM International Conference on Information and Knowledge Management (CIKM '21), QLD, Australia; New York, NY, USA: ACM. November 1-5, 2021. DOI: 10.1145/3459637.3482080. KEYWORDS / CCS CONCEPTS: • Computing methodologies → Natural language processing; Active learning settings; Neural networks
Recently, Deep Neural Networks (DNNs) have made remarkable progress for text classification, which, however, still require a large number of labeled data. To train high-performing models with the minimal annotation cost, active learning is proposed to select and label the most informative samples, yet it is still challenging to measure informativeness of samples used in DNNs. In this paper, inspired by piece-wise linear interpretability of DNNs, we propose a novel Active Learning with DivErse iNterpretations (ALDEN) approach. With local interpretations in DNNs, ALDEN identifies linearly separable regions of samples. Then, it selects samples according to their diversity of local interpretations and queries their labels. To tackle the text classification problem, we choose the word with the most diverse interpretations to represent the whole sentence. Extensive experiments demonstrate that ALDEN consistently outperforms several state-of-the-art deep active learning methods.
INTRODUCTION
In recent years, Deep Neural Networks (DNNs) have achieved state-of-the-art supervised performance on numerous research tasks. Among them, a typical task in natural language processing is text classification, where deep models such as Convolutional Neural Networks (CNNs) [14] and Recurrent Neural Networks (RNNs) [28] are often adopted. However, such deep models require a large number of labeled samples, which are expensive and labor-intensive to obtain in real-world applications. Fortunately, active learning, which aims to identify and label the most informative samples from a pool of unlabeled data to train deep models with limited labels, is a promising approach to alleviate this problem [1,3,29,33].
Existing works on active learning mainly select samples based on uncertainty and diversity. Taking Expected Gradient Length (EGL) [12] as an example, it computes sample uncertainty as the norms of the gradients of losses with respect to the model parameters. Following EGL, EGL-Word [33] selects the word with the largest EGL among all samples to query its label, so as to maximize the model performance for text classification. In addition, Bayesian Active Learning by Disagreement (BALD) [6] measures uncertainty according to the probabilistic distribution of the model output via Bayesian inference, where an approximation by dropout is usually incorporated [5]. On the other hand, to measure the diversity of samples, some works define the active learning task as a CORESET problem [24] and use the embedding of the last layer in deep models as the representation of samples. There are also attempts to trade off between uncertainty and diversity [13,29]. For example, Batch Active learning by Diverse Gradient Embeddings (BADGE) [1] can be viewed as a combination of EGL and CORESET. Meanwhile, there are empirical experiments evaluating the above approaches on text classification [3,21,26,30].
Recently, the interpretability of DNNs has received increasing attention, with most works focusing on local piece-wise interpretability [2,22]. To be specific, previous works [2,10,17] investigate the local interpretability of DNNs and show that a deep model with piece-wise linear activations, e.g., Maxout [8] and the family of ReLU [7,18], can be regarded as a set of numerous local linear classifiers. The linearly separable regions corresponding to these linear classifiers can be determined by the local piece-wise interpretations in DNNs, which are calculated via gradient backpropagation [15,23,27,32] or feature perturbation [4,9]. In other words, samples used in a DNN can be divided into numerous linearly separable regions according to their local interpretations, and samples in the same linearly separable region are classified by the same local linear classifier [2]. Therefore, fitting a DNN model is roughly equivalent to fitting all the linear classifiers in the different linearly separable regions. Inspired by this, we propose to actively select samples in different linearly separable regions with maximally diverse local interpretations, so that the linear classifiers in the different linearly separable regions can all be well trained.
Figure 1: Illustrating local interpretations in DNNs. Panels: (a) data distribution; (b) clustering with CORESET [24]; (c) clustering with BADGE [1]; (d) clustering with local interpretations. We artificially generate a series of data samples that can be roughly divided into four linearly separable regions (shown as four triangle areas). We perform K-Means clustering on the example data, where the representations of samples are from CORESET [24] and BADGE [1], as well as the local interpretations in DNNs computed using Eq. (1). The clusters are shown in four different colors. It is seen that only with interpretations are we able to correctly identify the four linearly separable regions.

In this paper, we propose a novel Active Learning with DivErse iNterpretations (ALDEN) approach for text classification. In our
proposed approach, we first calculate the local interpretation in the DNN for each sample as the gradient backpropagated from the final predictions to the input features [15,23]. Then, we use the most diverse interpretation of words in a sample to measure its diversity. Accordingly, we select unlabeled samples with maximally diverse interpretations for labeling and retrain the model with these labeled samples. We conduct experiments on two text classification datasets, with two representative deep classifiers: CNN [14] and Bi-directional Long Short-Term Memory (BiLSTM). Extensive experimental results show that ALDEN consistently outperforms state-of-the-art deep active learning approaches.
LOCAL INTERPRETATIONS IN DEEP NEURAL NETWORKS
Recently, extensive works have been conducted to study the local piece-wise interpretability of DNNs, which can be computed using gradient backpropagation from the predictions to the input features [15,16,23,27,32]. To be specific, we first train a deep model and obtain the prediction $\hat{y}$ given the input features $x$ of a specific sample. Then, we can calculate the local interpretation as

$$a_x = \frac{\partial \hat{y}}{\partial x}. \quad (1)$$

As in Li et al. [15], local interpretations allow the prediction to be approximated by

$$\hat{y} \approx a_x^\top x + b, \quad (2)$$
where $b$ is the bias term. As mentioned in previous works [2,17,22], a DNN model with piece-wise linear activation functions (such as Maxout and ReLU [7,8,18]) can be regarded as a combination of numerous local linear classifiers, which are introduced by the local interpretations in the DNN. That is to say, the local interpretations of samples as calculated in Eq. (1) can be partitioned into several clusters, and each of them corresponds to a specific local linear classifier. With the local piece-wise interpretations in DNNs, samples can be divided into numerous linearly separable regions, and samples in the same linearly separable region are classified by the same local linear classifier [2]. Therefore, fitting a DNN model means fitting all the linear classifiers in the different linearly separable regions. Accordingly, if we select samples according to diverse local interpretations, the linear classifiers in different linearly separable regions can be optimized in a more balanced way, so that the corresponding DNN model can be better trained. Thus, we argue that adopting local interpretations in DNNs could potentially benefit deep active learning.

To demonstrate that local interpretations in DNNs can help promote deep active learning, we present a concrete example as shown in Figure 1, where example data are drawn from a probability distribution $P(y_i = 1 \mid x_i) = \sigma(x_{i,1} \cdot x_{i,2})$, where $x_{i,1}$ and $x_{i,2}$ are uniformly drawn from $[-5.0, 5.0]$, and $\sigma(\cdot)$ is the sigmoid function. The distribution of these artificially generated samples is shown in Figure 1a, which exhibits clear nonlinear characteristics. In addition, it is seen that there are roughly four linearly separable regions, corresponding to the four triangle areas. For these samples, we run K-Means clustering on the representations generated by CORESET [24] and BADGE [1], as well as local interpretations in a Multi-Layer Perceptron (MLP) model, all trained on the example data. We set the number of clusters in K-Means to 4 and present the results in Figures 1b, 1c, and 1d respectively. We can observe that CORESET focuses on the original feature distribution and different classes, while BADGE pays more attention to the decision boundaries. Clearly, we can only use local interpretations to distinguish the four linearly separable regions. Therefore, with the help of local interpretations in DNNs, we are able to identify samples in different linearly separable regions. Inspired by this observation, we propose a deep active learning strategy to better fit all the linear classifiers corresponding to the DNN model.
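To make this concrete, the following is a minimal PyTorch sketch of the synthetic experiment (our own illustration rather than code released with the paper): it trains a small ReLU network, a piece-wise linear model, on data drawn from $P(y=1 \mid x) = \sigma(x_1 \cdot x_2)$ and extracts the per-sample local interpretations of Eq. (1) as input gradients, which can then be clustered with K-Means as in Figure 1d.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: P(y = 1 | x) = sigmoid(x_1 * x_2), with x_1, x_2 ~ U[-5, 5]
x = torch.empty(2000, 2).uniform_(-5.0, 5.0)
y = torch.bernoulli(torch.sigmoid(x[:, 0] * x[:, 1]))

# A small ReLU network, i.e. a piece-wise linear model
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(500):
    opt.zero_grad()
    loss = loss_fn(model(x).squeeze(-1), y)
    loss.backward()
    opt.step()

# Local interpretations a_x = d(y_hat)/dx for every sample, as in Eq. (1).
# Summing the outputs lets a single backward call yield per-row gradients,
# since the samples do not interact in the forward pass.
x_req = x.clone().requires_grad_(True)
model(x_req).sum().backward()
interpretations = x_req.grad  # shape (2000, 2); feed these to K-Means
```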
THE PROPOSED ALDEN APPROACH
In this section, we introduce the ALDEN approach for text classification in detail.
Problem Formulation
In this work, we apply pool-based active learning in the batch mode [3,25,31,33]. Specifically, we have a small set of labeled samples $\mathcal{L}$ and a large set of unlabeled samples $\mathcal{U}$. Each sample $x_i \in \mathcal{L}$ is associated with a label $y_i$, while samples in $\mathcal{U}$ have no labels. The feature vector is denoted as $x_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,|x_i|})$, where $x_{i,j}$ is a word in the sample. With the labeled samples in $\mathcal{L}$, we can train a text classifier $f(x \mid \theta): \mathcal{X} \to \mathcal{Y}$. We need to develop an active learning strategy to select samples from $\mathcal{U}$ and add them to $\mathcal{L}$ for further training the classifier. We set the label budget to $b$ samples per iteration of sample selection and train the model for a total of $T$ iterations.
Approach Details
Regarding active learning for text classification, similar to Eq. (1), for a word $x_{i,j}$ in a specific sample $x_i$ used in a deep text classifier, we can compute its local interpretation as

$$a_{i,j} = \frac{\partial \hat{y}_i}{\partial x_{i,j}}, \quad (3)$$
where $\hat{y}_i$ is the prediction for sample $x_i$. Equivalently, we can also calculate Eq. (3) using the word embedding $e_{i,j}$ of word $x_{i,j}$:

$$a_{i,j} = \frac{\partial \hat{y}_i}{\partial e_{i,j}}. \quad (4)$$
Recall that the local interpretation of a word indicates its contribution to the final prediction; similar to Eq. (2), the prediction can be approximated [15] as
$$\hat{y}_i \approx \sum_{1 \le j \le |x_i|} a_{i,j}^\top e_{i,j} + b. \quad (5)$$
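As a hedged sketch of how Eqs. (3)-(5) can be computed in practice, the snippet below substitutes a toy mean-pooling classifier for the CNN/BiLSTM models used later in the paper; the gradient of the prediction with respect to each word embedding gives $a_{i,j}$, and the dot products $a_{i,j}^\top e_{i,j}$ give the per-word contribution terms of Eq. (5).

```python
import torch
import torch.nn as nn

# Toy text classifier: embedding -> mean pooling -> linear layer
vocab_size, emb_dim = 100, 16
embedding = nn.Embedding(vocab_size, emb_dim)
classifier = nn.Linear(emb_dim, 1)

token_ids = torch.tensor([[3, 17, 42, 7]])  # one sample x_i with four words

# Keep the embedded sequence so we can differentiate with respect to it
embedded = embedding(token_ids)             # shape (1, 4, emb_dim)
embedded.retain_grad()
y_hat = classifier(embedded.mean(dim=1)).squeeze()
y_hat.backward()

a = embedded.grad                           # a_{i,j} = d(y_hat)/d(e_{i,j}), Eq. (4)
contrib = (a * embedded).sum(dim=-1)        # a_{i,j}^T e_{i,j} terms of Eq. (5)
```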
Note that local interpretations (i.e., the contributions to the model predictions) of the same word may differ among different samples, due to the complex nonlinear feature interactions modeled by deep models [2,17,22]. As discussed in Section 2, we need to select samples with diverse local interpretations, so that linear classifiers in different linearly separable regions can all be well optimized. Meanwhile, since diverse interpretations indicate different decision regions in the deep model, samples with the maximally diverse interpretations can provide the most comprehensive information to learn the diverse decision logic of the deep model. For the task of text classification, as different samples consist of various numbers of words, we need to start by analyzing the local interpretations of the words in each sample. In particular, we calculate the interpretation diversity of a word $x_{i,j}$ compared to the same word appearing in labeled samples as
$$d(\mathcal{L}, x_{i,j}) = \min_{x_k \in \mathcal{L},\, 1 \le l \le |x_k|,\, x_{k,l} = x_{i,j}} \lVert a_{i,j} - a_{k,l} \rVert, \quad (6)$$
which is similar to the distance calculation in the greedy K-Center algorithm [24]. However, some words may not appear in the labeled samples, which makes it infeasible to directly calculate Eq. (6). As a remedy, we search for the word in the labeled samples with the most similar embedding as the neighbor, which is formulated as
$$n(\mathcal{L}, x_{i,j}) = \operatorname*{argmin}_{x_k \in \mathcal{L},\, 1 \le l \le |x_k|} \lVert e_{i,j} - e_{k,l} \rVert. \quad (7)$$
Then, we can rewrite Eq. (6) as
$$d(\mathcal{L}, x_{i,j}) = \min_{x_k \in \mathcal{L},\, 1 \le l \le |x_k|,\, x_{k,l} = n(\mathcal{L}, x_{i,j})} \lVert a_{i,j} - a_{k,l} \rVert. \quad (8)$$
Sentences contain varying numbers of words, which makes it difficult to directly use the local interpretations of all words in a sample. Therefore, we adopt a pooling strategy for active learning. Recall that in EGL-Word [33], the word with the largest EGL is used to represent the whole sentence. Aligning with EGL-Word, we also use the word with the maximally diverse interpretation to represent the whole sample for active learning. Formally, for a sample $x_i \in \mathcal{U}$, we have
$$d(\mathcal{L}, x_i) = \max_{1 \le j \le |x_i|} d(\mathcal{L}, x_{i,j}). \quad (9)$$
Based on the metric calculated using Eq. (9), we can select the unlabeled sample that has the maximally diverse interpretation for labeling:
$$x^{*} = \operatorname*{argmax}_{x_i \in \mathcal{U}} d(\mathcal{L}, x_i). \quad (10)$$
Since we are given a budget of $b$ in each iteration, we repeat the above process $b$ times to select and label $b$ samples. Algorithm 1 summarizes the training procedure of the ALDEN approach.
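For illustration, the selection step of Eqs. (6)-(10) can be sketched as follows. This is an unoptimized reference sketch with our own function and variable names; it assumes word embeddings and interpretations have already been computed for every word.

```python
import numpy as np

def alden_select(unlabeled, labeled, budget):
    """Select `budget` samples with maximally diverse interpretations.

    Each sample is a list of (word_id, embedding, interpretation) triples.
    """
    lab_words = [w for s in labeled for (w, _, _) in s]
    lab_embs = np.stack([e for s in labeled for (_, e, _) in s])
    lab_ints = np.stack([a for s in labeled for (_, _, a) in s])

    scores = []
    for sample in unlabeled:
        word_divs = []
        for (w, e, a) in sample:
            idx = [k for k, lw in enumerate(lab_words) if lw == w]
            if not idx:
                # Word unseen in the labeled pool: use the word with the
                # nearest embedding as the neighbor, Eq. (7)
                nearest = np.argmin(np.linalg.norm(lab_embs - e, axis=1))
                idx = [k for k, lw in enumerate(lab_words)
                       if lw == lab_words[nearest]]
            # Minimum interpretation distance to that word's labeled
            # occurrences, Eqs. (6) and (8)
            word_divs.append(np.linalg.norm(lab_ints[idx] - a, axis=1).min())
        scores.append(max(word_divs))  # max-pooling over words, Eq. (9)

    # Take the top-`budget` samples by diversity, Eq. (10)
    return list(np.argsort(scores)[::-1][:budget])
```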
EXPERIMENTS
In this section, we empirically evaluate our proposed ALDEN approach on the task of text classification.
Baseline Approaches
To evaluate the effectiveness of ALDEN, we compare it with the following approaches: • RND is a simple baseline which randomly selects samples in each iteration.
• EGL-Word [33] is an extension of EGL [12], which utilizes norms of gradients to measure uncertainty for the task of text classification.
• BALD [11] is an uncertainty-based approach based on Bayesian inference. We apply dropout approximation [5,6] in our experiments, where the dropout rate is set to 0.5.
• CORESET [24] uses the embeddings of the last layer in the DNN as the sample representations.
• BADGE [1] can be viewed as a combination of EGL and CORESET.
Experimental Settings
To evaluate the performance of ALDEN, we use two sentence classification datasets¹: Subj [19] and MR [20], which contain 5000 and 5331 positive samples and 5000 and 5331 negative samples, respectively. In our experiments, we use accuracy as the evaluation metric. We run each approach 10 times and report the median of the results. We randomly select 60%, 20%, and 20% of the samples in each dataset for training, validation, and testing, respectively. We train a word2vec² model on each dataset to initialize the word embeddings and set the hidden dimensionality to 100. We use two deep models, BiLSTM and CNN, for comprehensive evaluation. For the implementation of BiLSTM, we use a single bidirectional LSTM layer with 100 hidden units. For the implementation of CNN, we set the filter sizes to (3, 4, 5) and set the hidden dimension to 100 as well. In both BiLSTM and CNN, we apply the ReLU activation, and the dropout rate is set to 0.5. We use 2% of the samples in the training set as the initial seed labeled set. Furthermore, we label 2% of the samples in the training set during each iteration until 50% of the samples in the training set have been labeled. In other words, we set $T$ to 24 and $b$ to 2% of the training samples for each dataset.
Results and Analysis
We present the learning curves of performance with different ratios of labeled samples in Figure 2. It is seen from the figure that in most cases, active learning approaches outperform random selection, which demonstrates the necessity of deep active learning. EGL-Word and BALD perform similarly, and both slightly outperform CORESET and BADGE. Meanwhile, it is clear that ALDEN consistently outperforms the other compared approaches, which is demonstrated especially in the middle parts of the learning curves. Additionally, in Table 1 we report the normalized area under curve scores of the learning curves in Figure 2. This metric evaluates the global performance of each compared approach, and it is evident that ALDEN achieves the best performance. In summary, these results strongly demonstrate the advantages of our proposed ALDEN approach.
CONCLUSION
In this paper, inspired by the local piece-wise interpretability of DNNs, we introduce the linearly separable regions of samples to the problem of deep active learning. For the task of text classification, we propose a novel ALDEN approach, which selects and labels samples according to the diverse interpretations of unlabeled samples. Specifically, we use the most diverse interpretation of words in a sample to measure the sample diversity. Experimental results on two text classification datasets with CNN and BiLSTM as classifiers show that the ALDEN approach is able to consistently outperform state-of-the-art deep active learning approaches.
Algorithm 1: The ALDEN approach
Data: Labeled samples $\mathcal{L}$, unlabeled samples $\mathcal{U}$, budget $b$ in each iteration, and the number of iterations $T$.
1. Train an initial model $f(x \mid \theta_0)$ on $\mathcal{L}$.
2. For $t = 1, 2, \ldots, T$:
   For each unlabeled sample $x_i \in \mathcal{U}$ and each word $x_{i,j}$:
      Find the neighbor $n(\mathcal{L}, x_{i,j})$ of word $x_{i,j}$ according to Eq. (7);
      Compute the diversity $d(\mathcal{L}, x_{i,j})$ of the local interpretations of $x_{i,j}$ according to Eq. (8);
   Compute the diversity $d(\mathcal{L}, x_i)$ of the local interpretation of $x_i$ according to Eq. (9);
   Select and label $b$ samples with the maximally diverse interpretations according to Eq. (10), moving them from $\mathcal{U}$ to $\mathcal{L}$;
   Train a new model $f(x \mid \theta_t)$ on $\mathcal{L}$.
3. Return the final model $f(x \mid \theta_T)$.
Figure 2: Learning curves in terms of accuracy of compared approaches with various labeling rates of training samples.
Table 1: Normalized area under curve scores of learning curves. The larger the values, the better the performances.

Model      Subj (BiLSTM)  Subj (CNN)  MR (BiLSTM)  MR (CNN)
RND        0.688          0.658       0.531        0.594
EGL-Word   0.750          0.775       0.644        0.650
BALD       0.757          0.773       0.645        0.658
CORESET    0.752          0.764       0.612        0.659
BADGE      0.744          0.767       0.619        0.641
ALDEN      0.803          0.814       0.700        0.746
1 http://www.cs.cornell.edu/people/pabo/movie-review-data/
2 https://code.google.com/archive/p/word2vec/
ACKNOWLEDGMENTS
Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. Jordan T Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, Alekh Agarwal, ICLR. Jordan T. Ash, Chicheng Zhang, Akshay Krishnamurthy, John Langford, and Alekh Agarwal. 2020. Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds. In ICLR.
Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution. Lingyang Chu, Xia Hu, Juhua Hu, Lanjun Wang, Jian Pei, Lingyang Chu, Xia Hu, Juhua Hu, Lanjun Wang, and Jian Pei. 2018. Exact and Consistent Interpretation for Piecewise Linear Neural Networks: A Closed Form Solution. In KDD. 1244-1253.
Active Learning for BERT: An Empirical Study. Alon Liat Ein-Dor, Ariel Halfon, Eyal Gera, Lena Shnarch, Leshem Dankin, Marina Choshen, Ranit Danilevsky, Yoav Aharonov, Noam Katz, Slonim, Liat Ein-Dor, Alon Halfon, Ariel Gera, Eyal Shnarch, Lena Dankin, Leshem Choshen, Marina Danilevsky, Ranit Aharonov, Yoav Katz, and Noam Slonim. 2020. Active Learning for BERT: An Empirical Study. In EMNLP. 7949-7962.
Interpretable Explanations of Black Boxes by Meaningful Perturbation. C Ruth, Andrea Fong, Vedaldi, ICCV. Ruth C. Fong and Andrea Vedaldi. 2017. Interpretable Explanations of Black Boxes by Meaningful Perturbation. In ICCV. 3449-3457.
Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. Yarin Gal, Zoubin Ghahramani, ICML. Yarin Gal and Zoubin Ghahramani. 2016. Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning. In ICML. 1050-1059.
Deep Bayesian Active Learning with Image Data. Yarin Gal, Riashat Islam, Zoubin Ghahramani, ICML. Yarin Gal, Riashat Islam, and Zoubin Ghahramani. 2017. Deep Bayesian Active Learning with Image Data. In ICML. 1183-1192.
Deep Sparse Rectifier Neural Networks. Xavier Glorot, Antoine Bordes, Yoshua Bengio, AISTATS. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Deep Sparse Rectifier Neural Networks. In AISTATS. 315-323.
Ian J Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C Courville, Yoshua Bengio, Maxout Networks. In ICML. Ian J. Goodfellow, David Warde-Farley, Mehdi Mirza, Aaron C. Courville, and Yoshua Bengio. 2013. Maxout Networks. In ICML. 1319-1327.
Towards a Deep and Unified Understanding of Deep Neural Models in NLP. Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, Xing Xie, Chaoyu Guan, Xiting Wang, Quanshi Zhang, Runjin Chen, Di He, and Xing Xie. 2019. Towards a Deep and Unified Understanding of Deep Neural Models in NLP. In ICML. 2454-2463.
Nearly-tight VC-dimension bounds for piecewise linear neural networks. Nick Harvey, Christopher Liaw, Abbas Mehrabian, In COLT. Nick Harvey, Christopher Liaw, and Abbas Mehrabian. 2017. Nearly-tight VC-dimension bounds for piecewise linear neural networks. In COLT. 1064-1068.
Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, Máté Lengyel, arXiv:1112.5745Bayesian Active Learning for Classification and Preference Learning. arXiv.org. Neil Houlsby, Ferenc Huszar, Zoubin Ghahramani, and Máté Lengyel. 2011. Bayesian Active Learning for Classification and Preference Learning. arXiv.org (2011). arXiv:1112.5745
Active Learning for Speech Recognition: the Power of Gradients. Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh, Adam Coates, arXiv:1612.03226Jiaji Huang, Rewon Child, Vinay Rao, Hairong Liu, Sanjeev Satheesh, and Adam Coates. 2016. Active Learning for Speech Recognition: the Power of Gradients. arXiv.org (2016). arXiv:1612.03226
Active Learning by Querying Informative and Representative Examples. Sheng-Jun Huang, Rong Jin, Zhi-Hua Zhou, IEEE Trans. Pattern Anal. Mach. Intell. 36. Sheng-Jun Huang, Rong Jin, and Zhi-Hua Zhou. 2014. Active Learning by Querying Informative and Representative Examples. IEEE Trans. Pattern Anal. Mach. Intell. 36, 10 (2014), 1936-1949.
Convolutional Neural Networks for Sentence Classification. Yoon Kim, EMNLP. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In EMNLP. 1746-1751.
Visualizing and Understanding Neural Models in NLP. Jiwei Li, Xinlei Chen, Eduard H Hovy, Dan Jurafsky, HLT-NAACL. Jiwei Li, Xinlei Chen, Eduard H. Hovy, and Dan Jurafsky. 2016. Visualizing and Understanding Neural Models in NLP. In HLT-NAACL. 681-691.
Mining Cross Features for Financial Credit Risk Assessment. Qiang Liu, Zhaocheng Liu, Haoli Zhang, Yuntian Chen, CIKM. Qiang Liu, Zhaocheng Liu, Haoli Zhang, Yuntian Chen, and Jun Zhu. 2021. Mining Cross Features for Financial Credit Risk Assessment. In CIKM.
On the Number of Linear Regions of Deep Neural Networks. Guido F Montúfar, Razvan Pascanu, Kyunghyun Cho, Yoshua Bengio, NeurIPS. Guido F. Montúfar, Razvan Pascanu, KyungHyun Cho, and Yoshua Bengio. 2014. On the Number of Linear Regions of Deep Neural Networks. In NeurIPS. 2924-2932.
Rectified Linear Units Improve Restricted Boltzmann Machines. Vinod Nair, Geoffrey E Hinton, ICML. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified Linear Units Improve Restricted Boltzmann Machines. In ICML. 807-814.
A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. Bo Pang, Lillian Lee, ACL. Bo Pang and Lillian Lee. 2004. A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts. In ACL. 271-278.
Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales. Bo Pang, Lillian Lee, ACL. Bo Pang and Lillian Lee. 2005. Seeing Stars: Exploiting Class Relationships for Sentiment Categorization with Respect to Rating Scales. In ACL. 115-124.
Sampling Bias in Deep Active Classification: An Empirical Study. Ameya Prabhu, Charles Dognin, Maneesh Singh, EMNLP/IJCNLP. Ameya Prabhu, Charles Dognin, and Maneesh Singh. 2019. Sampling Bias in Deep Active Classification: An Empirical Study. In EMNLP/IJCNLP. 4056-4066.
Why Should I Trust You?": Explaining the Predictions of Any Classifier. Sameer Marco Túlio Ribeiro, Carlos Singh, Guestrin, Marco Túlio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. In KDD. 1135-1144.
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Ramprasaath R Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra, Int. J. Comput. Vis. 128. Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, and Dhruv Batra. 2020. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. Int. J. Comput. Vis. 128, 2 (2020), 336-359.
Active Learning for Convolutional Neural Networks: A Core-Set Approach. Ozan Sener, Silvio Savarese, ICLR. Ozan Sener and Silvio Savarese. 2018. Active Learning for Convolutional Neural Networks: A Core-Set Approach. In ICLR.
Active Learning Literature Survey. Burr Settles, University of Wisconsin-Madison Department of Computer SciencesTechnical ReportBurr Settles. 2009. Active Learning Literature Survey. Technical Report. University of Wisconsin-Madison Department of Computer Sciences.
Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study. Aditya Siddhant, Zachary C Lipton, Aditya Siddhant and Zachary C. Lipton. 2018. Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study. In EMNLP. 2904-2909.
Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B Viégas, Martin Wattenberg, arXiv:1706.03825. SmoothGrad: removing noise by adding noise. arXiv.org. Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda B. Viégas, and Martin Wattenberg. 2017. SmoothGrad: removing noise by adding noise. arXiv.org (2017). arXiv:1706.03825
Convolutional Recurrent Neural Networks for Text Classification. Ruishuang Wang, Zhao Li, Jian Cao, Tong Chen, Lei Wang, Ruishuang Wang, Zhao Li, Jian Cao, Tong Chen, and Lei Wang. 2019. Convolutional Recurrent Neural Networks for Text Classification. In IJCNN. 1-6.
Querying Discriminative and Representative Samples for Batch Mode Active Learning. Zheng Wang, Jieping Ye, ACM Trans. Knowl. Discov. Data. 923Zheng Wang and Jieping Ye. 2015. Querying Discriminative and Representative Samples for Batch Mode Active Learning. ACM Trans. Knowl. Discov. Data 9, 3 (2015), 17:1-17:23.
Active Learning with Query Generation for Cost-Effective Text Classification. Yifan Yan, Sheng-Jun Huang, Shaoyi Chen, Meng Liao, Jin Xu, AAAI. Yifan Yan, Sheng-Jun Huang, Shaoyi Chen, Meng Liao, and Jin Xu. 2020. Active Learning with Query Generation for Cost-Effective Text Classification. In AAAI. 6583-6590.
Active Learning for Wireless IoT Intrusion Detection. Kai Yang, Jie Ren, Yanqiao Zhu, Weiyi Zhang, IEEE Wireless Communications. 25Kai Yang, Jie Ren, Yanqiao Zhu, and Weiyi Zhang. 2018. Active Learning for Wireless IoT Intrusion Detection. IEEE Wireless Communications 25, 6 (Dec. 2018), 19-25.
Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods. Yongjun Hao Yuan, Xia Chen, Shuiwang Hu, Ji, AAAI. Hao Yuan, Yongjun Chen, Xia Hu, and Shuiwang Ji. 2019. Interpreting Deep Models for Text Analysis via Optimization and Regularization Methods. In AAAI. 5717-5724.
Active Discriminative Text Representation Learning. Ye Zhang, Matthew Lease, Byron C Wallace, AAAI. Ye Zhang, Matthew Lease, and Byron C. Wallace. 2017. Active Discriminative Text Representation Learning. In AAAI. 3386-3392.
| [] |
[
"Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities",
"Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities"
] | [
"Benjamin Hsu benhsu@amazon.com \nAWS AI Labs\n\n",
"Graham Horwood ghorwood@amazon.com \nAWS AI Labs\n\n"
] | [
"AWS AI Labs\n",
"AWS AI Labs\n"
] | [
"Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies"
] | Identifying related entities and events within and across documents is fundamental to natural language understanding. We present an approach to entity and event coreference resolution utilizing contrastive representation learning. Earlier state-of-the-art methods have formulated this problem as a binary classification problem and leveraged large transformers in a cross-encoder architecture to achieve their results. For large collections of documents and a corresponding set of n mentions, the necessity of performing n^2 transformer computations in these earlier approaches can be computationally intensive. We show that it is possible to reduce this burden by applying contrastive learning techniques that only require n transformer computations at inference time. Our method achieves state-of-the-art results on a number of key metrics on the ECB+ corpus and is competitive on others. | 10.18653/v1/2022.naacl-main.267 | [
"https://www.aclanthology.org/2022.naacl-main.267.pdf"
] | 248,987,160 | 2205.11438 | 0d9240f01aefba64a0704e0eeeecac96d60203c8 |
Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities
July 10-15, 2022
Benjamin Hsu benhsu@amazon.com
AWS AI Labs
Graham Horwood ghorwood@amazon.com
AWS AI Labs
Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesJuly 10-15, 2022
Identifying related entities and events within and across documents is fundamental to natural language understanding. We present an approach to entity and event coreference resolution utilizing contrastive representation learning. Earlier state-of-the-art methods have formulated this problem as a binary classification problem and leveraged large transformers in a cross-encoder architecture to achieve their results. For large collections of documents and a corresponding set of n mentions, the necessity of performing n^2 transformer computations in these earlier approaches can be computationally intensive. We show that it is possible to reduce this burden by applying contrastive learning techniques that only require n transformer computations at inference time. Our method achieves state-of-the-art results on a number of key metrics on the ECB+ corpus and is competitive on others.
Introduction
Coreference resolution is the fundamental NLP task of finding all mentions that refer to the same real-world entity or event in text. It is an important step for higher-level NLP tasks involving natural language understanding, such as text summarization (Azzam et al., 1999), information extraction (Zelenko et al., 2004), and question answering (Vicedo and Ferrández, 2000). Historically, coreference resolution of entities within the same text document, known as within-document (WD) coreference resolution, has received the most attention, though more recently focus has moved toward cross-document (CD) coreference resolution.
CD coreference resolution has recently gained renewed interest for its application in multi-document analysis tasks. CD coreference resolution presents unique challenges not found in the WD context. Spans of text come from different documents without any inherent linear order, and there is no notion that antecedents for a given expression typically occur before the expression, as in a single document. Coreferent expressions also cannot be assumed to occur near one another. Furthermore, documents are assumed to be authored independently and about different, though lexically similar, topics. For instance, the events described in the sentences from topic 19 in Table 1 below are not coreferential, despite their lexical similarity ("killed").
Another important aspect of CD coreference resolution is the potential scale of the problem. In certain applications, the number of documents can be large and ever growing. In particular, for applications that merge information from across documents, such as multi-document summarization (Falke et al., 2017) or multi-hop question answering (Dhingra et al., 2018), the corpus in question can be both large and dynamically increasing in size.
Past methods of CD coreference resolution have treated the problem as a binary classification task: given a pair of mentions, classify them as referring to the same entity or not (Bejan and Harabagiu, 2010; Yang et al., 2015; Huang et al., 2019; Kenyon-Dean et al., 2018). In more recent works, contextual embeddings using a cross-encoder architecture have been leveraged to obtain state-of-the-art results (Yu et al., 2020; Zeng et al., 2020; Caciularu et al., 2021) on the ECB+ corpus. Despite achieving state-of-the-art results on the benchmark dataset, a shortcoming of these approaches is the fact that they use a transformer as a cross-encoder: two sentences are passed through the transformer network and a label is predicted. For n mentions in a corpus, these approaches require n^2 comparisons at inference time. As Reimers and Gurevych (2019) noted when using BERT in a cross-encoder architecture, finding the most similar pair of sentences in a collection of n = 10000 sentences requires n(n − 1)/2 = 49 995 000 inference computations, which they estimated to take 65 hours using a V100 GPU.

Table 1: Examples of cross-document coreference clusters from topic 19 of the ECB+ corpus. Bold text indicates events, and the same color indicates that they belong in the same coreference cluster. The addition of a lexically similar second subtopic (riots in Greece over a teenager's death vs. riots in Brooklyn over a teenager's death) adds an additional challenge to the ECB+ corpus.

Subtopic 1: "INITIAL results from the post-mortem on a 15-year-old Greek boy whose killing by police sparked five days of rioting show Alexandros Grigoropoulos died from a bullet ricochet."
Subtopic 1: "Fresh riots were reported in Greece on Saturday December 13 2008 in protest at the killing by police of a 15-year-old boy, Alexandros Grigoropoulos, eight days ago."
Subtopic 2: "Yesterday, the police explained that officers shot and killed a 16-year-old Kimani Gray in Brooklyn because he allegedly pointed a gun at the cops."
Subtopic 2: "Riots Erupt Following Death of Brooklyn Teen Killed By Police"
Others have sought to address the quadratic scaling of these methods. Recently, Allaway et al. (2021);Cattan et al. (2021a) introduced methods that require n transformer passes. In this work, we introduce a method using contrastive learning to generate mention representations that are useful for the coreference resolution problem. Previous attempts along these lines by Kenyon-Dean et al. (2018) introduced clustering-oriented regularization terms in the loss function. Our method improves on these earlier methods on the benchmark dataset, and achieves results competitive with the more expensive methods of Yu et al. (2020); Zeng et al. (2020);Caciularu et al. (2021). We conduct extensive ablations of our model which we discuss in §4.5. We discuss applications to domains outside of the ECB+ corpus in §4.6.
Related Work
Most recent work on CD coreference resolution has focused on the ECB+ corpus (Cybulska and Vossen, 2014), which we also use in this work. The ECB+ corpus, which is an extension of the Event Coreference Bank (ECB), consists of documents from Google News clustered into topics and annotated for event coreference (Bejan and Harabagiu, 2010). ECB+ increases the difficulty level of the original ECB dataset by adding a second set of documents for each topic (subtopic), discussing a different event of the same type (e.g., riots in Greece over a teenager's death vs. riots in Brooklyn over a teenager's death; see Table 1) (Cybulska and Vossen, 2014). While relatively small, the corpus is representative of common cross-document coreference use cases across a restricted set of related documents (i.e., results from a search query).
Most approaches to CD coreference resolution address the problem as a binary classification problem between all pairs of events and entities. Early works utilized hand-engineered lexical features (e.g., head lemma, word embedding similarities, etc.) (Bejan and Harabagiu, 2010; Yang et al., 2015). More recent works have relied on neural network methods, utilizing character-based embeddings (Huang et al., 2019; Kenyon-Dean et al., 2018) or contextual embeddings (Yu et al., 2020; Cattan et al., 2020; Zeng et al., 2020; Caciularu et al., 2021; Allaway et al., 2021). Recent approaches by Yu et al. (2020) and Caciularu et al. (2021) leveraging RoBERTa and Longformer transformer models have set strong benchmarks. A drawback of these approaches is the necessity to consider all pairs of the n mentions in a corpus in a cross-encoder architecture: each unique pair of entities (separated by a special token) is passed through a transformer to generate a similarity score. This requires n^2 transformer computations.
This can be computationally expensive, and several works have sought to address it. Allaway et al. (2021) introduced a model that clusters mentions sequentially at inference time. They achieved competitive results using a BERT-base model and without using a hierarchical clustering algorithm to generate coreference chains. Cattan et al. (2021a) adapted the model of Lee et al. (2017) to the cross-document context. Specifically, they pruned document spans down to the gold mentions and encoded each resulting pared document using a RoBERTa-large model. A pairwise (feed-forward network) scorer then generates a score for each pair of spans. They also considered an end-to-end system where they use their model to predict mention spans instead of using gold mentions. In this work, we consider gold mentions only, as has been done in earlier works.
In this work, we introduce a method leveraging contrastive learning with a RoBERTa-large model as the base encoder. At inference time, our method requires n passes of the transformer, like the earlier methods of Allaway et al. (2021) and Cattan et al. (2021a). Our method surpasses their methods on the benchmark ECB+ dataset and is competitive with the more expensive cross-encoder approaches of Yu et al. (2020), Zeng et al. (2020), and Caciularu et al. (2021).
Methodology
Dataset
We follow earlier works and use the ECB+ corpus, an extension of the Event Coreference Bank (ECB), which was discussed in the previous section. Following earlier works (Yu et al., 2020; Cattan et al., 2020; Caciularu et al., 2021; Allaway et al., 2021), we adopt the setup of Cybulska and Vossen (2015). This setup uses a subset of the annotations which has been validated for correctness and allocates a larger portion of the dataset for training. In this setup, we use topics 1-35 as the train set, setting aside topics 2, 5, 12, 18, 21, 23, 34, 35 for hyperparameter tuning, and 36-45 as the test set. To preprocess mentions, we utilized the reference implementation from Cattan et al. (2020). The distribution of the train, test, and development sets can be seen in Table 2.
Model
We propose a model to learn embeddings useful for clustering events and entities. Our model leverages a Siamese neural network (Bromley et al., 1993) to fine-tune a RoBERTa-large encoder (see Figure 1). We train and evaluate our model using gold mentions as opposed to predicted mentions in order to focus on the cross-document coreference resolution problem. At inference time, our model generates embeddings for the mentions, which are then clustered using an agglomerative clustering algorithm, as was done previously by Barhom et al. (2019); Yu et al. (2020); Cattan et al. (2020); Caciularu et al. (2021); Zeng et al. (2020). Below we discuss details of our methodology and training procedure.

Document Context Following Caciularu et al. (2021), we use the observation that other parts of the document provide valuable context to the mentions in question. We extract and encode the first two sentences from the document. This takes advantage of the fact that the articles are news articles and, in many cases, much of the relevant information is summarized at the beginning of the document. In most cases, these two sentences are the headline and dateline for the article. In cases where the sentence in question is one of the first two sentences, we take the next sentence in the document.
Contextual Embedding In addition to the document context, we also utilize the sentence that the mention appears in and annotate its location in the sentence using [E] and [/E] tokens. The two sequences are concatenated together using a [SEP] token (see Figure 1). In total, we keep 128 word piece tokens and, in cases where the combined input exceeds this, we remove tokens from the end of the context before removing tokens from the sentence containing the mention. This combined sequence is encoded using a RoBERTa-large model (Liu et al., 2019), as shown in Figure 1. We fine-tune all layers of the RoBERTa-large model. RoBERTa produces a representation vector for each token of the input sequence. We then sum the token-level representations of the mention element-wise and use this as the representation of the mention, $v_e$. Additionally, we utilize the first token of the sequence, $v_{cls}$, as the embedding for the entire document context and mention. Each of these contextual embeddings is passed separately through a multi-layer perceptron (MLP). We found that a hidden layer dimension of 1024 for both MLPs worked well in our experiments.
$$\tilde{v}_e = \mathrm{MLP}_1(v_e); \qquad \tilde{v}_{cls} = \mathrm{MLP}_2(v_{cls}). \quad (1)$$
The final representation for mention $i$ and its context document is given by the concatenation of the two output vectors, denoted $[\,\cdot\,;\,\cdot\,]$:
$$v_i = [\tilde{v}_{cls};\, \tilde{v}_e]. \quad (2)$$
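To make the encoder concrete, below is a minimal sketch using the HuggingFace transformers library. The [E]/[/E] markers, the 128-token budget, and the 1024-dimensional MLPs follow the description above, while the tokenizer handling, the ReLU activation in the MLPs, and all variable names are our own assumptions rather than the authors' released code.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-large")
tokenizer.add_tokens(["[E]", "[/E]"])         # mention boundary markers
encoder = RobertaModel.from_pretrained("roberta-large")
encoder.resize_token_embeddings(len(tokenizer))

mlp_mention = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())
mlp_cls = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU())

context = "Fresh riots were reported in Greece on Saturday."
sentence = "Protests followed the [E] killing [/E] of a teenager by police."
enc = tokenizer(sentence, context, truncation=True, max_length=128,
                return_tensors="pt")
hidden = encoder(**enc).last_hidden_state[0]  # (seq_len, 1024)

# Sum the token vectors between the [E] ... [/E] markers to obtain v_e
ids = enc["input_ids"][0].tolist()
start = ids.index(tokenizer.convert_tokens_to_ids("[E]")) + 1
end = ids.index(tokenizer.convert_tokens_to_ids("[/E]"))
v_e = hidden[start:end].sum(dim=0)
v_cls = hidden[0]                             # first token as the context vector

# Final 2048-dimensional mention representation, Eq. (2)
v_i = torch.cat([mlp_cls(v_cls), mlp_mention(v_e)], dim=-1)
```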
At inference time, our model takes in the mention and its context (both the head of the document and its sentence) and generates a 2048-dimensional embedding $v_i$. A clustering algorithm is applied to the embeddings to generate coreference clusters. In order to compare our language model with earlier approaches, we follow earlier works and use an agglomerative clustering model. We use the implementation from scikit-learn¹ and cluster mention representations using the cosine distance metric. Representations within an average threshold distance τ are considered to be in the same cluster (i.e., coreferent).
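A minimal sketch of the clustering step with scikit-learn follows; the threshold value below is a placeholder rather than the tuned τ, and in scikit-learn releases before 1.2 the `metric` keyword is named `affinity`.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

embeddings = np.random.randn(50, 2048)  # stand-in for real mention vectors v_i
tau = 0.5                               # placeholder clustering threshold

clustering = AgglomerativeClustering(
    n_clusters=None,                    # let the threshold set the cluster count
    metric="cosine",                    # cosine distance between mention vectors
    linkage="average",                  # merge clusters within average distance tau
    distance_threshold=tau,
)
cluster_ids = clustering.fit_predict(embeddings)  # coreference cluster per mention
```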
Training
To train the model, we consider pairs of sentences: positive samples are pairs of sentences where the mentions are coreferential, while negative samples are pairs of sentences where the mentions are not coreferential. Pairs of sentences were chosen from within gold topics and were constructed by first computing the similarity between sequences. This focuses our model on learning features that distinguish between the two closely related subtopics, one of the key aspects of the ECB+ corpus.

Table 3: Statistics for the contrastive pairs generated. Pairs of sentences were chosen from within gold topics and were constructed by first computing the similarity between sequences. Negative samples were down-sampled by selecting samples whose similarity was greater than the median similarity among all possible sample pairs.

We used SBERT (Reimers and Gurevych, 2019) to embed these sequences initially. Positive pairs were created from sequences that were least similar to one another, and negative pairs were selected from the set of pairs most similar to one another, both within a particular subtopic and across subtopics (but still within the same topic). Finally, the negative samples were down-sampled by selecting samples whose similarity was greater than the median similarity among all possible positive sample pairs. The resulting distribution for the pairs can be seen in Table 3. The model parameters were then trained using a Siamese network architecture (Chopra et al., 2005) where model weights are shared across both branches. For a given pair of sentences $p = (s_1, s_2)$ and label $y \in \{0, 1\}$, where $y = 1$ if the pairs are coreferences and $y = 0$ otherwise, each pair of sentences is encoded using our model. The model was trained by minimizing the contrastive loss (Hadsell et al., 2006), as implemented by Reimers and Gurevych (2019),
$$\ell = y \cdot d(i,j)^2 + (1 - y) \cdot \max\bigl(0,\, m - d(i,j)\bigr)^2. \quad (3)$$

For our purposes, $d(i,j) = 1 - \cos(v_i, v_j)$ is the cosine distance, $m > 0$ is a margin, and $y$ is one if the pairs describe coreferent mentions and zero otherwise. Dissimilar pairs contribute to the loss function only if their distance is within $m$. The loss pushes the embeddings so that positive pairs are closer together in the embedding space and negative pairs are pushed to be more distant than the margin $m$.
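Written out in PyTorch, the loss of Eq. (3) amounts to a few lines; the sketch below is our own paraphrase of the sentence-transformers implementation cited above.

```python
import torch.nn.functional as F

def contrastive_loss(v_i, v_j, y, margin=0.5):
    """Contrastive loss of Eq. (3) with cosine distance.

    v_i, v_j: (batch, dim) mention representations from the two branches;
    y: (batch,) with 1 for coreferent pairs and 0 otherwise.
    """
    d = 1.0 - F.cosine_similarity(v_i, v_j, dim=-1)  # d(i, j)
    pos = y * d.pow(2)                               # pull positives together
    neg = (1 - y) * F.relu(margin - d).pow(2)        # push negatives beyond m
    return (pos + neg).mean()
```

The pair-mining procedure described above can likewise be sketched; the SBERT model name, the toy sentences, and the use of the positive-pair median are illustrative assumptions rather than details from the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Toy stand-ins for mention sentences within one gold topic and their
# gold coreference cluster ids (assumed inputs, not ECB+ data)
sentences = ["Police shot a teenager in Brooklyn.",
             "Officers killed a 16-year-old in Brooklyn.",
             "A boy was killed by police in Athens."]
cluster_ids = [0, 0, 1]

model = SentenceTransformer("all-MiniLM-L6-v2")  # any SBERT encoder works here
emb = model.encode(sentences, normalize_embeddings=True)
sim = emb @ emb.T                                # cosine similarity matrix

pos, neg = [], []
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        (pos if cluster_ids[i] == cluster_ids[j] else neg).append((i, j, sim[i, j]))

pos.sort(key=lambda t: t[2])                     # hardest (least similar) positives
neg.sort(key=lambda t: -t[2])                    # hardest (most similar) negatives
median_pos = np.median([s for (_, _, s) in pos])
neg = [p for p in neg if p[2] > median_pos]      # down-sample easy negatives
```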
Hyperparameters
In our experiments, we used the AdamW optimizer without warmup and found that a batch size of 16 worked well. We utilized Ray (Liaw et al., 2018) for hyperparameter tuning, and specifically the Bayesian optimization search algorithm from scikit-optimize.² We performed our experiments on a p3dn.24xlarge instance with 8 V100 Tensor Core GPUs and chose the dropout rate, learning rate, contrastive margin m, and clustering threshold τ to optimize the CoNLL F1 score on the development set gold topics. This was done to learn representations that address the lexical ambiguity in the ECB+ corpus topics. Resulting hyperparameters can be found in
Results and Discussion
We evaluate our model using four different measures, as is common in earlier works. Specifically, we evaluated our model performance using the MUC (Vilain et al., 1995), B^3 (Bagga and Baldwin, 1998), CEAF-e (Luo, 2005), and LEA (Moosavi and Strube, 2016) metrics. We also evaluate our model using the CoNLL F1, the average of the MUC, B^3, and CEAF-e F1 scores. As a baseline, we also show results from a lemma model that takes each span in question and utilizes spaCy³ to lemmatize each token. Mentions are clustered based on whether their lemmatized tokens are exact matches or not. Evaluations on the ECB+ test corpus are not without controversy, and we discuss these subtleties in detail below. For the reader familiar with these issues, our main results are discussed in §4.2 and §4.3. We also conduct an ablation study with results in §4.5.
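The lemma baseline described above can be reproduced in a few lines; this sketch assumes the spaCy English model `en_core_web_sm` is installed and that mention spans are given as plain strings.

```python
import spacy
from collections import defaultdict

nlp = spacy.load("en_core_web_sm")

def lemma_clusters(mentions):
    """Group mention spans whose lemmatized tokens match exactly."""
    clusters = defaultdict(list)
    for i, span in enumerate(mentions):
        key = " ".join(tok.lemma_.lower() for tok in nlp(span))
        clusters[key].append(i)
    return list(clusters.values())

# Mentions whose lemmatized forms coincide land in the same cluster
print(lemma_clusters(["killed", "kills", "riots", "rioting"]))
```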
Evaluation Settings
Many earlier methods leveraged an initial document clustering (Yu et al., 2020; Zeng et al., 2020; Caciularu et al., 2021; Allaway et al., 2021). As observed by Barhom et al. (2019) and Upadhyay et al. (2016), clustering the documents as a preprocessing step and performing pairwise classification on mentions within each cluster provides a strong baseline. Barhom et al. (2019) introduced a K-Means algorithm to cluster documents using TF-IDF scores of the unigrams, bigrams, and trigrams, where K is chosen by utilizing the silhouette coefficient method (Rousseeuw, 1987). Models are then applied to mentions within each cluster.
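A sketch of this pre-clustering pipeline with scikit-learn follows (our own reconstruction of the described procedure; the n-gram range matches the text, while the candidate range for K is an assumption).

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import silhouette_score

def cluster_documents(docs, k_candidates=range(2, 20)):
    """TF-IDF K-Means document pre-clustering, choosing K by silhouette score."""
    tfidf = TfidfVectorizer(ngram_range=(1, 3)).fit_transform(docs)
    best = (None, -1.0, None)            # (k, score, labels)
    for k in k_candidates:
        if k >= len(docs):               # silhouette needs k < n_samples
            break
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(tfidf)
        score = silhouette_score(tfidf, labels)
        if score > best[1]:
            best = (k, score, labels)
    return best[0], best[2]
```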
However, this approach has come under criticism (Cremisini and Finlayson, 2020;Cattan et al., 2021a,b). Detractors note that, because of the high lexical similarity between documents within the same subtopic, pre-clustering methods are able to produce near perfectly predicted subtopics, especially in the ECB+ corpus, where only a few coreference links are found across different subtopics. Document clustering is not expected to perform as well in realistic settings where coreferent mentions can spread over multiple topics (Cattan et al., 2021a). More importantly, this bypasses the intention behind the inclusion of subtopics in ECB+ and avoids challenging the coreference models on lexical ambiguity (Cybulska and Vossen, 2014).
In our view, evaluation utilizing the original topic clusters ("gold" topics) is more in line with the original intent of Cybulska and Vossen (2014) and more indicative of realistic settings (Cattan et al., 2021b). We discuss results (1) using ECB+ topics ("gold topics" henceforth) as the initial document clustering and (2) using no initial document clustering ("corpus level" henceforth) in section §4.2. We find that our methodology improves on earlier methods (Tables 5 and 9). Finally, because a majority of earlier works evaluate their models using predicted topics, we discuss our model performance under this setting in §4.3. We report results from a single run.
Gold Topics and Corpus Level
We evaluate our models using the ECB+ topics, in line with the intent of Cybulska and Vossen (2014) and earlier works by Cattan et al. (2021a,b). According to those authors, this setting was designed to approximate an unclustered stream of news articles. Additionally, as noted by Cattan et al. (2020, 2021a), the presence of singletons biases the results towards models that perform well on detecting all the mentions instead of predicting coreference clusters. Furthermore, when using gold mentions in the evaluation (as we do here), including singletons artificially inflates performance metrics (Cattan et al., 2021a). We present our results without singletons (Table 5) using the reference implementation of Moosavi and Strube (2016). In Appendix A, we give results with singletons in Table 9.
On the gold topic and corpus level subsets, our model performs well. In all cases, we surpass the current state-of-the-art model on the CoNLL F1 metric for both event and entity coreference resolution by large margins without singletons (see Table 5). We suspect this improvement to be a feature of contrastive learning and the methodology we used to choose pairs: coreferential mentions are pushed closer together in the embedding space while mentions that are not coreferences are pushed further apart. We do observe a larger drop in performance in going from gold topics to the corpus level subsets. This is due to the choice of contrastive pairs, where negative examples come from the same gold topic.
Aside from improved performance, our methodology differs in some key aspects from the recent works by Cattan et al. (2020, 2021a,b). Their methodology also leverages a RoBERTa-large model to embed documents, but breaks long documents into 512 word piece token chunks. The authors used as the feature vector for a span in question the sum of the span embeddings, the embeddings for the span beginning and end, and a vector encoding the span length, which they feed into a pairwise classifier to generate pairwise scores. We, on the other hand, use the sentence containing the span in question and additional context sentences from the document, keeping a total of 128 word piece tokens. This additional context from the document, despite keeping fewer tokens, accounts for much of the performance gain. This is discussed in further detail in §4.5.
Predicted Topic Clusters
We compare our model against the majority of earlier works that used predicted topic clusters and gold mentions (see Table 6 and Appendix A Table 8 for more complete results). We used the reference implementation by Pradhan et al. (2014) to score our models with singletons. Our model is competitive with earlier approaches (Yu et al., 2020; Zeng et al., 2020; Caciularu et al., 2021), despite using significantly fewer resources at inference time: n transformer computations as opposed to n^2 transformer computations. We also note that, in contrast to our approach, Caciularu et al. (2021) rely on a Longformer-based cross-encoder, and our model improves over that of Allaway et al. (2021) in CoNLL F1 on average. We note, however, that their model used a BERT-base model and that they also introduced a novel sequential clustering approach. Our methodology used the larger RoBERTa-large model, and we utilized an agglomerative clustering algorithm as in previous works. Finally, in contrast to earlier works, we note that our model performs equally well when using predicted clusters and ECB+ gold topics. In fact, our model does better (by 0.9 CoNLL F1 points) on entities when going to gold topics, and achieves the same performance on events using gold topics. This is related to how we selected our contrastive pairs: negative and positive pairs were selected from within each topic, and so our model focused on the lexical ambiguity in the ECB+ corpus.
Training and Inference Time
Our model is larger than the earlier models of Cattan et al. (2021a,b) and Allaway et al. (2021). On a single V100 Tensor Core GPU with 32 GB of RAM, training took approximately two days. This is comparable to the times reported for the cross-encoder model (using Longformer) by Caciularu et al. (2021). We note that contrastive learning methods have been found to converge slowly (Sohn, 2016). At inference time, our model requires only a single transformer pass per mention to produce the embeddings that are clustered. We found that their model takes approximately 60 seconds under similar settings. In §4.5 we discuss experiments with smaller models.
Table 7: Ablation results, with columns reporting F1 and ∆ for Entities and Events.
Ablations
We ablate several parts of our model using the headlines heuristic and examine the importance of the underlying language model, the token representations, and the document context.
Language Model
We examined the effect different representations have on overall performance by ablating the language model used. We found that the larger and richer representations of the RoBERTa-large model performed better generically. We gained on average 5 CoNLL F1 points in using RoBERTa-large versus BERT-large. We gained on average 7.2 CoNLL F1 points versus the smaller BERT-base model. Details can be found in Table 6.
Token Representation To assess the effect of including the CLS token embedding in the final representations, we trained our model without using its representation, but keeping the mention representation. We find that the CLS representation accounts for roughly 1.3 CoNLL F1 points on average while the mention representation accounts for roughly 2.8 CoNLL F1 points on average (see Table 7 for details). We also examined our model without explicitly using the mention representation, but still tagging the span with [E], [/E] tokens. For our model, we find that the mention representation was a more important factor when considering events. We speculate that tagging the mention location with [E], [/E] tokens allows the transformer to attend to the mention. For events, which have a more complicated structure (e.g. arguments) this likely has a more important effect.
Document Context
Finally, an important component of our model was including the first two sentences of each document, in the spirit of Caciularu et al. (2021). For the ECB+ corpus, which is comprised of news articles, much contextual information is contained in the first two sentences of the document. We see that the document context contributes on average 5.4 CoNLL F1 points (see Table 7 for details). This is in line with our expectations for news articles and with earlier observations by Caciularu et al. (2021). We suspect the importance of this feature is due to a property of the ECB+ corpus that has been highlighted by others, namely that the documents form fairly distinct clusters in themselves, so that simple document embeddings are able to recover subtopics easily (Cattan et al., 2021a,b; Cremisini and Finlayson, 2020). Note, for instance, that our model without document context is still competitive (compare with Eirew et al. (2021)). We plan to discuss these results in further detail in future work.
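A minimal sketch of this context construction, under our own assumptions about the window size (the text above specifies only the two leading sentences and the 128 word piece budget):

```python
def build_context(doc_sents, i, marked_sent, n_extra=1):
    # First two sentences of the document (headline-style context), then the
    # mention sentence (with [E]...[/E] markers already inserted) flanked by
    # up to n_extra neighbouring sentences; n_extra is our assumption.
    # Truncation to 128 word piece tokens is left to the tokenizer.
    left = doc_sents[max(0, i - n_extra):i]
    right = doc_sents[i + 1:i + 1 + n_extra]
    return " ".join(doc_sents[:2] + left + [marked_sent] + right)
```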
TextRank
A limitation of the current work is its specificity to formal text (i.e. news articles and Wikipedia articles). Given the importance of the headlines to our model, we also conducted experiments using the TextRank algorithm (Mihalcea and Tarau, 2004) to extract the sentences that best summarize the content of the article, instead of using the first two. We expect this method to be more applicable to less formal settings. We embedded each sentence in the document using SBERT and selected the top two. On average, we found that the headlines heuristic provided a 4.6 and 3.7 CoNLL F1 gain on event and entity coreference resolution respectively (with singletons) over the TextRank-extracted contexts (for detailed metrics see Table 8 in Appendix A). This is expected in the ECB+ context, as the TextRank algorithm selects noisier sentences than article headlines.
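For reference, here is a sketch of how such a TextRank selection can be reproduced. The SBERT model name and the PageRank-over-cosine-similarity formulation are our choices; the paper only states that sentences were embedded with SBERT and the top two selected.

```python
# Illustrative re-creation of the TextRank variant described above: sentences
# are embedded with SBERT, a similarity graph is ranked with PageRank, and
# the top two sentences replace the headline heuristic.
import networkx as nx
from sentence_transformers import SentenceTransformer, util

sbert = SentenceTransformer("all-MiniLM-L6-v2")   # example model choice

def top_two_sentences(sentences):
    emb = sbert.encode(sentences, convert_to_tensor=True)
    sim = util.cos_sim(emb, emb).cpu().numpy()    # pairwise similarities
    graph = nx.from_numpy_array(sim)              # weighted sentence graph
    scores = nx.pagerank(graph, weight="weight")  # centrality per sentence
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [sentences[i] for i in ranked[:2]]
```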
Conclusions
In this paper, we proposed a new model for within- and cross-document coreference resolution. We demonstrated that contrastive learning approaches are effective at learning representations for coreference resolution. We evaluated our model on gold topics and at the corpus level of the ECB+ corpus, with and without singleton mentions, and found that our approach surpasses current state-of-the-art methods by large margins. We also evaluated our models with an initial document clustering method and found that our model was competitive with earlier works. We presented extensive ablations of our model and discussed limitations of our work, including model size, training time, application to formal text domains (i.e. news articles and Wikipedia), and the use of agglomerative clustering to generate final coreference clusters. Interesting directions for future work would be testing the TextRank algorithm in less formal contexts (i.e. beyond news articles and Wikipedia articles), investigating higher-order tuples (e.g. triplets) to speed up model convergence, and extending our work to predicted mentions as opposed to gold mentions, as has been done by others (Cattan et al., 2021a,b).
[Figure: example structure of ECB+ Topic 19 with Subtopic 1 and Subtopic 2; sample headline: "Riots Erupt Following Death of Brooklyn Teen Killed By Police".]
Table 2: Statistics for the ECB+ corpus. We followed the setup of Cybulska and Vossen (2015) and used topics 36-45 for our test set and topics 1-35 for training, with topics 2, 5, 12, 18, 21, 23, 34, 35 set aside in the development set for hyperparameter tuning.
1 https://scikit-learn.org

                               Events   Entities
# of Pairs                      19000      27090
# of Positive                    2085       4078
# of Negatives                  16915      23012
# of Same Subtopic              13694      18847
# of Different Subtopic          5306       8243
Fraction Positive                0.11       0.15
Fraction Same Subtopic           0.72       0.70
Median pos. similarity score     0.62       0.59
Median neg. similarity score     0.80       0.77
                          Events   Entities
Epochs                       100         50
Learning rate               2e-7       2e-7
Batch Size                    16         16
Contrastive margin, m       0.40       0.70
Clustering Threshold, τ      0.2        0.2

Table 4: Hyperparameters for our best performing models on events and entities.
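For intuition, below is a minimal sketch of a margin-based contrastive loss in the spirit of Hadsell et al. (2006), using the margin m from Table 4. The cosine-distance formulation is our assumption; the paper's exact loss is not reproduced here.

```python
# Illustrative pairwise contrastive loss with margin m (not the authors' code).
import torch
import torch.nn.functional as F

def contrastive_loss(v1, v2, y, m=0.40):
    """v1, v2: (batch, dim) pair representations; y = 1 for coreferent pairs."""
    d = 1.0 - F.cosine_similarity(v1, v2)          # cosine distance (assumed)
    pos = y * d.pow(2)                             # pull positives together
    neg = (1 - y) * F.relu(m - d).pow(2)           # push negatives past margin m
    return (pos + neg).mean()
```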
Table 5: Combined within- and cross-document coreference scores for entities and events without singletons, using gold mentions. Gold topics use the ECB+ topics as the initial document pre-clustering, while corpus-level results do not use any document pre-clustering. Bold values indicate best overall for a particular data subset.
Scaling  Encoder        System                   MUC F1  B3 F1  CEAF-e F1  CoNLL F1

Events
n^2      -              Baseline                   76.7   77.5       73.2      75.7
         BERT-large     Zeng et al. (2020)         87.5   83.2       82.3      84.3
         RoBERTa-large  Yu et al. (2020)           86.6   85.4       81.3      84.4
         Longformer     Caciularu et al. (2021)    88.1   86.4       82.2      85.6
n        RoBERTa-large  Cattan et al. (2021a)      83.5   82.4       77.0      81.0
         BERT-base      Allaway et al. (2021)      82.2   81.1       79.1      80.8
         RoBERTa-large  Ours                       85.6   84.8       79.6      83.3
         RoBERTa-base   Ours                       84.0   82.4       79.0      81.8
         BERT-large     Ours                       82.8   82.3       77.9      81.0
         BERT-base      Ours                       79.8   79.4       74.4      77.9

Entities
n^2      -              Baseline                   70.7   61.7       56.9      63.1
         Longformer     Caciularu et al. (2021)    89.9   82.1       76.8      82.9
n        RoBERTa-large  Cattan et al. (2021a)      83.6   72.7       63.1      73.1
         BERT-base      Allaway et al. (2021)      84.3   72.4       69.2      75.3
         RoBERTa-large  Ours                       87.1   80.3       73.1      80.2
         RoBERTa-base   Ours                       83.6   74.1       68.5      75.4
         BERT-large     Ours                       80.8   71.4       66.2      72.8
         BERT-base      Ours                       78.2   68.9       62.7      69.9

Table 6: A comparison of methods utilizing contextual embedding models and their performance on the ECB+ test corpus using predicted topic clusters of Barhom et al. (2019). We have indicated the scaling at inference time (in terms of transformer computations) above. We have also indicated whether systems utilized adaptive pre-training (Adapt.), fine-tuned encoders (Fine-tuned), or utilized a semantic role labelling model (SRL). To better compare to earlier works, we have included results from using different encoders in our model and indicated which encoders were used in earlier works. Finally, Allaway et al. (2021) used a sequential clustering algorithm whereas ours and Cattan et al. (2020) utilized an agglomerative clustering algorithm. Bold indicates best overall. Underlined results indicate our best overall.
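A sketch of the agglomerative clustering step mentioned in the caption, assuming precomputed pairwise distances and the threshold τ = 0.2 from Table 4; the linkage choice is our assumption.

```python
# Illustrative clustering of mentions into coreference clusters.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def cluster_mentions(dist: np.ndarray, tau: float = 0.2) -> np.ndarray:
    """dist: symmetric (n, n) pairwise distance matrix over mentions."""
    clusterer = AgglomerativeClustering(
        n_clusters=None, distance_threshold=tau,
        metric="precomputed",      # named `affinity` in older scikit-learn
        linkage="average")         # linkage is an assumption on our part
    return clusterer.fit_predict(dist)   # one cluster id per mention
```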
Table 7: Ablation results (CoNLL F1) on the ECB+ test set with singletons.
Saliha Azzam, Kevin Humphreys, and Robert Gaizauskas. 1999. Using coreference chains for text summarization. In Coreference and Its Applications.

A. Bagga and B. Baldwin. 1998. Algorithms for scoring coreference chains. In The first international conference on language resources and evaluation workshop on linguistics coreference, volume 1, pages 563-566.

Shany Barhom, Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and Ido Dagan. 2019. Revisiting joint modeling of cross-document entity and event coreference resolution. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4179-4189, Florence, Italy. Association for Computational Linguistics.

Cosmin Bejan and Sanda Harabagiu. 2010. Unsupervised event coreference resolution with rich linguistic features. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1412-1422, Uppsala, Sweden. Association for Computational Linguistics.

Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Säckinger, and Roopak Shah. 1993. Signature verification using a "siamese" time delay neural network. In Proceedings of the 6th International Conference on Neural Information Processing Systems, NIPS'93, pages 737-744, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Avi Caciularu, Arman Cohan, Iz Beltagy, Matthew Peters, Arie Cattan, and Ido Dagan. 2021. CDLM: Cross-document language modeling. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 2648-2662, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2020. Streamlining cross-document coreference resolution: Evaluation and modeling.

Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2021a. Cross-document coreference resolution over predicted mentions. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 5100-5107, Online. Association for Computational Linguistics.

Arie Cattan, Alon Eirew, Gabriel Stanovsky, Mandar Joshi, and Ido Dagan. 2021b. Realistic evaluation principles for cross-document coreference resolution. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, pages 143-151, Online. Association for Computational Linguistics.

S. Chopra, R. Hadsell, and Y. LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), volume 1, pages 539-546.

Andres Cremisini and Mark Finlayson. 2020. New insights into cross-document event coreference: Systematic comparison and a simplified approach. In Proceedings of the First Joint Workshop on Narrative Understanding, Storylines, and Events, pages 1-10, Online. Association for Computational Linguistics.

Agata Cybulska and P. Vossen. 2014. Using a sledgehammer to crack a nut? Lexical diversity and event coreference resolution. In LREC.

Agata Cybulska and Piek Vossen. 2015. Translating granularity of event slots into features for event coreference resolution. In Proceedings of the 3rd Workshop on EVENTS: Definition, Detection, Coreference, and Representation, pages 1-10, Denver, Colorado. Association for Computational Linguistics.

Bhuwan Dhingra, Qiao Jin, Zhilin Yang, William Cohen, and Ruslan Salakhutdinov. 2018. Neural models for reasoning over multiple mentions using coreference. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 42-48, New Orleans, Louisiana. Association for Computational Linguistics.
System | MUC (R P F1) | B3 (R P F1) | CEAF-e (R P F1) | LEA (R P F1) | CoNLL F1

Events
Baseline | 72.5 81.1 76.6 | 69.6 87.4 77.5 | 77.9 69 73.2 | 55.63 72.9 63.1 | 75.7
Zeng et al. (2020) | 85.6 89.3 87.5 | 77.6 89.7 83.2 | 84.5 80.1 82.3 | - - - | 84.3
Yu et al. (2020) | 88.1 85.1 86.6 | 86.1 84.7 85.4 | 79.6 83.1 81.3 | - - - | 84.4
Caciularu et al. (2021) | 87.1 89.2 88.1 | 84.9 87.9 86.4 | 83.3 81.2 82.2 | 76.7 77.2 76.9 | 85.6
Cattan et al. (2021a) | 85.1 81.9 83.5 | 82.1 82.7 82.4 | 75.2 78.9 77 | 68.8 72 70.4 | 81
Allaway et al. (2021) | 81.7 82.8 82.2 | 80.8 81.5 81.1 | 79.8 78.4 79.1 | - - - | 80.8
Ours - RoBERTa-large | 87.9 83.4 85.6 | 86.2 83.4 84.8 | 76.9 82.4 79.6 | 74.1 74.2 74.1 | 83.3
Ours - RoBERTa-base | 83.6 84.5 84.0 | 78.9 86.1 82.4 | 79.5 78.5 79.0 | 67.1 75.8 71.2 | 81.8
Ours - BERT-large | 82.9 82.7 82.8 | 81.3 83.4 82.3 | 77.8 78.0 77.9 | 68.9 72.5 70.6 | 81.0
Ours - BERT-base | 80.3 79.3 79.8 | 78.0 80.9 79.4 | 73.8 75.0 74.4 | 63.4 68.8 66.0 | 77.9
Ours - RoBERTa-large + TextRank | 80.0 83.6 81.8 | 76.9 86.4 81.4 | 78.6 74.7 76.6 | 64.1 74.3 68.8 | 79.9

Entities
Baseline | 58.7 88.6 70.7 | 46.2 93.1 61.7 | 79.7 44.2 56.9 | 35.6 68.2 46.8 | 63.1
Caciularu et al. (2021) | 88.1 91.8 89.9 | 82.5 81.7 82.1 | 81.2 72.9 76.8 | 76.4 73 74.7 | 82.9
Cattan et al. (2021a) | 85.7 81.7 83.6 | 70.7 74.8 72.7 | 59.3 67.4 63.1 | 56.8 65.8 61 | 73.1
Allaway et al. (2021) | 83.9 84.7 84.3 | 74.5 70.5 72.4 | 70 68.1 69.2 | - - - | 75.3
Ours - RoBERTa-large | 83.1 91.6 87.1 | 72.2 90.4 80.3 | 81.1 66.5 73.1 | 63.7 79.3 70.6 | 80.2
Ours - RoBERTa-base | 77.2 91.1 83.6 | 61.6 92.8 74.1 | 81.0 59.4 68.5 | 52.2 79.2 62.9 | 75.4
Ours - BERT-large | 72.8 90.7 80.8 | 58.1 92.7 71.4 | 81.8 55.6 66.2 | 49.0 76.6 59.7 | 72.8
Ours - BERT-base | 69.9 88.7 78.2 | 55.5 90.9 68.9 | 78.5 52.2 62.7 | 45.0 72.3 55.5 | 69.9
Ours - RoBERTa-large + TextRank | 75.6 91.2 82.7 | 59.1 93.2 72.3 | 80.8 57.4 67.1 | 49.6 78.9 60.9 | 74.1

Table 8: Detailed results comparing methods utilizing contextual embedding models and their performance on the ECB+ test corpus using predicted topic clusters. Note that the systems of Zeng et al. (2020), Yu et al. (2020), and Caciularu et al. (2021) require significantly more resources than the others (n^2 versus n transformer computations). Finally, Allaway et al. (2021) uses a BERT-base model and a sequential clustering algorithm whereas ours and Cattan et al. (2020) utilize RoBERTa-large models and an agglomerative clustering algorithm.
System | MUC (R P F1) | B3 (R P F1) | CEAF-e (R P F1) | LEA (R P F1) | CoNLL F1

Events
Gold Topics
Baseline | 72.9 72.4 72.7 | 69.7 73.5 71.5 | 71.1 71.7 71.4 | 53.5 59.2 56.1 | 71.9
Cattan et al. (2021b) | 80.1 76.3 78.1 | 77.4 71.7 74.5 | 73.1 77.8 75.4 | 62.9 59.1 61 | 76
Ours | 87.8 82.9 85.3 | 86.5 83.1 84.8 | 76.9 82.8 79.7 | 74.4 74.0 74.2 | 83.3
Corpus
Baseline | 72.9 60.5 66.1 | 69.7 56.4 62.4 | 51.5 68.6 58.8 | 45.3 42.6 43.9 | 62.4
Kenyon-Dean et al. (2018) † | 67 71 69 | 71 67 69 | 71 67 69 | - - - | 69
Ours | 86.4 74.9 80.2 | 85.3 67.9 75.6 | 65.3 80.1 71.9 | 68.3 57.5 62.4 | 75.9

Entities
Gold Topics
Baseline | 61.6 85.9 71.8 | 48.6 89 62.9 | 76.7 45.9 57.4 | 37.3 65.5 47.5 | 64
Cattan et al. (2021a) | - - - | - - - | - - - | - - - | 70.9
Ours | 84.5 90.1 87.2 | 79.3 86.6 82.8 | 78.7 68.6 73.3 | 70.3 75.7 72.9 | 81.1
Corpus
Baseline | 61.9 77.5 68.8 | 48.7 79.6 60.4 | 68.2 46.1 55 | 35.2 57.8 43.7 | 61.4
Ours | 83.9 86.6 85.2 | 78.5 82.7 80.5 | 73.0 67.9 70.4 | 67.8 71.7 69.7 | 78.7

Table 9: Combined within- and cross-document coreference scores for entities and events with singletons, using gold mentions. Gold topics use the ECB+ topics as the initial document pre-clustering, while corpus-level results do not use any document pre-clustering. We note that the system proposed by Kenyon-Dean et al. (2018) does not use contextual embeddings, whereas ours and Cattan et al. (2021a) make use of RoBERTa-large. To the best of our knowledge, we have the only results at the corpus level for entities. Bold values indicate best overall for a particular data subset.
2 https://github.com/scikit-optimize/scikit-optimize
3 https://spacy.io/
Acknowledgements

The authors thank the anonymous reviewers for their advice and comments.

Ethical Considerations

In this work, we used the ECB+ corpus (Cybulska and Vossen, 2014), which consists of news articles from the open domain. Our use was consistent with the intended use of the dataset. Our model does not contain any intentional biases. As discussed in §3.4 and §4.4, we ran our experiments on a single p3dn.24xlarge with 8 V100 32GB GPUs. Model training and inference was relatively short and does not present ethical issues.
Emily Allaway, Shuai Wang, and Miguel Ballesteros. 2021. Sequential cross-document coreference resolution. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 4659-4671, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.

Alon Eirew, Arie Cattan, and Ido Dagan. 2021. WEC: Deriving a large-scale cross-document event coreference dataset from Wikipedia. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2498-2510, Online. Association for Computational Linguistics.

Tobias Falke, Christian M. Meyer, and Iryna Gurevych. 2017. Concept-map-based multi-document summarization using concept coreference resolution and global importance optimization. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 801-811, Taipei, Taiwan. Asian Federation of Natural Language Processing.

R. Hadsell, S. Chopra, and Y. LeCun. 2006. Dimensionality reduction by learning an invariant mapping. In 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'06), volume 2, pages 1735-1742.

Yin Jou Huang, Jing Lu, Sadao Kurohashi, and Vincent Ng. 2019. Improving event coreference resolution by learning argument compatibility from unlabeled data. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 785-795, Minneapolis, Minnesota. Association for Computational Linguistics.

Kian Kenyon-Dean, Jackie Chi Kit Cheung, and Doina Precup. 2018. Resolving event coreference with supervised representation learning and clustering-oriented regularization. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, pages 1-10, New Orleans, Louisiana. Association for Computational Linguistics.

Kenton Lee, Luheng He, Mike Lewis, and Luke Zettlemoyer. 2017. End-to-end neural coreference resolution. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 188-197, Copenhagen, Denmark. Association for Computational Linguistics.

Richard Liaw, Eric Liang, Robert Nishihara, Philipp Moritz, Joseph E. Gonzalez, and Ion Stoica. 2018. Tune: A research platform for distributed model selection and training. arXiv preprint arXiv:1807.05118.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Xiaoqiang Luo. 2005. On coreference resolution performance metrics. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 25-32, Vancouver, British Columbia, Canada. Association for Computational Linguistics.

Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.

Nafise Sadat Moosavi and Michael Strube. 2016. Which coreference evaluation metric do you trust? A proposal for a link-based entity aware metric. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 632-642, Berlin, Germany. Association for Computational Linguistics.

Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Eduard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted mentions: A reference implementation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 30-35, Baltimore, Maryland. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.

Peter J. Rousseeuw. 1987. Silhouettes: A graphical aid to the interpretation and validation of cluster analysis. Journal of Computational and Applied Mathematics, 20:53-65.

Kihyuk Sohn. 2016. Improved deep metric learning with multi-class n-pair loss objective. In Advances in Neural Information Processing Systems, volume 29. Curran Associates, Inc.

Shyam Upadhyay, Nitish Gupta, Christos Christodoulopoulos, and Dan Roth. 2016. Revisiting the evaluation for cross document event coreference. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 1949-1958, Osaka, Japan. The COLING 2016 Organizing Committee.

José L. Vicedo and Antonio Ferrández. 2000. Importance of pronominal anaphora resolution in question answering systems. In Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics, pages 555-562, Hong Kong. Association for Computational Linguistics.

Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th Conference on Message Understanding, MUC6 '95, pages 45-52, USA. Association for Computational Linguistics.

Bishan Yang, Claire Cardie, and Peter Frazier. 2015. A hierarchical distance-dependent Bayesian model for event coreference resolution. Transactions of the Association for Computational Linguistics, 3:517-528.

Xiaodong Yu, Wenpeng Yin, and Dan Roth. 2020. Paired representation learning for event and entity coreference.

Dmitry Zelenko, Chinatsu Aone, and Jason Tibbetts. 2004. Coreference resolution for information extraction. In Proceedings of the Conference on Reference Resolution and Its Applications, pages 24-31, Barcelona, Spain. Association for Computational Linguistics.

Yutao Zeng, Xiaolong Jin, Saiping Guan, Jiafeng Guo, and Xueqi Cheng. 2020. Event coreference resolution with their paraphrases and argument-aware embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3084-3094, Barcelona, Spain (Online). International Committee on Computational Linguistics.
| [
"https://github.com/scikit-optimize/scikit-optimize"
] |
[
"Copenhagen at CoNLL-SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding",
"Copenhagen at CoNLL-SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding"
] | [
"Yova Kementchedjhieva \nUniversity of Copenhagen\nUniversity of Copenhagen\nUniversity of Copenhagen\n\n",
"Johannes Bjerva bjerva@di.ku.dk \nUniversity of Copenhagen\nUniversity of Copenhagen\nUniversity of Copenhagen\n\n",
"Isabelle Augenstein augenstein@di.ku.dk \nUniversity of Copenhagen\nUniversity of Copenhagen\nUniversity of Copenhagen\n\n"
] | [
"University of Copenhagen\nUniversity of Copenhagen\nUniversity of Copenhagen\n",
"University of Copenhagen\nUniversity of Copenhagen\nUniversity of Copenhagen\n",
"University of Copenhagen\nUniversity of Copenhagen\nUniversity of Copenhagen\n"
] | [] | This paper documents the Team Copenhagen system which placed first in the CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection, Task 2 with an overall accuracy of 49.87. Task 2 focuses on morphological inflection in context: generating an inflected word form, given the lemma of the word and the context it occurs in. Previous SIGMORPHON shared tasks have focused on context-agnostic inflection-the "inflection in context" task was introduced this year. We approach this with an encoder-decoder architecture over character sequences with three core innovations, all contributing to an improvement in performance: (1) a wide context window; (2) a multi-task learning approach with the auxiliary task of MSD prediction; (3) training models in a multilingual fashion. | 10.18653/v1/k18-3011 | [
"https://www.aclweb.org/anthology/K18-3011.pdf"
] | 52,164,624 | 1809.01541 | c3f66d33b97ca4f7b642cb15981a61d8e034bde7 |
Copenhagen at CoNLL-SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding
Yova Kementchedjhieva
University of Copenhagen
University of Copenhagen
University of Copenhagen
Johannes Bjerva bjerva@di.ku.dk
University of Copenhagen
University of Copenhagen
University of Copenhagen
Isabelle Augenstein augenstein@di.ku.dk
University of Copenhagen
University of Copenhagen
University of Copenhagen
Copenhagen at CoNLL-SIGMORPHON 2018: Multilingual Inflection in Context with Explicit Morphosyntactic Decoding
Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 93-98, Brussels, Belgium, October 31, 2018
This paper documents the Team Copenhagen system which placed first in the CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection, Task 2 with an overall accuracy of 49.87. Task 2 focuses on morphological inflection in context: generating an inflected word form, given the lemma of the word and the context it occurs in. Previous SIGMORPHON shared tasks have focused on context-agnostic inflection-the "inflection in context" task was introduced this year. We approach this with an encoder-decoder architecture over character sequences with three core innovations, all contributing to an improvement in performance: (1) a wide context window; (2) a multi-task learning approach with the auxiliary task of MSD prediction; (3) training models in a multilingual fashion.
Introduction
This paper describes our approach and results for Task 2 of the CoNLL-SIGMORPHON 2018 shared task on universal morphological reinflection (Cotterell et al., 2018). The task is to generate an inflected word form given its lemma and the context in which it occurs.
Morphological (re)inflection from context is of particular relevance to the field of computational linguistics: it is compelling to estimate how well a machine-learned system can capture the morphosyntactic properties of a word given its context, and map those properties to the correct surface form for a given lemma.
There are two tracks of Task 2 of CoNLL-SIGMORPHON 2018: in Track 1 the context is given in terms of word forms, lemmas and morphosyntactic descriptions (MSD); in Track 2 only word forms are available. See Table 1 for an example. Task 2 is additionally split in three settings based on data size: high, medium and low, with high-resource datasets consisting of up to 70K instances per language, and low-resource datasets consisting of only about 1K instances.
The baseline provided by the shared task organisers is a seq2seq model with attention (similar to the winning system for reinflection in CoNLL-SIGMORPHON 2016, Kann and Schütze (2016)), which receives information about context through an embedding of the two words immediately adjacent to the target form. We use this baseline implementation as a starting point and achieve the best overall accuracy of 49.87 on Task 2 by introducing three augmentations to the provided baseline system: (1) We use an LSTM to encode the entire available context; (2) We employ a multitask learning approach with the auxiliary objective of MSD prediction; and (3) We train the auxiliary component in a multilingual fashion, over sets of two to three languages.
In analysing the performance of our system, we found that encoding the full context improves performance considerably for all languages: 11.15 percentage points on average, although it also highly increases the variance in results. Multi-task learning, paired with multilingual training and subsequent monolingual finetuning, scored highest for five out of seven languages, improving accuracy by another 9.86% on average.
System Description
Our system is a modification of the provided CoNLL-SIGMORPHON 2018 baseline system, so we begin this section with a reiteration of the baseline system architecture, followed by a description of the three augmentations we introduce.
Baseline
The CoNLL-SIGMORPHON 2018 baseline 1 is described as follows:
The system is an encoder-decoder on character sequences. It takes a lemma as input and generates a word form. The process is conditioned on the context of the lemma [...] The baseline treats the lemma, word form and MSD of the previous and following word as context in track 1. In track 2, the baseline only considers the word forms of the previous and next word. [...] The baseline system concatenates embeddings for context word forms, lemmas and MSDs into a context vector. The baseline then computes character embeddings for each character in the input lemma. Each of these is concatenated with a copy of the context vector. The resulting sequence of vectors is encoded using an LSTM encoder. Subsequently, an LSTM decoder generates the characters in the output word form using encoder states and an attention mechanism.
To that we add a few details regarding model size and training schedule:
• the number of LSTM layers is one;
• embedding size, LSTM layer size and attention layer size is 100;
• models are trained for 20 epochs;
• on every epoch, training data is subsampled at a rate of 0.3;
• LSTM dropout is applied at a rate 0.3;
• context word forms are randomly dropped at a rate of 0.1;
• the Adam optimiser is used, with a default learning rate of 0.001; and
• trained models are evaluated on the development data (the data for the shared task comes already split in train and dev sets).
Our system
Here we compare and contrast our system 2 to the baseline system. A diagram of our system is shown in Figure 1.
Entire Context Encoded with LSTMs
The idea behind this modification is to provide the encoder with access to all morpho-syntactic cues present in the sentence. In contrast to the baseline, which only encodes the immediately adjacent context of a target word, we encode the entire context. All context word forms, lemmas, and MSD tags (in Track 1) are embedded in their respective high-dimensional spaces as before, and their embeddings are concatenated. However, we now reduce the entire past context to a fixed-size vector by encoding it with a forward LSTM, and we similarly represent the future context by encoding it with a backwards LSTM.
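A hedged PyTorch re-creation of this context encoder (the original implementation is not reproduced here; layer and embedding sizes follow the hyperparameters listed above):

```python
# Illustrative sketch: past context encoded with a forward LSTM, future
# context with a backward LSTM, each reduced to a fixed-size vector.
import torch
import torch.nn as nn

class ContextEncoder(nn.Module):
    def __init__(self, emb_dim=100, hid=100):
        super().__init__()
        self.fwd = nn.LSTM(emb_dim, hid, batch_first=True)
        self.bwd = nn.LSTM(emb_dim, hid, batch_first=True)

    def forward(self, past, future):
        # past/future: (batch, n_words, emb_dim) concatenated form/lemma/MSD
        # embeddings; the future sequence is reversed before encoding.
        _, (h_past, _) = self.fwd(past)
        _, (h_fut, _) = self.bwd(torch.flip(future, dims=[1]))
        return torch.cat([h_past[-1], h_fut[-1]], dim=-1)   # (batch, 2 * hid)
```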
Auxiliary Task: MSD of the Target Form
We introduce an auxiliary objective that is meant to increase the morpho-syntactic awareness of the encoder and to regularise the learning process: the task is to predict the MSD tag of the target form. MSD tag predictions are conditioned on the context encoding, as described in 2.2.1. Tags are generated with an LSTM one component at a time, e.g. the tag PRO;NOM;SG;1 is predicted as a sequence of four components, PRO, NOM, SG, 1. For every training instance, we backpropagate the sum of the main loss and the auxiliary loss without any weighting.
As MSD tags are only available in Track 1, this augmentation only applies to this track.
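The resulting objective is simply the unweighted sum of the two losses; illustratively:

```python
# Illustrative only: the multi-task objective described above, where `nll`
# is a token-level negative log-likelihood (e.g. cross-entropy over
# characters for the main task and over MSD components for the auxiliary).
def multitask_loss(form_logits, form_gold, msd_logits, msd_gold, nll):
    main_loss = nll(form_logits, form_gold)   # inflected-form generation
    aux_loss = nll(msd_logits, msd_gold)      # MSD prediction (Track 1 only)
    return main_loss + aux_loss               # unweighted sum, as in the text
```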
Multilinguality
The parameters of the entire MSD (auxiliary-task) decoder are shared across languages.
Since a grouping of the languages based on language family would have left several languages in single-member groups (e.g. Russian is the sole representative of the Slavic family), we experiment with random groupings of two to three languages. Multilingual training is performed by randomly alternating between languages for every new minibatch. We do not pass any information to the auxiliary decoder as to the source language of the signal it is receiving, as we assume abstract morpho-syntactic features are shared across languages.
Finetuning After 20 epochs of multilingual training, we perform 5 epochs of monolingual finetuning for each language. For this phase, we reduce the learning rate to a tenth of the original learning rate, i.e. 0.0001, to ensure that the models are indeed being finetuned rather than retrained.
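In pseudocode terms, the schedule looks roughly as follows; `step` stands for a single optimizer update and is a hypothetical helper, not part of the released code.

```python
# Sketch of the multilingual schedule described above.
import random

def multilingual_epoch(batches_by_lang, step, lr=0.001):
    # One epoch: randomly alternate between languages for every minibatch;
    # the MSD-decoder parameters are shared across languages.
    pool = [(lang, b) for lang, bs in batches_by_lang.items() for b in bs]
    random.shuffle(pool)
    for lang, batch in pool:
        step(lang, batch, lr)

def monolingual_finetune(lang, batches, step, epochs=5):
    # 5 epochs of monolingual finetuning at a tenth of the original rate.
    for _ in range(epochs):
        for batch in batches:
            step(lang, batch, lr=0.0001)
```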
Model Size and Training Schedule
We keep all hyperparameters the same as in the baseline. Training data is split 90:10 for training and validation. We train our models for 50 epochs, adding early stopping with a tolerance of five epochs of no improvement in the validation loss. We do not subsample from the training data.
Ensemble Prediction
We train models for 50 different random combinations of two to three languages in Track 1, and 50 monolingual models for each language in Track 2. Instead of picking the single model that performs best on the development set and thus risking to select a model that highly overfits that data, we use an ensemble of the five best models, and make the final prediction for a given target form with a majority vote over the five predictions.
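The majority vote itself is straightforward; a sketch follows (with a hypothetical `predict` method). Listing the models best-first makes ties resolve in favour of the better-ranked model.

```python
# Sketch of the 5-model ensemble vote described above. `models` is assumed
# ordered best-first; Counter keeps first-seen order for equal counts, so
# ties fall to the better-ranked model.
from collections import Counter

def ensemble_predict(models, instance):
    preds = [m.predict(instance) for m in models]   # m.predict is hypothetical
    return Counter(preds).most_common(1)[0][0]      # majority-voted word form
```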
Results and Discussion
Test results are listed in Table 2. Our system outperforms the baseline for all settings and languages in Track 1 and for almost all in Track 2; only in the high resource setting is our system not definitively superior to the baseline.
Interestingly, our results in the low resource setting are often higher for Track 2 than for Track 1, even though contextual information is less explicit in the Track 2 data and the multilingual multitasking approach does not apply to this track. We interpret this finding as an indicator that a simpler model with fewer parameters works better in a setting of limited training data. Nevertheless, we focus on the low resource setting in the analysis below due to time limitations. As our Track 1 results are still substantially higher than the baseline results, we consider this analysis valid and insightful.
Ablation Study
We analyse the incremental effect of the different features in our system, focusing on the lowresource setting in Track 1 and using development data.
Entire Context Encoded with LSTMs Encoding the entire context with an LSTM highly increases the variance of the observed results. So we trained fifty models for each language and each architecture. Figure 2 visualises the means and standard deviations over the trained models. In addition, we visualise the average accuracy for the five best models for each language and architecture, as these are the models we use in the final ensemble prediction. Below we refer to these numbers only.
The results indicate that encoding the full context with an LSTM highly enhances the performance of the model, by 11.15% on average. This observation explains the high results we obtain also for Track 2.
Auxiliary Task: MSD of the Target Form Adding the auxiliary objective of MSD prediction has a variable effect: for four languages (DE, EN, ES, and SV) the effect is positive, while for the rest it is negative. We consider this to be an issue of insufficient data for the training of the auxiliary component in the low resource setting we are working with.
Multilinguality
We indeed see results improving drastically with the introduction of multilingual training, with multilingual results being 7.96% higher than monolingual ones on average.
We studied the five best models for each language as emerging from the multilingual training (listed in Table 3) and found no strong linguistic patterns. The EN-SV pairing seems to yield good models for these languages, which could be explained in terms of their common language family and similar morphology. The other natural pairings, however, FR-ES, and DE-SV, are not so frequent among the best models for these pairs of languages.
Finally, monolingual finetuning improves accuracy across the board, as one would expect, by 2.72% on average.
Overall The final observation to be made based on this breakdown of results is that the multitasking approach paired with multilingual training and subsequent monolingual finetuning outperforms the other architectures for five out of seven languages: DE, EN, FR, RU and SV. For the other two languages in the dataset, ES and FI, the difference between this approach and the approach that emerged as best for them is less than 1%. The overall improvement of the multilingual multi-tasking approach over the baseline is 18.30%.
Error analysis
Here we study the errors produced by our system on the English test set to better understand the remaining shortcomings of the approach. A small portion of the wrong predictions point to an incorrect interpretation of the morpho-syntactic conditioning of the context, e.g. the system predicted plan instead of plans in the context Our ___ include raising private capital. The majority of wrong predictions, however, are nonsensical, like bomb for job, fify for fixing, and gnderrate for understand. This observation suggests that generally the system did not learn to copy the characters of the lemma into the inflected form, which is all it needs to do in a large number of cases. This issue could be alleviated with simple data augmentation techniques that encourage autoencoding (see, e.g., Bergmanis et al., 2017).

MSD prediction

Figure 3 summarises the average MSD-prediction accuracy for the multi-tasking experiments discussed above. 3 Accuracy here is generally higher than on the main task, with the multilingual finetuned setup for Spanish and the monolingual setup for French scoring best: 66.59% and 65.35%, respectively. This observation illustrates the added difficulty of generating the correct surface form even when the morphosyntactic description has been identified correctly.
We observe some correlation between these numbers and accuracy on the main task: for DE, EN, RU and SV, the brown, pink and blue bars here pattern in the same way as the corresponding ×'s in Figure 2. One notable exception to this pattern is FR, where inflection gains a lot from multilingual training, while MSD prediction suffers greatly. Notice that the magnitude of change is not always the same, however, even when the general direction matches: for RU, for example, multilingual training benefits inflection much more than it benefits MSD prediction, even though the MSD decoder is the only component that is actually shared between languages. This observation illustrates the two-fold effect of multi-task training: an auxiliary task can either inform the main task through the parameters the two tasks share, or it can help the main task learning through its regularising effect.
Related Work
Our system is inspired by previous work on multi-task learning and multi-lingual learning, mainly building on two intuitions: (1) jointly learning related tasks tends to be beneficial (Caruana, 1997; Bjerva et al., 2016; Bjerva, 2017b); and (2) jointly learning related languages in an MTL-inspired framework tends to be beneficial (Bjerva, 2017a; Johnson et al., 2017; de Lhoneux et al., 2018). In the context of computational morphology, multilingual approaches have previously been employed for morphological reinflection (Bergmanis et al., 2017) and for paradigm completion (Kann et al., 2017). In both of these cases, however, the available datasets covered more languages, 40 and 21, respectively, which allowed for linguistically-motivated language groupings and for parameter sharing directly on the level of characters. De Lhoneux et al. (2018) explore parameter sharing between related languages for dependency parsing, and find that sharing is more beneficial in the case of closely related languages.
Conclusions
In this paper we described our system for the CoNLL-SIGMORPHON 2018 shared task on Universal Morphological Reinflection, Task 2, which achieved the best performance out of all systems submitted, an overall accuracy of 49.87. We showed in an ablation study that this is due to three core innovations, which extend a characterbased encoder-decoder model: (1) a wide context window, encoding the entire available context; (2) multi-task learning with the auxiliary task of MSD prediction, which acts as a regulariser;
(3) a multilingual approach, exploiting information across languages. In future work we aim to gain better understanding of the increase in variance of the results introduced by each of our modifications and the reasons for the varying effect of multi-task learning for different languages.
Figure 1: Schematic representation of our approach. The focus here is on the prediction of the final character, e, of the word form made. The attention matrix indicates that this character should be based on the final state of the encoder, which contains information about the final character of the input form, and the past and future context. The input and output of the auxiliary decoder are marked in magenta.
Figure 2: Mean (•) and standard deviation (error bars) over 100 models trained for each language and architecture, and average (×) over the 5 best models. LSTM Enc refers to a model that encodes the full context with an LSTM; Multi-task builds on LSTM Enc with an auxiliary objective of MSD prediction; Multilingual refers to a model with an auxiliary component trained in a multilingual fashion; Finetuned refers to a multilingual model topped with monolingual finetuning.
Figure 3: Accuracy on the auxiliary task of MSD prediction with different models. See the caption of Figure 2 for more details.
Table 1: Example input sentence. Context MSD tags and lemmas, marked in gray, are only available in Track 1. The cyan square marks the main objective of predicting the word form made. The magenta square marks the auxiliary objective of predicting the MSD tag V;PST;V.PTCP;PASS.

WORD FORMS | We | were | [made] | to | feel | very | welcome | .
LEMMAS     | we | be | make | to | feel | very | welcome | .
MSD TAGS   | PRO;NOM;PL;1 | AUX;IND;PST;FIN | [V;PST;V.PTCP;PASS] | PART | V;NFIN | ADV | ADJ | PUNCT
Table 2: Official shared task test set results.
Table 3: Five best multilingual models for each language.
1 Code available at: https://github.com/sigmorphon/conll2018
2 Code available at: https://github.com/YovaKem/inflection_in_context
3 As MSD tags are not available for target forms in the development data, the accuracy of MSD prediction is measured over all other nouns, adjectives and verbs in the dataset.
Acknowledgements

We gratefully acknowledge the support of the NVIDIA Corporation with the donation of the Titan Xp GPU used for this research.
Toms Bergmanis, Katharina Kann, Hinrich Schütze, and Sharon Goldwater. 2017. Training data augmentation for low-resource morphological inflection. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 31-39.

Johannes Bjerva. 2017a. One Model to Rule them all - Multitask and Multilingual Modelling for Lexical Analysis. Ph.D. thesis, University of Groningen.

Johannes Bjerva. 2017b. Will my auxiliary tagging task help? Estimating Auxiliary Tasks Effectivity in Multi-Task Learning. In NoDaLiDa, pages 216-220.

Johannes Bjerva, Barbara Plank, and Johan Bos. 2016. Semantic tagging with deep residual networks. In COLING, pages 3531-3541.

Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41-75.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, Brussels, Belgium. Association for Computational Linguistics.

Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Thorat, Fernanda Viégas, Martin Wattenberg, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine translation system: Enabling zero-shot translation. Transactions of the Association for Computational Linguistics, 5:339-351.

Katharina Kann, Ryan Cotterell, and Hinrich Schütze. 2017. One-shot neural cross-lingual transfer for paradigm completion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1993-2003. Association for Computational Linguistics.

Katharina Kann and Hinrich Schütze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62-70.

Miryam de Lhoneux, Johannes Bjerva, Isabelle Augenstein, and Anders Søgaard. 2018. Parameter sharing between dependency parsers for related languages. In Proceedings of EMNLP.

Barbara Plank, Anders Søgaard, and Yoav Goldberg. 2016. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. In Proceedings of ACL (Short Papers).

Anders Søgaard and Yoav Goldberg. 2016. Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of ACL (Short Papers).
| [
"https://github.com/sigmorphon/conll2018"
] |
[
"Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation",
"Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation"
] | [
"Carolin Lawrence lawrence@cl.uni-heidelberg.de \nAmazon Development Center &\nHeidelberg University\nGermany\n",
"Artem Sokolov sokolov@cl.uni-heidelberg.de \nHeidelberg University\nGermany\n",
"Stefan Riezler riezler@cl.uni-heidelberg.de \nHeidelberg University\nGermany\n"
] | [
"Amazon Development Center &\nHeidelberg University\nGermany",
"Heidelberg University\nGermany",
"Heidelberg University\nGermany"
] | [
"Natural Language Processing"
] | The goal of counterfactual learning for statistical machine translation (SMT) is to optimize a target SMT system from logged data that consist of user feedback to translations that were predicted by another, historic SMT system. A challenge arises by the fact that riskaverse commercial SMT systems deterministically log the most probable translation. The lack of sufficient exploration of the SMT output space seemingly contradicts the theoretical requirements for counterfactual learning. We show that counterfactual learning from deterministic bandit logs is possible nevertheless by smoothing out deterministic components in learning. This can be achieved by additive and multiplicative control variates that avoid degenerate behavior in empirical risk minimization. Our simulation experiments show improvements of up to 2 BLEU points by counterfactual learning from deterministic bandit feedback. | 10.18653/v1/d17-1272 | [
"https://www.aclweb.org/anthology/D17-1272.pdf"
] | 7,181,359 | 1707.09118 | 7a4ae1e6713eae3fce9c762d410f2cbd6e85affc |
Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation
Association for Computational Linguistics. Copyright Association for Computational Linguistics, September 7-11, 2017.
Carolin Lawrence lawrence@cl.uni-heidelberg.de
Amazon Development Center &
Heidelberg University
Germany
Artem Sokolov sokolov@cl.uni-heidelberg.de
Heidelberg University
Germany
Stefan Riezler riezler@cl.uni-heidelberg.de
Heidelberg University
Germany
Counterfactual Learning from Bandit Feedback under Deterministic Logging: A Case Study in Statistical Machine Translation
Natural Language Processing
Copenhagen, Denmark. Association for Computational Linguistics, September 7-11, 2017.
The goal of counterfactual learning for statistical machine translation (SMT) is to optimize a target SMT system from logged data that consist of user feedback to translations that were predicted by another, historic SMT system. A challenge arises from the fact that risk-averse commercial SMT systems deterministically log the most probable translation. The lack of sufficient exploration of the SMT output space seemingly contradicts the theoretical requirements for counterfactual learning. We show that counterfactual learning from deterministic bandit logs is possible nevertheless by smoothing out deterministic components in learning. This can be achieved by additive and multiplicative control variates that avoid degenerate behavior in empirical risk minimization. Our simulation experiments show improvements of up to 2 BLEU points by counterfactual learning from deterministic bandit feedback.
Introduction
Commercial SMT systems allow recording large amounts of interaction log data at no cost. Such logs typically contain a record of the source, the translation predicted by the system, and the user feedback. The latter can be gathered directly if explicit user quality ratings of translations are supported, or inferred indirectly from the interaction of the user with the translated content. Indirect feedback in the form of user clicks on displayed ads has been shown to be a valuable feedback signal in response prediction for display advertising (Bottou et al., 2013). Similar to the computational advertising scenario, one could imagine a scenario where SMT systems are optimized from partial information in the form of user feedback to predicted translations, instead of from manually created reference translations. This learning scenario has been investigated in the areas of bandit learning (Bubeck and Cesa-Bianchi, 2012) and reinforcement learning (RL) (Sutton and Barto, 1998). Figure 1 illustrates the learning protocol using the terminology of bandit structured prediction (Sokolov et al., 2016; Kreutzer et al., 2017), where at each round a system (corresponding to a policy in RL terms) makes a prediction (also called an action in RL, or pulling an arm of a bandit) and receives a reward, which is used to update the system. Counterfactual learning attempts to reuse existing interaction data where the predictions have been made by a historic system different from the target system. This enables offline or batch learning from logged data, and is important if online experiments that deploy the target system are risky and/or expensive. Counterfactual learning tasks include policy evaluation, i.e. estimating how a target policy would have performed if it had been in control of choosing the predictions for which the rewards were logged, and policy optimization (also called policy learning), i.e. optimizing parameters of a target policy given the logged data from the historic system. Both tasks are called counterfactual, or off-policy in RL terms, since the target policy was actually not in control during logging. Figure 2 shows the learning protocol for off-policy learning from partial feedback. The crucial trick to obtain unbiased estimators to evaluate and to optimize the off-policy system is to correct the sampling bias of the logging policy. This can be done by importance sampling, where the estimate is corrected by the inverse propensity score (Rosenbaum and Rubin, 1983) of the historical algorithm, mitigating the problem that predictions that were favored by the historical system are over-represented in the logs. As shown by Langford et al. (2008) or Strehl et al. (2010), a sufficient exploration of the output space by the logging system is a prerequisite for counterfactual learning. If the logging policy acts stochastically in predicting outputs, this condition is satisfied, and inverse propensity scoring can be applied to correct the sampling bias. However, commercial SMT systems usually try to avoid any risk and only log the most probable translation. This effectively results in deterministic logging policies, making theory and practice of off-policy methods inapplicable to counterfactual learning in SMT.
This paper presents a case study in counterfactual learning for SMT that shows that policy optimization from deterministic bandit logs is possible despite these seemingly contradictory theoretical requirements. We formalize our learning problem as an empirical risk minimization over logged data. While a simple empirical risk minimizer can show degenerate behavior where the objective is minimized by avoiding or over-representing training samples, thus suffering from decreased generalization ability, we show that the use of control variates can remedy this problem. Techniques such as doubly-robust policy evaluation and learning (Dudik et al., 2011) or weighted importance sampling (Jiang and Li, 2016; Thomas and Brunskill, 2016) can be interpreted as additive (Ross, 2013) or multiplicative control variates (Kong, 1992) that serve for variance reduction in estimation. We observe that a further effect of these techniques is that of smoothing out deterministic components by taking the whole output space into account. Furthermore, we conjecture that while outputs are logged deterministically, the stochastic selection of inputs serves as sufficient exploration in parameter optimization over a joint feature representation over inputs and outputs. We present experiments using simulated bandit feedback for two different SMT tasks, showing improvements of up to 2 BLEU in SMT domain adaptation from deterministically logged bandit feedback. This result, together with a comparison to the standard case of policy learning from stochastically logged simulated bandit feedback, confirms the effectiveness of our proposed techniques.
Related Work
Counterfactual learning has been known under the name of off-policy learning in various fields that deal with partial feedback, namely contextual bandits (Langford et al. (2008); Strehl et al. (2010); Dudik et al. (2011); Li et al. (2015), inter alia), reinforcement learning (Sutton and Barto (1998); Precup et al. (2000); Jiang and Li (2016); Thomas and Brunskill (2016), inter alia), and structured prediction (Swaminathan and Joachims (2015a,b), inter alia). The idea behind these approaches is to first perform policy evaluation and then policy optimization, under the assumption that better evaluation leads to better optimization. Our work puts a focus on policy optimization in an empirical risk minimization framework for deterministically logged data. Since our experiment is a simulation study, we can compare the deterministic case to the standard scenario of policy optimization and evaluation under stochastic logging.
Variance reduction by additive control variates has implicitly been used in doubly robust techniques (Dudik et al., 2011;Jiang and Li, 2016). However, the connection to Monte Carlo techniques has not been made explicit until Thomas and Brunskill (2016), nor has the control variate technique of optimizing the variance reduction by adjusting a linear interpolation scalar (Ross, 2013) been applied in off-policy learning. Similarly, the technique of weighted importance sampling has been used as variance reduction technique in off-policy learning (Precup et al., 2000;Jiang and Li, 2016;Thomas and Brunskill, 2016). The connection to multiplicative control variates (Kong, 1992) has been made explicit in Swaminathan and Joachims (2015b). To our knowledge, our analysis of both control variate techniques from the perspective of avoiding degenerate behavior in learning from deterministically logged data is novel.
Counterfactual Learning from Deterministic Bandit Logs

Problem Definition. The problem of counterfactual learning (in the following used in the sense of counterfactual optimization) for bandit structured prediction can be described as follows: Let $\mathcal{X}$ be a structured input space, let $\mathcal{Y}(x)$ be the set of possible output structures for input $x$, and let $\Delta : \mathcal{Y} \to [0,1]$ be a reward function (and $\delta = -\Delta$ be the corresponding task loss function)¹ quantifying the quality of structured outputs. We are given a data log of triples $D = \{(x_t, y_t, \delta_t)\}_{t=1}^{n}$ where outputs $y_t$ for inputs $x_t$ were generated by a logging system, and loss values $\delta_t$ were observed only at the generated data points. In case of stochastic logging with probability $\pi_0$, the inverse propensity scoring approach (Rosenbaum and Rubin, 1983) uses importance sampling to achieve an unbiased estimate of the expected loss under the parametric target policy $\pi_w$:
$$\hat{R}_{\text{IPS}}(\pi_w) = \frac{1}{n}\sum_{t=1}^{n} \delta_t\, \frac{\pi_w(y_t|x_t)}{\pi_0(y_t|x_t)} \qquad (1)$$
$$\approx \mathbb{E}_{p(x)}\,\mathbb{E}_{\pi_0(y|x)}\!\left[\delta(y)\,\frac{\pi_w(y|x)}{\pi_0(y|x)}\right] = \mathbb{E}_{p(x)}\,\mathbb{E}_{\pi_w(y|x)}\big[\delta(y)\big].$$
In case of deterministic logging, we are confined to empirical risk minimization:
$$\hat{R}_{\text{DPM}}(\pi_w) = \frac{1}{n}\sum_{t=1}^{n} \delta_t\, \pi_w(y_t|x_t). \qquad (2)$$
Equation (2) assumes deterministically logged outputs with propensity $\pi_0 = 1$ for $t = 1, \dots, n$ of the historical system. We call this objective the deterministic propensity matching (DPM) objective since it matches deterministic outputs of the logging system to outputs in the n-best list of the target system. For optimization under deterministic logging, a sampling bias is unavoidable since objective (2) does not correct it by importance sampling. Furthermore, the DPM estimator may show a degenerate behavior in learning. This problem can be remedied by the use of control variates, as we will discuss in Section 5.
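To make the contrast between the two estimators concrete, both reduce to a few lines of array arithmetic over the log. The sketch below is ours, not the authors' released code, and the toy numbers and array names are hypothetical:

```python
import numpy as np

# Hypothetical logged data: per-sentence losses delta_t (here delta = -BLEU),
# target-model probabilities pi_w(y_t|x_t), and logging propensities
# pi_0(y_t|x_t) for the stochastic case.
delta = np.array([-0.42, -0.77, -0.10, -0.55])
pi_w = np.array([0.20, 0.05, 0.60, 0.15])
pi_0 = np.array([0.25, 0.10, 0.50, 0.15])

def r_ips(delta, pi_w, pi_0):
    """Inverse propensity scoring, Eq. (1): unbiased under stochastic logging."""
    return np.mean(delta * pi_w / pi_0)

def r_dpm(delta, pi_w):
    """Deterministic propensity matching, Eq. (2): all propensities are 1."""
    return np.mean(delta * pi_w)

print(r_ips(delta, pi_w, pi_0), r_dpm(delta, pi_w))
```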
Learning Principle: Doubly Controlled Empirical Risk Minimization. Our first modification of Equation (2) was originally motivated by the use of weighted importance sampling in inverse propensity scoring, because of its observed stability and variance reduction effects (Precup et al., 2000; Jiang and Li, 2016; Thomas and Brunskill, 2016). We call this objective the reweighted deterministic propensity matching (DPM+R) objective:
$$\hat{R}_{\text{DPM+R}}(\pi_w) = \frac{1}{n}\sum_{t=1}^{n} \delta_t\, \bar{\pi}_w(y_t|x_t) \qquad (3)$$
$$= \frac{1}{n}\sum_{t=1}^{n} \delta_t\, \frac{\pi_w(y_t|x_t)}{\sum_{t=1}^{n} \pi_w(y_t|x_t)}\,.$$
From the perspective of Monte Carlo simulation, the advantage of this modification can be explained by viewing reweighting as a multiplicative control variate (Swaminathan and Joachims, 2015b). Let $Z = \delta_t\,\pi_w(y_t|x_t)$ and $W = \pi_w(y_t|x_t)$ be two random variables. Then the variance of the ratio estimator $r = \bar{Z}/\bar{W}$ of their sample means can be approximately written as follows (Kong, 1992): $\mathrm{Var}(r) \approx \frac{1}{n}\big(r^2\,\mathrm{Var}(W) + \mathrm{Var}(Z) - 2r\,\mathrm{Cov}(W, Z)\big)$. This shows that a positive correlation between the variable $W$, representing the target model probability, and the variable $Z$, representing the target model scaled by the task loss function, will reduce the variance of the estimator. Since there are exponentially many outputs to choose from for each input during logging, variance reduction is useful in counterfactual learning even in the deterministic case. Under a stochastic logging policy, a similar modification can be made to objective (1) by reweighting the ratio $\rho_t = \frac{\pi_w(y_t|x_t)}{\pi_0(y_t|x_t)}$ as $\bar{\rho}_t = \frac{\rho_t}{\sum_{t=1}^{n} \rho_t}$. We will use this reweighted IPS objective, called IPS+R, in our comparison experiments that use stochastically logged data.
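A minimal sketch of the two reweighted estimators follows; this is our code, not the paper's. The 1/n factor is kept for consistency with Eq. (3) and is a constant that does not affect which policy minimizes the risk:

```python
import numpy as np

def r_dpm_r(delta, pi_w):
    """Reweighted DPM, Eq. (3): normalizing pi_w over the log acts as a
    multiplicative control variate, so the risk can no longer be lowered by
    uniformly inflating the probability of every logged translation."""
    pi_bar = pi_w / pi_w.sum()
    return np.mean(delta * pi_bar)

def r_ips_r(delta, pi_w, pi_0):
    """Reweighted IPS (IPS+R) for stochastically logged data."""
    rho = pi_w / pi_0
    rho_bar = rho / rho.sum()
    return np.mean(delta * rho_bar)
```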
A further modification of Equation (3) is motivated by the incorporation of a direct reward estimation method in the inverse propensity scorer, as proposed in the doubly-robust estimator (Dudik et al., 2011; Jiang and Li, 2016; Thomas and Brunskill, 2016). Let $\hat{\delta}(x_t, y_t)$ be a regression-based reward model trained on the logged data, and let $\hat{c}$ be a scalar that allows to optimize the estimator for minimal variance (Ross, 2013). We define a doubly controlled empirical risk minimization objective $\hat{R}_{\hat{c}\text{DC}}$ as follows (for $\hat{c} = 1$ we arrive at a similar objective called $\hat{R}_{\text{DC}}$):
$$\hat{R}_{\hat{c}\text{DC}}(\pi_w) = \frac{1}{n}\sum_{t=1}^{n}\Big[(\delta_t - \hat{c}\,\hat{\delta}_t)\,\bar{\pi}_w(y_t|x_t) \qquad (4)$$
$$+\; \hat{c} \sum_{y \in \mathcal{Y}(x_t)} \hat{\delta}(x_t, y)\, \bar{\pi}_w(y|x_t)\Big]\,.$$
From the perspective of Monte Carlo simulation, the doubly robust estimator can be seen as variance reduction via additive control variates (Ross, 2013). Let $X = \delta_t$ and $Y = \hat{\delta}_t$ be two random variables. Then $\bar{Y} = \sum_{y \in \mathcal{Y}(x_t)} \hat{\delta}(x_t, y)\, \bar{\pi}_w(y|x_t)$ is the expectation² of $Y$, and Equation (4) can be rewritten as $\mathbb{E}_{\bar{\pi}_w}\big[X - \hat{c}\,Y\big] + \hat{c}\,\bar{Y}$. The variance of the term $X - \hat{c}\,Y$ is $\mathrm{Var}(X - \hat{c}\,Y) = \mathrm{Var}(X) + \hat{c}^2\,\mathrm{Var}(Y) - 2\hat{c}\,\mathrm{Cov}(X, Y)$ (Ross (2013), Chap. 9.2). Again this shows that the variance of the estimator can be reduced if the variable $X$, representing the reward function, and the variable $Y$, representing the regression-based reward model, are positively correlated. The optimal scalar parameter $\hat{c}$ can be derived easily by taking the derivative of the variance term, leading to
$$\hat{c} = \frac{\mathrm{Cov}(X, Y)}{\mathrm{Var}(Y)}\,. \qquad (5)$$
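As an illustration of Eqs. (4) and (5), the optimal scalar and the doubly controlled risk can be computed directly from the logged losses and the reward model's estimates. The sketch below is ours; the argument names and array layout are assumptions:

```python
import numpy as np

def optimal_c(delta, delta_hat):
    """Optimal control-variate scalar of Eq. (5): c = Cov(X, Y) / Var(Y),
    where X are the logged losses and Y are the reward-model estimates."""
    cov = np.cov(delta, delta_hat)  # 2x2 sample covariance matrix
    return cov[0, 1] / cov[1, 1]

def r_cdc(delta, delta_hat, pi_bar, delta_hat_full, pi_bar_full, c):
    """Doubly controlled risk, Eq. (4). `delta_hat_full` and `pi_bar_full`
    are (n, k) arrays over the k outputs in Y(x_t), e.g. an n-best list."""
    residual = np.mean((delta - c * delta_hat) * pi_bar)
    direct = np.mean(np.sum(delta_hat_full * pi_bar_full, axis=1))
    return residual + c * direct
```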
In case of stochastic logging, the reweighted target probability $\bar{\pi}_w(y_t|x_t)$ is replaced by a reweighted ratio $\bar{\rho}_t$. We will use such reweighted variants of the original doubly robust model, with and without optimal $\hat{c}$, called DR and ĉDR, in our experiments that use stochastic logging.
Learning Algorithms. Applying a stochastic gradient descent update rule $w_{t+1} = w_t - \eta\,\nabla \hat{R}(\pi_w)_t$ to the objective functions defined above leads to a variety of algorithms. The gradients of the objectives can be derived by using the score function gradient estimator (Fu, 2006) and are shown in Table 1. Stochastic gradient descent algorithms apply to any differentiable policy $\pi_w$, thus our methods can be applied to a variety of systems, including linear and non-linear models. Since previous work on off-policy methods in RL and contextual bandits has been done in the area of linear classification, we start with an adaptation of off-policy methods to linear SMT models in our work. We assume a Gibbs model
$$\pi_w(y_t|x_t) = \frac{e^{\alpha\,(w^\top \phi(x_t, y_t))}}{\sum_{y \in \mathcal{Y}(x_t)} e^{\alpha\,(w^\top \phi(x_t, y))}}\,, \qquad (6)$$
based on a feature representation $\phi : \mathcal{X} \times \mathcal{Y} \to \mathbb{R}^d$, a weight vector $w \in \mathbb{R}^d$, and a smoothing parameter $\alpha \in \mathbb{R}^+$, yielding the following simple derivative:
$$\nabla \log \pi_w(y_t|x_t) = \alpha\Big(\phi(x_t, y_t) - \sum_{y \in \mathcal{Y}(x_t)} \phi(x_t, y)\,\pi_w(y|x_t)\Big)\,.$$
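The derivative of the Gibbs model over an n-best list is a one-liner once the feature vectors are stacked. A minimal sketch, assuming a (k, d) feature matrix for the k outputs in Y(x_t); the function and argument names are ours:

```python
import numpy as np

def gibbs_log_grad(phi, t_idx, w, alpha=5.0):
    """Score-function gradient of the Gibbs model of Eq. (6):
    alpha * (phi(x_t, y_t) - E_{pi_w}[phi(x_t, y)]).

    phi:   (k, d) feature vectors for the k outputs in Y(x_t)
    t_idx: index of the logged translation within the n-best list
    """
    scores = alpha * phi.dot(w)
    scores -= scores.max()        # subtract max for numerical stability
    p = np.exp(scores)
    p /= p.sum()                  # pi_w(y|x_t) over the n-best list
    return alpha * (phi[t_idx] - p.dot(phi))
```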
Experiments
Setup. In our experiments, we aim to simulate the following scenario: We assume that it is possible to divert a small fraction of the user interaction traffic for the purpose of policy evaluation and to perform stochastic logging on this small data set. The main traffic is assumed to be logged deterministically, following a conservative regime where one-best translations are used for an SMT system that does not change frequently over time. Since our experiments are simulation studies, we will additionally perform stochastic logging, and compare policy learning for the (realistic) case of deterministic logging with the (theoretically motivated) case of stochastic logging.
In our deterministic-based policy learning experiments, we evaluate the empirical risk minimization algorithms derived from objectives (3) (DPM+R) and (4). For the doubly controlled objective we employ two variants: First, ĉ is set to 1 as in Dudik et al. (2011) (DC). Second, we calculate ĉ as described in Equation (5) (ĉDC). The algorithms used in policy evaluation and for stochastic-based policy learning are variants of these objectives that replace π̄ by ρ̄, yielding the estimators IPS+R, DR, and ĉDR of the expected loss.
All objectives will be employed in a domain adaptation scenario for machine translation. A system trained on out-of-domain data will be used to collect feedback on in-domain data. This data will serve as the logged data D in the learning experiments. We conduct two SMT tasks with hypergraph re-decoding: The first is German-to-English and is trained using a concatenation of the Europarl corpus (Koehn, 2005), the Common Crawl corpus³ and the News Commentary corpus (Koehn and Schroeder, 2007). The goal is to adapt the trained system to the domain of transcribed TED talks using the TED parallel corpus (Tiedemann, 2012). A second task uses the French-to-English Europarl data with the goal of domain adaptation to news articles with the News Commentary corpus (Koehn and Schroeder, 2007). We split off two parts from the TED corpus to be used as validation and test data for the learning experiments. As validation data for the News Commentary corpus we use the splits provided at the WMT shared task, namely nc-devtest2007 as validation data and nc-test2007 as test data. An overview of the data statistics can be seen in Table 2.
As baseline, an out-of-domain system is built using the SCFG framework CDEC (Dyer et al., 2010) with dense features (10 standard features and 2 for the language model). After tokenizing and lowercasing the training data, the data were word-aligned using CDEC's fast align. A 4-gram language model is built on the target languages for the out-of-domain data using KENLM (Heafield et al., 2013). For News, we additionally assume access to in-domain target-language text and train another in-domain language model on that data, increasing the number of features to 14 for News.
The framework uses a standard linear Gibbs model whose distribution can be peaked using the parameter α (see Equation (6)): higher values of α will shift the probability of the one-best translation closer to 1 and all others closer to 0. Using α > 1 during training promotes learning models that are optimal when outputting the one-best translation. In our experiments, we found α = 5 to work well on validation data.
Additionally, we tune a system using CDEC's MERT implementation (Och, 2003) on the in-domain data with their references. This full-information in-domain system conveys the best possible improvement using the given training data. It can thus be seen as the oracle system for the systems which are learnt using the same input-side training data, but have only bandit feedback available to them as a learning signal. All systems are evaluated using the corpus-level BLEU metric (Papineni et al., 2002).
The logged data D is created by translating the in-domain training data of the corpora using the original out-of-domain systems, and logging the one-best translation. For the stochastic experiments, the translations are sampled from the model distribution. The feedback to the logged translation is simulated using the reference and sentence-level BLEU (Nakov et al., 2012).
Direct Reward Estimation. When creating the logged data D, we also record the feature vectors of the translations to train the direct reward estimate that is needed for (ĉ)DC. Using the feature vector as input and the per-sentence BLEU as the output value, we train a regression-based random forest with 10 trees using scikit-learn (Pedregosa et al., 2011). To measure performance, we perform 5-fold cross-validation and measure the macro average between estimated rewards and the true rewards from the log:
$$\Big|\frac{1}{n}\sum_{t=1}^{n}\delta(x_t, y_t) - \frac{1}{n}\sum_{t=1}^{n}\hat{\delta}(x_t, y_t)\Big|\,.$$
We also report the micro average, which quantifies how far off one can expect the model to be for a random sample: $\frac{1}{n}\sum_{t=1}^{n}\big|\delta(x_t, y_t) - \hat{\delta}(x_t, y_t)\big|$. The final model used in the experiments is trained on the full training data. Cross-validation results for the regression-based direct reward model can be found in Table 3.
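A cross-validated reward model of this kind and the two averages could look as follows in scikit-learn. This is our sketch rather than the authors' script; `phi` (feature matrix) and `delta` (per-sentence BLEU values) are assumed inputs:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

def reward_model_cv(phi, delta, n_splits=5, seed=0):
    """5-fold CV of a regression-based reward model with 10 trees, reporting
    the macro average |mean(delta) - mean(delta_hat)| and the micro average
    mean(|delta - delta_hat|)."""
    macro, micro = [], []
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train, test in kf.split(phi):
        model = RandomForestRegressor(n_estimators=10, random_state=seed)
        model.fit(phi[train], delta[train])
        pred = model.predict(phi[test])
        macro.append(abs(delta[test].mean() - pred.mean()))
        micro.append(np.abs(delta[test] - pred).mean())
    return float(np.mean(macro)), float(np.mean(micro))
```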
Policy Evaluation. Policy evaluation aims to use the logged data D to estimate the performance of the target system π_w. The small logged data D_eval that is diverted for policy evaluation is created by translating only 10k sentences of the in-domain training data with the out-of-domain system and sampling translations according to the model probability. Again we record the sentence-level BLEU as the feedback. The reference translations that also exist for those 10k sentences are used to measure the ground-truth BLEU value for translations using the full-information in-domain system. The goal of evaluation is to achieve values of IPS+R, DR, and ĉDR on D_eval that are as close as possible to the ground-truth BLEU value.
To be able to measure variance, we create five folds of D_eval, differing in random seeds. We report the average difference between the ground-truth BLEU score and the value of the log-based policy evaluation, as well as the standard deviation, in Table 4. We see that IPS+R underestimates the BLEU value by 7.78 on News. DR overestimates instead. ĉDR achieves the closest estimate, overestimating the true value by less than 1 BLEU. On TED, all policy evaluation results are overestimates. For the DR variants the overestimation can be explained by the random forests' tendency to overestimate. Optimal ĉDR can correct for this, but not always in a sufficient way.
Policy Learning. In our learning experiments, learning starts with the weights w₀ from the out-of-domain model. As this was the system that produced the logged data D, the first iteration will have the same translations in the one-best position. After some iterations, however, the translation that was logged may not be in the first position any more. In this case, the n-best list is searched for the correct translation. For speed reasons, the scores of the translation system are normalized to probabilities using the first 1,000 unique entries in the n-best list, rather than using the full hypergraph. Our experiments showed that this did not impact the quality of learning.
In order for the multiplicative control variate to be effective, the learning procedure has to utilize mini-batches. If the mini-batch size is chosen too small, the estimates of the control variates may not be reliable. We test mini-batch sizes of 30k and 10k examples, whereas 30k on News means that we perform batch training since the mini-batch spans the entire training set. The mini-batch size β and the early stopping point were selected by choosing the setup and iteration that achieved the highest BLEU score on the one-best translations for the validation data. The learning rate η was selected in the same way, with possible values 1e−4, 1e−5, 1e−6 or, alternatively, Adadelta (Zeiler, 2012), which sets the learning rate on a per-feature basis. The results on both validation and test set are reported in Table 5. Statistical significance of the out-of-domain system compared to all other systems is measured using Approximate Randomization testing (Noreen, 1989).
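In outline, the training loop is plain mini-batch SGD over the log. The sketch below is a hypothetical illustration of ours: `grad_fn`, the record format of `log`, and the default values are stand-ins, with `grad_fn(batch, w)` returning the gradient of the chosen counterfactual objective (Table 1) on one batch:

```python
import numpy as np

def train(log, w0, grad_fn, eta=1e-5, batch_size=30000, epochs=10):
    """Mini-batch SGD over the logged data; large batches keep the
    reweighting control variate estimates reliable."""
    w = w0.copy()
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        order = rng.permutation(len(log))
        for start in range(0, len(log), batch_size):
            batch = [log[i] for i in order[start:start + batch_size]]
            w = w - eta * grad_fn(batch, w)
    return w
```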
For the deterministic case, we see that in general DPM+R shows the lowest increase but can still significantly outperform the baseline. An explanation of why DPM+R cannot improve any further will be addressed separately below. DC yields improvements of up to 1.5 BLEU points, while ĉDC obtains improvements of up to 2 BLEU points over the out-of-domain baseline. In more detail, on the TED data, DC can close the gap of nearly 3 BLEU between the out-of-domain and the full-information in-domain system by half. ĉDC can improve by a further 0.6 BLEU, which is a significant improvement at p = 0.0017. Also note that, while ĉDC takes more iterations to reach its best result on the validation data, ĉDC already outperforms DC at the stopping iteration of DC. At this point ĉDC is better by 0.18 BLEU on the validation set and continues to increase until its own stopping iteration. The final result of ĉDC falls only 0.8 BLEU behind the oracle system that had references available during its learning process. Considering the substantial difference in information that both systems had available, this is remarkable. The improvements on the News corpus show similar tendencies. Again there is a gap of nearly 3 BLEU to close, and with an improvement of 1.05 BLEU points, DC achieves a notable result. ĉDC was able to further improve on this, but not as successfully as was the case for the TED corpus. Analyzing the actual ĉ values that were calculated in both experiments allows us to gain an insight as to why this was the case: for TED, ĉ is on average 1.35. In the case of News, however, ĉ has a maximum value of 1.14 and thus stays quite close to 1, which would equate to using DC. It is thus not surprising that there is no significant difference between DC and ĉDC.
Comparison to the Stochastic Case. Even if not realistic for commercial applications of SMT, our simulation study allows us to stochastically log large amounts of data in order to compare learning from deterministic logs to the standard case. As shown in Table 5, the relations between algorithms and even the absolute improvements are similar for stochastic and deterministic logging. Significance tests between each deterministic/stochastic experiment pair show a significant difference only in the case of DC/DR on TED data. However, the DR result still does not significantly outperform the best deterministic objective on TED (ĉDC). The p-values for all other experiment pairs lie above 0.1. From this we can conclude that it is indeed acceptable practice to log deterministically.

Analysis

Langford et al. (2008) show that counterfactual learning is impossible unless the logging system sufficiently explores the output space. This condition is seemingly not satisfied if the logging system acts according to a deterministic policy. Furthermore, since techniques such as "exploration over time" (Strehl et al., 2010) are not applicable to commercial SMT systems that are not frequently changed over time, the case of counterfactual learning for SMT seems hopeless. However, our experiments present evidence to the contrary. In the following, we present an analysis that aims to explain this apparent contradiction.
Implicit Exploration. In an experimental comparison between stochastic and deterministic logging for bandit learning in computational advertising, Chapelle and Li (2011) observed that varying contexts (representing user and page visited) induces enough exploration into ad selection such that learning becomes possible. A similar implicit exploration can also be attributed to the case of SMT: An identical input word or phrase can lead, depending on the other words and phrases in the input sentence, to different output words and phrases. Moreover, an identical output word or phrase can appear in different output sentences. Across the entire log, this implicitly performs the exploration on phrase translations that seems to be missing at first glance.
Smoothing by Multiplicative Control Variates. The DPM estimator can show a degenerate behavior in that the objective can be minimized simply by setting the probability of every logged data point to 1.0. This over-represents logged data that received low rewards, which is undesired. Furthermore, systems optimized with this objective cannot properly discriminate between the translations in the output space. This can be seen as a case of translation invariance of the objective, as has been previously noted by Swaminathan and Joachims (2015b): Adding a small constant c to the probability of every data point in the log increases the overall value of the objective without improving the discriminative power between high-reward and low-reward translations. DPM+R solves the degeneracy of DPM by defining a probability distribution over the logged data by reweighting via the multiplicative control variate. After reweighting, the objective value will decrease if the probability of a low-reward translation increased, as it takes away probability mass from other, higher reward samples. Because of this trade-off, balancing the probabilities over low-reward and high-reward samples becomes important, as desired.
Smoothing by Additive Control Variates.
Despite reweighting, DPM+R can still show a degenerate behavior by setting the probabilities of only the highest-reward samples to 1.0, while avoiding all other logged data points. This clearly hampers the generalization ability of the model since inputs that have been avoided in training will not receive a proper ranking of their translations.
The use of an additive control variate can solve this problem by using a reward estimate that takes the full output space into account. The objective will now be increased if the probability of translations with high estimated reward is increased, even if they were not seen in training. This will shift probability mass to unseen data with high estimated-reward, and thus improve the generalization ability of the model.
Conclusion
In this paper, we showed that off-policy learning from deterministic bandit logs for SMT is possible if smoothing techniques based on control variates are used. These techniques will avoid degenerate behavior in learning and improve generalization of empirical risk minimization over logged data. Furthermore, we showed that standard off-policy evaluation is applicable to SMT under stochastic logging policies.
To our knowledge, this is the first application of counterfactual learning to a complex structured prediction problem like SMT. Since our objectives are agnostic of the choice of the underlying model π_w, it is also possible to transfer our techniques to non-linear models such as neural machine translation. This will be a desideratum for future work.
Figure 1: Online learning from partial feedback.

Figure 2: Offline learning from partial feedback.
Table 1: Gradients of counterfactual objectives.
$$\nabla\hat{R}_{\text{DPM}} = \frac{1}{n}\sum_{t=1}^{n} \delta_t\,\pi_w(y_t|x_t)\,\nabla\log\pi_w(y_t|x_t)\,.$$
$$\nabla\hat{R}_{\text{DPM+R}} = \frac{1}{n}\sum_{t=1}^{n} \Big[\delta_t\,\bar{\pi}_w(y_t|x_t)\Big(\nabla\log\pi_w(y_t|x_t) - \sum_{u=1}^{n}\bar{\pi}_w(y_u|x_u)\,\nabla\log\pi_w(y_u|x_u)\Big)\Big]\,.$$
$$\nabla\hat{R}_{\hat{c}\text{DC}} = \frac{1}{n}\sum_{t=1}^{n} \Big[(\delta_t - \hat{c}\,\hat{\delta}_t)\,\bar{\pi}_w(y_t|x_t)\Big(\nabla\log\pi_w(y_t|x_t) - \sum_{u=1}^{n}\bar{\pi}_w(y_u|x_u)\,\nabla\log\pi_w(y_u|x_u)\Big) + \hat{c}\sum_{y\in\mathcal{Y}(x_t)}\hat{\delta}(x_t,y)\,\pi_w(y|x_t)\,\nabla\log\pi_w(y|x_t)\Big]\,.$$

Table 2: Number of sentences for in-domain data splits of SMT train, validation, and test data.
Table 4: Policy evaluation by macro averaged difference between estimated and ground truth BLEU on 10k stochastically logged data, averaged over 5 runs.
Table 5: BLEU increases for learning, over the out-of-domain baseline on validation and test set. Out-of-domain is the baseline and starting system and in-domain is the oracle system tuned on in-domain data with references. For the deterministic case, all results are statistically significant at p ≤ 0.001 with regards to the baseline. For the stochastic case, all results are statistically significant at p ≤ 0.002 with regards to the baseline, except for IPS+R on the News corpus.
¹ We will use both terms, reward and loss, in order to be consistent with the respective literature.
² Note that we introduce a slight bias by using π̄_w versus π_w in sampling probability and control variate.
³ http://www.statmt.org/wmt13/training-parallel-commoncrawl.tgz
Acknowledgments

The research reported in this paper was supported in part by the German research foundation (DFG), and in part by a research cooperation grant with the Amazon Development Center Germany.
Léon Bottou, Jonas Peters, Joaquin Quiñonero-Candela, Denis X. Charles, D. Max Chickering, Elon Portugaly, Dipanakar Ray, Patrice Simard, and Ed Snelson. 2013. Counterfactual reasoning and learning systems: The example of computational advertising. Journal of Machine Learning Research, 14:3207-3260.
Sébastian Bubeck and Nicolò Cesa-Bianchi. 2012. Regret analysis of stochastic and nonstochastic multi-armed bandit problems. Foundations and Trends in Machine Learning, 5(1):1-122.
Olivier Chapelle and Lihong Li. 2011. An empirical evaluation of Thompson sampling. In Advances in Neural Information Processing Systems (NIPS), Granada, Spain.
Miroslav Dudik, John Langford, and Lihong Li. 2011. Doubly robust policy evaluation and learning. In Proceedings of the 28th International Conference on Machine Learning (ICML), Bellevue, WA.
Chris Dyer, Adam Lopez, Juri Ganitkevitch, Johnathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), Uppsala, Sweden.
Michael C. Fu. 2006. Gradient estimation. In S.G. Henderson and B.L. Nelson, editors, Handbook in Operations Research and Management Science, volume 13, pages 575-616.
Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), Sofia, Bulgaria.
Nan Jiang and Lihong Li. 2016. Doubly robust off-policy value evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY.
Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the Machine Translation Summit, Phuket, Thailand.
Philipp Koehn and Josh Schroeder. 2007. Experiments in domain adaptation for statistical machine translation. In Proceedings of the Workshop on Machine Translation (WMT), Prague, Czech Republic.
Augustine Kong. 1992. A note on importance sampling using standardized weights. Technical Report 348, Department of Statistics, University of Chicago, Illinois.
Julia Kreutzer, Artem Sokolov, and Stefan Riezler. 2017. Bandit structured prediction for neural sequence-to-sequence learning. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL), Vancouver, Canada.
John Langford, Alexander Strehl, and Jennifer Wortman. 2008. Exploration scavenging. In Proceedings of the 25th International Conference on Machine Learning (ICML), Helsinki, Finland.
Lihong Li, Shunbao Chen, Jim Kleban, and Ankur Gupta. 2015. Counterfactual estimation and optimization of click metrics in search engines: A case study. In Proceedings of the International World Wide Web Conference (WWW), Florence, Italy.
Preslav Nakov, Francisco Guzmán, and Stephan Vogel. 2012. Optimizing for sentence-level bleu+1 yields short translations. In Proceedings of the 24th International Conference on Computational Linguistics (COLING), Bombay, India.
Eric W. Noreen. 1989. Computer Intensive Methods for Testing Hypotheses: An Introduction. Wiley, New York.
Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the Human Language Technology Conference and the 3rd Meeting of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), Edmonton, Canada.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Stroudsburg, PA.
F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12:2825-2830.
Doina Precup, Richard S. Sutton, and Satinder P. Singh. 2000. Eligibility traces for off-policy policy evaluation. In Proceedings of the Seventeenth International Conference on Machine Learning (ICML), San Francisco, CA.
Paul R. Rosenbaum and Donald B. Rubin. 1983. The central role of the propensity score in observational studies for causal effects. Biometrika, 70(1):41-55.
Sheldon M. Ross. 2013. Simulation, fifth edition. Elsevier.
Artem Sokolov, Julia Kreutzer, Christopher Lo, and Stefan Riezler. 2016. Stochastic structured prediction under bandit feedback. In Advances in Neural Information Processing Systems (NIPS), Barcelona, Spain.
Alexander L. Strehl, John Langford, Lihong Li, and Sham M. Kakade. 2010. Learning from logged implicit exploration data. In Advances in Neural Information Processing Systems (NIPS), Vancouver, Canada.
Richard S. Sutton and Andrew G. Barto. 1998. Reinforcement Learning. An Introduction. The MIT Press.
Adith Swaminathan and Thorsten Joachims. 2015a. Batch learning from logged bandit feedback through counterfactual risk minimization. Journal of Machine Learning Research, 16:1731-1755.
Adith Swaminathan and Thorsten Joachims. 2015b. The self-normalized estimator for counterfactual learning. In Advances in Neural Information Processing Systems (NIPS), Montreal, Canada.
Philip S. Thomas and Emma Brunskill. 2016. Data-efficient off-policy policy evaluation for reinforcement learning. In Proceedings of the 33rd International Conference on Machine Learning (ICML), New York, NY.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC), Istanbul, Turkey.
Matthew D. Zeiler. 2012. ADADELTA: An adaptive learning rate method. ArXiv:1212.5701 [cs.LG].
| [] |
[
"Did you hear that? Adversarial Examples Against Automatic Speech Recognition",
"Did you hear that? Adversarial Examples Against Automatic Speech Recognition"
] | [
"Moustafa Alzantot malzantot@ucla.edu \nDepartment of Computer Science\nUniversity of California\n90095Los Angeles Los AngelesCA\n",
"Bharathan Balaji bbalaji@ucla.edu \nDepartment of Computer Science\nUniversity of California\n90095Los Angeles Los AngelesCA\n",
"Mani Srivastava \nDepartment of Computer Science\nUniversity of California\n90095Los Angeles Los AngelesCA\n"
] | [
"Department of Computer Science\nUniversity of California\n90095Los Angeles Los AngelesCA",
"Department of Computer Science\nUniversity of California\n90095Los Angeles Los AngelesCA",
"Department of Computer Science\nUniversity of California\n90095Los Angeles Los AngelesCA"
] | [] | Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them into producing incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against a speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameters and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change the human listener's perception of the audio clip for 89% of our samples, as evaluated in our human study. | null | [
"https://arxiv.org/pdf/1801.00554v1.pdf"
] | 34,941,466 | 1801.00554 | b74c39dc11ed0fa62c0b4e4e4428267da413f589 |
Did you hear that? Adversarial Examples Against Automatic Speech Recognition
Moustafa Alzantot malzantot@ucla.edu
Department of Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
Bharathan Balaji bbalaji@ucla.edu
Department of Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
Mani Srivastava
Department of Computer Science
University of California, Los Angeles
Los Angeles, CA 90095
Did you hear that? Adversarial Examples Against Automatic Speech Recognition
Speech is a common and effective way of communication between humans, and modern consumer devices such as smartphones and home hubs are equipped with deep learning based accurate automatic speech recognition to enable natural interaction between humans and machines. Recently, researchers have demonstrated powerful attacks against machine learning models that can fool them into producing incorrect results. However, nearly all previous research in adversarial attacks has focused on image recognition and object detection models. In this short paper, we present a first of its kind demonstration of adversarial attacks against a speech classification model. Our algorithm performs targeted attacks with 87% success by adding small background noise without having to know the underlying model parameters and architecture. Our attack only changes the least significant bits of a subset of audio clip samples, and the noise does not change the human listener's perception of the audio clip for 89% of our samples, as evaluated in our human study.
Introduction
Recent progress in machine learning and artificial intelligence is shaping the way we interact with our everyday devices. Speech-based interaction is one of the most effective means and is widely used in personal assistants on smartphones (e.g. Siri, Google Assistant). These systems rely on running a speech classification model to recognize the user's voice commands. Although traditional speech recognition models were based on hidden Markov models (HMMs), deep learning models are currently the state of the art for automatic speech recognition (ASR) [7], [2] and speech generation [9]. Despite their outstanding accuracy in many applications, recent research has shown that neural networks are easily fooled by malicious attackers who can force the model to produce a wrong result or even a specific targeted output value. This kind of attack, known as adversarial examples, has been demonstrated with high success against image recognition and object detection models. However, to the best of our knowledge there have been no successful equivalent attacks against automatic speech recognition (ASR) models.
In this paper, we present an attack approach that fools a neural-network-based speech recognition model. Similar to adversarial example generation for images, the attacker perturbs benign (correctly classified) audio files by adding a small amount of noise to cause the ASR model to misclassify the input or to produce a specific target output label. The added noise is small and will be perceived by a human listening to the attack audio clip as background noise; it will not change how a human recognizes the audio file. However, it will be sufficient to change the model prediction from the true label to another target label chosen by the attacker.
Existing methods for adversarial example generation such as FGSM [6], the Jacobian-based Saliency Map Attack [10], DeepFool [8], and Carlini [5] depend on computing the gradient of some output of the network with respect to its input in order to compute the attack noise. For example, in FGSM [6] the adversarial noise is computed as
$$x_{adv} = x + \epsilon\,\mathrm{sign}\big(\nabla_x J(\theta, x, y)\big).$$
The gradient needed to compute adversarial noise can be efficiently computed using backpropagation, assuming the attacker knows the model architecture and parameters. However, backpropagation, being based on the chain rule, requires the ability to compute the derivative of each network layer's output with respect to the layer's inputs. While this is easily done in image recognition models, where all layers in the pipeline are differentiable, it becomes problematic to apply the same techniques to speech recognition models, as they rely on the Mel Frequency Cepstral Coefficients (MFCCs) as features of the input audio data. Therefore, the first layers of an ASR model typically pre-process the raw audio by computing the spectrogram and the MFCC inputs. These two layers are not differentiable, and there is no efficient way to compute the gradient through them. While the training process of the neural network does not require backpropagation through them because the MFCCs are considered model inputs, the generation of adversarial examples would require this gradient. Therefore, gradient-based methods [6, 10, 5, 8] to generate adversarial noise are not directly applicable to speech recognition models based on MFCCs.
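For readers unfamiliar with FGSM, a toy sketch against a fully differentiable model of our own choosing (a logistic regression, not any model from this paper) makes the dependence on the input gradient explicit:

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.01):
    """FGSM on a toy end-to-end differentiable classifier:
    x_adv = x + eps * sign(grad_x J(theta, x, y)). For an MFCC-based ASR
    pipeline this gradient cannot be pushed back through the feature
    extraction, which is why the attack in this paper is gradient-free.
    """
    p = 1.0 / (1.0 + np.exp(-(w.dot(x) + b)))  # model confidence for class 1
    grad_x = (p - y) * w                       # d(cross-entropy)/dx
    return x + eps * np.sign(grad_x)
```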
Our algorithm generates adversarial noise to perform targeted attacks on ASR. To avoid computing MFCC derivatives, we use a genetic algorithm, which is a gradient-free optimization method. Our genetic algorithm based method does not require knowledge of the victim model's architecture or parameters and can be used for black-box attacks without training substitute models. We evaluate our attack using the speech commands recognition model [12] and the speech commands dataset [14]. Our results show that targeted attacks succeed 87% of the time while adding noise to only the 8 least-significant bits of a subset of samples in a 16-bits-per-sample audio file. We evaluate the effect of noise on human perception of the audio clip with a user study. Results show that the noise did not change the human decision in 89% of our samples, and listeners still recognize the audio as its original label.
Adversarial Attacks on Audio:
Adversarial examples refer to inputs that are maliciously crafted by an attacker to fool machine learning models. Adversarial examples are typically generated by adding noise to inputs that are correctly classified by the model, where the added noise should be imperceptible to humans. To create adversarial examples for speech recognition models, an attacker takes a legitimate audio file and perturbs it by adding an imperceptible noise that causes the machine learning speech recognition model to misclassify the input and possibly produce a desired target label. We demonstrate this in Figure 1, where the attacker adds noise to an audio clip of the word "YES" that the machine learning model classifies as "NO" while the human still recognizes it as "YES".
Prior Audio Attacks: While recent research uncovered potential attacks against speech recognition models, the demonstrated attacks do not represent an instantiation of adversarial examples [6] as witnessed with image recognition models. Backdoor [11] exploits the non-linearities of microphones in smart devices to play audio at a frequency that is inaudible to humans (40 kHz), but creates a shadow in the audible range of the microphone. Backdoor harnesses this phenomenon to block microphones in places such as movie theaters. However, the attack requires an array of specialized high-frequency speakers. DolphinAttack [13] exploits the same non-linearities in microphones to create commands audible to speech assistants but inaudible to humans. Notably, in both methods [11, 13] the attack sound is not heard by the human at all, while an adversarial example should be recognized by a human as benign while misclassified by the speech recognition model. The attack closest to adversarial examples is "Hidden Voice Commands" by Carlini et al. [4], which generates sounds that are unintelligible to human listeners but interpreted as commands by devices. Nevertheless, it does not represent an adversarial attack because the samples it generates are aimed to be 'unrecognizable' by humans, which can still lead to suspicion. A more stealthy and powerful attack will maintain the listener's interpretation of the attack samples as something benign.
Threat Model: Our attack assumes a black-box threat model where the attacker knows nothing about the model architecture and parameter values, but is capable of querying the model results.
Precisely, the victim model is used by the attacker as a black-box function $f(x)$ while mounting his attacks, such that $f : \mathcal{X} \to [0,1]^K$, where $\mathcal{X}$ is the space of all possible input audio files and the output $[0,1]^K$ represents the prediction probability scores over each of the possible $K$ output labels. The output values are obtained from the final Softmax layer commonly used in classification models.

Generating Adversarial Speech Commands

We use a gradient-free genetic algorithm based approach to generate our adversarial examples, as shown in Algorithm 1. The algorithm accepts an original benign audio clip x and a target label t as its inputs. It creates a population of candidate adversarial examples by adding random noise to a subset of the samples within the given audio clip. To minimize the noise effect on human perception, we add noise only to the least-significant bits of a random subset of audio samples. We compute a fitness score for each population member based on the prediction score of the target label, and produce the next generation of adversarial examples from the current generation by applying selection, crossover, and mutation. Selection means that population members with higher fitness values are more likely to become part of the next generation. Crossover takes pairs of population members and mixes them to generate a new 'child' that will be added to the new population. Finally, mutation adds random noise with very small probability to the child before passing it to the next generation. The algorithm iterates on this process for a preset number of epochs or until the attack is found successful.
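As a complement to the pseudocode of Algorithm 1, the loop fits in a short self-contained Python sketch. This is our illustration, assuming a black-box `predict` function f(x) that returns a probability vector over labels; the population size, temperature, noise fraction, and mutation rate are illustrative choices, not the values from the released implementation:

```python
import numpy as np

def targeted_attack(x, target, predict, pop_size=20, max_iter=500,
                    noise_frac=0.005, temp=0.01, mutation_p=0.0005):
    """Gradient-free targeted attack in the spirit of Algorithm 1.
    `x` holds 16-bit audio samples; noise only touches the 8
    least-significant bits of a random subset of them."""
    rng = np.random.default_rng(0)

    def add_noise(clip):
        out = clip.astype(np.int32)
        idx = rng.random(out.size) < noise_frac
        out[idx] += rng.integers(-255, 256, idx.sum())
        return np.clip(out, -2**15, 2**15 - 1)

    population = [add_noise(x) for _ in range(pop_size)]
    best = population[0]
    for _ in range(max_iter):
        # Fitness: the black-box score of the target label.
        scores = np.array([predict(p)[target] for p in population])
        best = population[int(scores.argmax())]
        if int(np.argmax(predict(best))) == target:
            return best                           # attack succeeded
        probs = np.exp(scores / temp)
        probs /= probs.sum()                      # selection probabilities
        children = []
        for _ in range(pop_size):
            i, j = rng.choice(pop_size, size=2, p=probs)
            mask = rng.random(x.size) < 0.5       # uniform crossover
            child = np.where(mask, population[i], population[j])
            flip = rng.random(x.size) < mutation_p
            child[flip] = add_noise(child)[flip]  # sparse mutation
            children.append(child)
        population = children
    return best                                   # may be unsuccessful
```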
Due to space constraints, we omit the detailed description of some subroutines and hyper-parameters used in our algorithm. To assist other researchers in reproducing our results, we have made our implementation (with the same hyper-parameter values used for the evaluation results reported in this paper) available at https://git.io/vFs8X.
Evaluation
Speech Recognition Model: We evaluate our attack against the Speech Commands classification model [12] implemented in the TensorFlow [1] software framework. This model is an efficient and light-weight keyword spotting model based on a convolutional neural network, and it achieves 90% classification accuracy on the speech commands dataset [14]. The speech commands dataset [14] is a crowd-sourced dataset consisting of 65,000 audio files. Each file is a one-second audio clip of a single word like: "yes", "no", "up", "down", "left", "right", "on", "off", "stop", or "go".
Targeted Attack Results: For the targeted attack experiment, we randomly select 500 audio clips from the dataset, at 50 clips per label (after we exclude the "silence" and "unknown" labels). We produce adversarial examples from each file such that it will be classified as a different target label. For example, for an audio clip of "yes", we produce adversarial examples that are targeted to be classified as "no", "up", "down", "left", etc. This means for each input audio clip we produce 9 adversarial examples, leading to a total of 4500 output files. Samples of our targeted attack output can be listened to at https://git.io/vFs42. Figure 2 shows the result of our targeted attack. Our algorithm succeeded 87% of the time in performing a targeted adversarial attack between any (source, target) pair. We limit the number of iterations in our algorithm to 500. If the algorithm fails to find a successful targeted attack within 500 iterations, we declare failure. The median time to generate an adversarial audio file is 37 seconds on a desktop machine with an Nvidia Titan X GPU. A more successful attack may be possible if we increase the noise limit or the number of iterations.
Human Perception Results:
In order to assess the effect of the added adversarial noise on human listeners, we conducted a human study in which we recruited 23 participants and asked them to listen to and label successful adversarial audio clips that we generated. In total, the study participants labeled 1,500 audio clips. The participants were not told the source or target labels of the audio clips they were given.

    Attack labeled as source    Attack labeled as target    Attack labeled as other
             89%                          0.6%                        9.4%

    Table 1: Human perception of adversarial examples generated by our attack.

Results from our human experiment, shown in Table 1, indicate that 89% of participants were not affected by the added noise: they still labeled the audio they heard as the source label, while the machine learning model labeled all of these clips as the target label.
Discussion
In this section, we discuss the limitations and possible future directions for our study.
Using MFCC inversion for a white-box attack: Our attack algorithm does not require knowledge of the model architecture or its parameters; it only uses the victim model as a black box. In a white-box scenario, where an attacker can exploit knowledge of the victim model, a stronger attack may be possible. However, this approach faces the hurdle of back-propagating through the MFCC and spectrogram layers. One idea is to compute the adversarial noise with respect to the MFCC layer outputs, treated as the classification model's inputs, and then use MFCC inversion [3] to reconstruct the adversarial audio. Further experiments should be done to evaluate the quality of this approach.
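As a rough illustration of this idea (a sketch under strong assumptions, not a tested attack), the snippet below perturbs MFCC features with a single targeted gradient step against a hypothetical differentiable classifier clf, then reconstructs audio with librosa's MFCC inversion. The reconstruction is lossy, so the resulting clip would need to be re-checked against the victim model.

```python
import librosa
import torch
import torch.nn.functional as F

def whitebox_mfcc_attack(audio, sr, target, clf, eps=0.1, n_mfcc=40):
    # Extract MFCC features from the benign waveform.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)
    x = torch.tensor(mfcc, dtype=torch.float32, requires_grad=True)
    # One targeted gradient step: descend the loss toward the target label.
    loss = F.cross_entropy(clf(x.unsqueeze(0)), torch.tensor([target]))
    loss.backward()
    mfcc_adv = (x - eps * x.grad.sign()).detach().numpy()
    # Reconstruct audio from the perturbed features; the inversion is only
    # approximate, so the realized perturbation differs from mfcc_adv.
    return librosa.feature.inverse.mfcc_to_audio(mfcc_adv, sr=sr)
```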
Evaluation against a larger ASR model and complete sentence generation: An interesting question is whether the more powerful state-of-the-art ASR models are also affected by adversarial examples, and whether we can generate adversarial sentences instead of just adversarial audio clips of single words.
Untargeted attacks: We reported the results of our targeted attacks, where the attacker specifies the desired output label. In addition, we achieved a 100% success rate with our untargeted attacks. Although the untargeted attack is considered a weaker type of attack, further study of untargeted attacks can be useful for assessing model robustness against adversarial noise.
Over-the-air attack: In our evaluation, we assume that the attacker feeds the audio clip directly to the classification model. However, a more realistic and powerful attack would succeed even when the adversarial audio clip is played from a speaker and the victim model picks up the audio through a microphone. This is harder to achieve, and we plan to study it in follow-up research.
Figure 1: Adversarial attacks on speech commands: a malicious attacker adds small noise to the audio such that it is misclassified by the speech recognition model but does not change human perception.
Algorithm 1: Generation of Targeted Adversarial Audio Files using a Genetic Algorithm

    Inputs : original benign example x; target classification label t
    Output : targeted attack example x_adv
    /* Initialize the population of candidate solutions */
    population <- InitializePopulation(x)
    iter_num <- 0
    while iter_num < max_iter do
        scores <- ComputeFitness(population)
        x_adv <- population[argmax(scores)]
        if argmax f(x_adv) = t then
            break    // Attack succeeded, stop early.
        end
        /* Compute selection probabilities. */
        select_probs <- Softmax(scores / Temp)
        Next_population <- { }
        for i <- 1 to size do
            Select parent_1 from population according to probabilities select_probs
            Select parent_2 from population according to probabilities select_probs
            child <- Crossover(parent_1, parent_2)
            Next_population <- Next_population U {child}
        end
        foreach child of Next_population do
            Mutate(child)
        end
        population <- Next_population
        iter_num <- iter_num + 1
    end
    return x_adv
Figure 2: Percentage of success for every (source, target) targeted adversarial attack.
Acknowledgments

This research was supported in part by the NIH Center of Excellence for Mobile Sensor Data-to-Knowledge (MD2K) under award 1-U54EB020404-01, the U.S. Army Research Laboratory and the UK Ministry of Defence under Agreement Number W911NF-16-3-0001, and the National Science Foundation under award # CNS-1705135. Any findings in this material are those of the author(s) and do not reflect the views of any of the above funding agencies. The U.S. and U.K. Governments are authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation hereon.
References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, et al. TensorFlow: Large-scale machine learning on heterogeneous distributed systems. arXiv preprint arXiv:1603.04467, 2016.

[2] D. Amodei, S. Ananthanarayanan, R. Anubhai, J. Bai, E. Battenberg, C. Case, J. Casper, B. Catanzaro, Q. Cheng, G. Chen, et al. Deep Speech 2: End-to-end speech recognition in English and Mandarin. In International Conference on Machine Learning, pages 173-182, 2016.

[3] L. E. Boucheron and P. L. De Leon. On the inversion of mel-frequency cepstral coefficients for speech enhancement applications. In Signals and Electronic Systems, 2008. ICSES'08. International Conference on, pages 485-488. IEEE, 2008.

[4] N. Carlini, P. Mishra, T. Vaidya, Y. Zhang, M. Sherr, C. Shields, D. Wagner, and W. Zhou. Hidden voice commands. In USENIX Security Symposium, pages 513-530, 2016.

[5] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In Security and Privacy (SP), 2017 IEEE Symposium on, pages 39-57. IEEE, 2017.

[6] I. J. Goodfellow, J. Shlens, and C. Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

[7] A. Graves, A.-r. Mohamed, and G. Hinton. Speech recognition with deep recurrent neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 6645-6649. IEEE, 2013.

[8] S.-M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard. DeepFool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574-2582, 2016.

[9] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

[10] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In Security and Privacy (EuroS&P), 2016 IEEE European Symposium on, pages 372-387. IEEE, 2016.

[11] N. Roy, H. Hassanieh, and R. Roy Choudhury. BackDoor: Making microphones hear inaudible sounds. In Proceedings of the 15th Annual International Conference on Mobile Systems, Applications, and Services, pages 2-14. ACM, 2017.

[12] T. N. Sainath and C. Parada. Convolutional neural networks for small-footprint keyword spotting. In Sixteenth Annual Conference of the International Speech Communication Association, 2015.

[13] L. Song and P. Mittal. Inaudible voice commands. arXiv preprint arXiv:1708.07238, 2017.

[14] P. Warden. Speech commands: A public dataset for single-word speech recognition. Dataset available, 2017.
Improving Probabilistic Models in Text Classification via Active Learning*

Mitchell Bosley (Ph.D. Candidate, Department of Political Science, University of Michigan; mcbosley@umich.edu)
Saki Kuzushima (Ph.D. Candidate, Department of Political Science, University of Michigan; skuzushi@umich.edu)
Ted Enamorado (Assistant Professor, Department of Political Science, Washington University in St. Louis; Siegle Hall 244, One Brookings Dr., St. Louis, MO 63130-4899; ted@wustl.edu; www.tedenamorado.com)
Yuki Shiraito (Assistant Professor, Department of Political Science, University of Michigan; Center for Political Studies, Institute for Social Research, 426 Thompson Street, Ann Arbor, MI 48104-2321; shiraito@umich.edu; shiraito.github.io)

arXiv:2202.02629 (https://export.arxiv.org/pdf/2202.02629v2.pdf). First draft: September 10, 2020. This draft: September 23, 2022.

* We thank Ken Benoit, Yaoyao Dai, Chris Fariss, Yusaku Horiuchi, Kosuke Imai, Walter Mebane, Daichi Mochihashi, Kevin Quinn, and audiences at the 2020 Annual Meeting of the American Political Science Association, the 2021 Annual Meeting of the Midwest Political Science Association, the 11th Annual Conference on New Directions in Analyzing Text as Data, and the 2022 Summer Meeting of the Japanese Society for Quantitative Political Science, and seminar participants at the University of Michigan and members of the Junior Faculty Workshop at Washington University in St. Louis for useful comments and suggestions.
Social scientists often classify text documents to use the resulting labels as an outcome or a predictor in empirical research. Automated text classification has become a standard tool, since it requires less human coding. However, scholars still need many human-labeled documents to train automated classifiers. To reduce labeling costs, we propose a new algorithm for text classification that combines a probabilistic model with active learning. The probabilistic model uses both labeled and unlabeled data, and active learning concentrates labeling efforts on documents that are difficult to classify. Our validation study shows that the classification performance of our algorithm is comparable to state-of-the-art methods at a fraction of the computational cost. Moreover, we replicate two recently published articles and reach the same substantive conclusions with only a small proportion of the original labeled data used in those studies. We provide activeText, an open-source software package to implement our method.

1 See, e.g., Grimmer and Stewart (2013) for an excellent overview of these methods in political science.
Introduction
As the amount and diversity of available information have rapidly increased, social scientists are increasingly resorting to multiple forms of data to answer substantive questions. In particular, the use of text-as-data in social science research has exploded over the past decade. 1 Document classification has been the primary task in political science, with researchers classifying documents such as legislative speeches (Peterson and Spirling, 2018; Motolinia, 2021), correspondence to administrative agencies (Lowande, 2018, 2019), public statements of politicians (Airoldi et al., 2007; Stewart and Zhukov, 2009), news articles (Boydstun, 2013), election manifestos (Catalinac, 2016), social media posts (King et al., 2017), treaties (Spirling, 2012), religious speeches (Nielsen, 2017), and human rights text (Cordell et al., 2021; Greene et al., 2019) into two or more categories. Researchers use the category labels of documents produced by the classification task as the outcome or predictor variable to test substantive hypotheses.
Statistical methods are used for document classification. Although text data in political science is typically smaller than data in some other fields (where millions of documents are common), the cost of having human coders categorize all documents is still prohibitively high. Relying on automated text classification allows researchers to avoid classifying all documents in their data set manually.
Broadly speaking, there are two types of classification methods: supervised and unsupervised algorithms. Supervised approaches use labels from a set of hand-coded documents to categorize unlabeled documents, whereas unsupervised methods cluster documents without needing labeled documents. Both of these methods have downsides, however: in the former, hand-coding documents is labor-intensive and costly; in the latter, the substantive interpretation of the categories discovered by the clustering process can be difficult.
Supervised methods are more popular in political science research because substantive interpretability is important when using category labels to test substantive hypotheses, and it justifies the cost associated with labeling many documents manually. For example, Gohdes (2020) hand-labeled about 2,000 documents, and Park et al. (2020) used 4,000 human-coded documents. These numbers are much smaller than the sizes of their entire data sets (65,274 and 2,473,874, respectively); however, having human coders label thousands of (potentially long and complicated) documents still requires a large amount of researchers' time and effort.
We propose activeText, a new algorithm that augments a probabilistic mixture model with active learning. We use the mixture model of Nigam et al. (2000) to combine the information from both labeled and unlabeled documents, making use of all available information. In the model, latent classes are observed as labels for labeled documents and estimated as a latent variable for unlabeled documents. Active learning is a technique that reduces the cost of hand-coding. It uses measures of label uncertainty to iteratively flag highly informative documents to reduce the number of labeled documents needed to train an accurate classifier, particularly when the classification categories are imbalanced.
Our validation study shows that our model outperforms Support Vector Machines (SVM), a popular supervised learning model, when both models use active learning. We also show that our algorithm performs favorably in terms of classification accuracy when compared to an off-the-shelf version of Bidirectional Encoder Representations from Transformers (BERT), a state-of-the-art classification model in natural language processing, while using several orders of magnitude fewer computational resources. Furthermore, because our model is generative, it is straightforward to use a researcher's domain expertise, such as keywords associated with a category, to improve text classification.
We also use activeText to replicate two published political science studies and show that the authors of these papers could have reached the same substantive conclusions with fewer labeled documents. The first study is Gohdes (2020), which focuses on the relationship between internet access and the form of state violence. The second study is Park et al. (2020), which analyzes the association (or the lack thereof) between information communication technologies (ICTs) and the U.S. Department of State's reports on human rights. For both studies, we replicate their text classification tasks using activeText and conduct the same empirical analyses using the document labels. Our replication analysis recovers their original conclusions (a higher level of internet access is associated with a larger proportion of targeted killings, and ICTs are not associated with the sentiment of the State Department's human rights reports, respectively) using far fewer labeled documents. These replication exercises demonstrate that activeText performs well on complex documents commonly used in political science research, such as human rights reports.
We provide an R package called activeText with the goal of providing researchers from all backgrounds with easily accessible tools to minimize the amount of hand-coding of documents and improve the performance of classification models for their own work.
Before proceeding to a description of our algorithm and analysis, we first offer an accessible primer on the use of automated text classification. We introduce readers to several basic concepts in machine learning: tokenization, preprocessing, and the encoding of a corpus of text data into a matrix; the difference between supervised and unsupervised learning, between discriminative and generative models, and between active and passive learning; and a set of tools for the evaluation of classification models. Readers who are already well acquainted with these concepts may prefer to skip directly to the description of our model in Section The Method.
Using Machine Learning for Text Classification

Encoding Text in Matrix Form
Suppose that a researcher has a collection of social media text data, called a corpus, and wishes to classify whether each text in a corpus is political (e.g., refers to political protest, human rights violations, unfavorable views of a given candidate, targeted political repression, etc.) or not solely based on the words used in a given observation. 2 Critically, the researcher does not yet know which of the texts are political or not at this point.
The researcher must first choose how to represent text as a series of tokens, and decide which tokens to include in their analysis. This involves a series of sub-choices, such as whether each token represents an individual word (such as "political") or a combination of words (such as "political party"), whether words should be stemmed or not (e.g., reducing both "political" and "politics" to their common stem "politic"), and whether to remove stopwords (such as "in", "and", "on", etc.); these choices are collectively referred to as pre-processing. 3 The researcher must then choose how to encode information about these tokens in matrix form. The most straightforward way to accomplish this is a bag-of-words approach, where the corpus is transformed into a document-feature matrix (DFM) X with n rows and m columns, where n is the number of documents and m is the number of tokens, which are more generally referred to as features. 4 Each element of the DFM encodes the frequency with which a token occurs in a given document. 5 Once the researcher chooses how to encode their corpus as a matrix, she is left with a set of features corresponding to each document X and an unknown vector of true labels Y, where each element of Y indicates whether a given document is political or not. Then, we can pose the classification question as follows: given X, how might we best learn Y, that is, whether each document is political or not?

2 For simplicity, the exposition here focuses on a binary classification task; however, our proposed method can be extended to multiple classes, e.g., classifying a document as either a positive, negative, or neutral position about a candidate. See Sections The Method and Reanalysis with Fewer Human Annotations, and Supplementary Information (SI) ?? for more details.

3 For a survey of pre-processing techniques and their implications for political science research, see Denny and Spirling (2018).

4 Note that in the machine learning literature, the concept typically described by the term "variable" is communicated using the term "feature."

5 An alternative to the bag-of-words approach is to encode tokens as word embeddings, where in addition to the matrix summarizing the incidences of words in each document, neural network models are used to create vector representations of each token. In this framework, each token is represented by a vector of some arbitrary length, and tokens that are used in similar contexts in the corpus (such as "minister" and "cabinet") will have similar vectors. While this approach is more complicated, it yields considerably more information about the use of words in the corpus than the simple count that the bag-of-words approach does. For an accessible introduction to the construction and use of word embeddings in political science research, see Rodriguez and Spirling (2022). For a more technical treatment, see Pennington et al. (2014).
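Returning to the bag-of-words encoding described above, the following is a minimal sketch of building a small DFM; the use of scikit-learn's CountVectorizer is our illustrative choice, and any equivalent tokenizer would do.

```python
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "The 2020 Presidential Election had the highest turnout in US history.",
    "Qatar is ready to host the FIFA World Cup this coming November.",
]
vectorizer = CountVectorizer(stop_words="english")  # drop English stopwords
X = vectorizer.fit_transform(corpus)                # n x m sparse DFM
print(vectorizer.get_feature_names_out())           # the m tokens (features)
print(X.toarray())                                  # token counts per document
```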
Supervised vs. Unsupervised Learning
A researcher must then choose whether to use a supervised or unsupervised approach to machine learning. 6 The supervised approach to this problem would be to (1) obtain true labels for some of the documents using human coding, e.g., an expert classifies documents such as the following news headline by CNN: "White House says Covid-19 policy unchanged despite President Biden's comments that the 'pandemic is over'" as political or not; (2) learn the relationship between the text features encoded in the matrix X and the true label encoded in the vector Y for the documents with known labels, in other words, learn the importance of words such as "policy", "President", "Biden", and "pandemic" in explaining whether a document refers to politics or not; 7 and (3) using the learned association between the text data and the known labels, predict whether the remaining documents in the corpus (that is, those that were not coded by a human) are political or not.
In contrast, an unsupervised approach would not obtain the true labels of some of the documents. Rather, a researcher using an unsupervised approach would choose a model that clusters documents from the corpus that have common patterns of word frequency. 8 Using the assignment of documents to clusters, the researcher would then use some scheme to decide which of the clusters corresponds to the actual outcome of interest: whether a document is political or not.
The main advantage of a supervised approach over an unsupervised approach is the direct interpretability of results, since an unsupervised approach requires translating clusters into substantively meaningful classifications. Direct interpretability also allows for a more straightforward evaluation of model performance, in terms of the distance between the predictions made by the supervised learning algorithm and the true values of Y. Because such an objective measure does not exist in unsupervised learning, the researcher needs to rely on heuristics to assess the adequacy of the algorithm (Hastie et al., 2009). 9 On the other hand, the main disadvantage of a supervised approach is that obtaining labels for the documents in the corpus is often time-consuming and costly. For example, it requires expert knowledge to classify each document as either political or non-political.
Researchers using an unsupervised approach instead will avoid this cost since they do not require a set of labels a priori.
Semi-supervised methods combine the strengths of supervised and unsupervised approaches to improve classification (Miller and Uyar, 1996; Nigam et al., 2000). These methods are particularly useful in situations where there is a large amount of unlabeled data and acquiring labels is costly. A semi-supervised model proceeds similarly to the supervised approach, with the difference that the model learns the relationship between the matrix of text data X and the classification outcome Y using information from both the labeled and unlabeled data. 10 Since a supervised approach learns the relationship between the labels and the data solely from the labeled documents, a classifier trained with a supervised approach may be less accurate than if it were provided information from both the labeled and unlabeled documents (Nigam et al., 2000).
Discriminative vs. Generative Models
In addition to choosing a supervised, unsupervised, or semi-supervised approach, a researcher must also choose whether to use a discriminative or generative model. As noted by Ng and Jordan (2001) and Bishop and Lassarre (2007), when using a discriminative model (e.g., logistic regression, SVM, etc.), the goal is to directly estimate the probability of the classification outcomes Y given the text data X, i.e., to directly estimate p(Y |X). In contrast, when using a generative model (e.g., Naive Bayes), learning the relationship between Y and X is a two-step process. In the first step, the likelihood of the matrix of text data X and outcome labels Y is estimated given the data and a set of parameters θ that encode structural assumptions about how the data are generated, i.e., p(X, Y |θ) is estimated directly. In the second step, the researcher uses Bayes' rule to calculate the probability of the outcome vector given the features and the learned parameters, i.e., p(Y |X; θ).
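In symbols, the second step of the generative approach applies Bayes' rule to the quantities estimated in the first step:

$$p(Y = k \mid X; \theta) = \frac{p(X \mid Y = k; \theta)\, p(Y = k; \theta)}{\sum_{k'} p(X \mid Y = k'; \theta)\, p(Y = k'; \theta)}$$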
In addition to allowing for the use of unlabeled data (which reduces labeling costs), one of the main benefits of a generative rather than a discriminative model is that the researcher can include information they know about the data generating process by choosing appropriate functional forms. 11 This can help prevent overfitting when the amount of data in a corpus is small. 12 Conversely, because it is not necessary to model the data generating process directly, the main benefit of a discriminative rather than generative model is simplicity (in general it involves estimating fewer parameters). Discriminative models are therefore appropriate in situations where the amount of data in a corpus is very large, and/or when the researcher is unsure about the data-generating process, which could lead to mis-specification (Bishop and Lassarre, 2007). 13
Model Evaluation
A researcher must also decide when she is satisfied with the predictions generated by the model. In most circumstances, the best way to evaluate the performance of a classification algorithm is to reserve a subset of the corpus for validation, sometimes referred to as a validation and/or test set. At the very beginning of the classification process, a researcher puts aside and labels a set of randomly chosen documents that the active learning algorithm does not have access to. 14 Then, after training the model on the remainder of the documents (often called the training set), the researcher should generate predictions for the documents in the validation set using the trained model. By comparing the predicted labels generated by the model to the actual labels, the researcher can evaluate how well the model does at predicting the correct labels.
A common tool for comparing the predicted labels to the actual labels is a confusion matrix. In a binary classification setting, a confusion matrix will be a 2 by 2 matrix, with rows corresponding to the actual label, and the columns corresponding to the predicted label. Returning to our running example, imagine that the classification is to predict whether documents are political or not, Table 1 shows the corresponding confusion matrix. In this scenario, True Positives (TP) are the number of documents that the model predicts to be about politics and that is in fact labeled as such. Correspondingly, True Negatives (TN), are the number of documents that the model predicts to be non-political and is labeled as such in the validation set. A False Negative (FN) occurs when the model classifies a document as non-political, but according to the validation set, the document is about politics. Similarly, a False Positive (FP) occurs when the model classifies as political a document that is nonpolitical.
Using the confusion matrix, the researcher can calculate a variety of evaluation statistics. Some of the most common of these are accuracy, precision, and recall. Accuracy is the proportion of documents that have been correctly classified. Precision is used to evaluate the false positivity rate and is the proportion of the model's positive classifications that are true positives. As the number of false positives increases (decreases), precision decreases (increases). Recall is used to evaluate the false negativity rate, and is the proportion of the actual positive documents that are true positives. As the number of false negatives increases, recall decreases, and vice-versa. Accuracy, precision, and recall can be formally calculated as:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}, \qquad \text{Precision} = \frac{TP}{TP + FP}, \qquad \text{Recall} = \frac{TP}{TP + FN}$$

When the proportion of political and non-political documents in a corpus is balanced, accuracy is an adequate measure of model performance. However, it is often the case in text classification that the corpus is unbalanced, and the proportion of documents associated with one class is low. When this is the case, accuracy does a poor job at model evaluation. Consider the case when 99 percent of documents are non-political, and 1 percent are about politics. A model which simply predicts that all documents belong to the non-politics class would have an accuracy score of 0.99, but would be poorly suited to the actual classification task. In contrast, the precision and recall rates would be 0, which would signal to the researcher that the model does a poor job of classifying documents as political. Precision and recall are not perfect measures of model performance, however. There is a fundamental trade-off involved in controlling the false positivity and false negativity rates: you can have few false positives if you are content with an extremely high number of false negatives, and you can have few false negatives if you are content with an extremely high number of false positives.

13 Another benefit of generative models is that they can yield better estimates of how certain we are about the relationship between the outcome and the features. This is the case when a researcher uses an inference algorithm like Markov chain Monte Carlo (MCMC) that learns the entire distribution of each of the parameters, rather than only point estimates.

14 It is important to use a set-aside validation set for testing model performance, rather than a subset of the documents used to train the model, to avoid overfitting.
Recognizing this trade-off, researchers often combine precision and recall scores to find a model that has the optimal balance of the two. One common way of combining the two is the F1 score, which is the harmonic mean of precision and recall. Formally, the F1 score is calculated as:

$$F1 = 2 \cdot \frac{\text{Precision} \cdot \text{Recall}}{\text{Precision} + \text{Recall}}$$

The F1 score weights precision and recall evenly, so a high F1 score indicates that both the false negativity and false positivity rates are low. It is worth noting that these evaluation measures (accuracy, precision, recall, and the F1 score) are computed using labeled data ("ground truth"), which, in practice, is available only for a limited subset of the records.
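For concreteness, the small helper below (a sketch, not library code) computes these statistics from the cells of the confusion matrix; the final line reproduces the all-negative classifier from the example above, which scores 0.99 on accuracy but 0 on precision, recall, and F1.

```python
def evaluation_stats(tp, tn, fp, fn):
    # Guard against division by zero when a class is never predicted/present.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) > 0 else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# The degenerate classifier that labels everything non-political.
print(evaluation_stats(tp=0, tn=99, fp=0, fn=1))
```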
Active vs. Passive Learning
Finally, if the researcher in our running example decides to use a supervised or semi-supervised approach for predicting whether documents in their corpus are political or not, the next step is to decide how many documents to label, and how to choose them. Since labeling is the bottleneck of any classification task of this kind, it is critical that she select an approach to labeling observations that minimizes the number of documents that must be labeled to produce an accurate classifier.
There are two popular strategies for retrieving cases to be labeled: 1) passively and 2) actively. The difference between a passive and an active approach amounts to whether the researcher randomly chooses which documents to label (i.e., chooses documents passively), or uses some selection scheme (i.e., chooses documents actively). Ideally, an active approach should require fewer labels than the number of randomly labeled data sufficient for a passive approach to achieve the same level of accuracy. Cohn et al. (1994) and Lewis and Gale (1994) established that a good active learning algorithm should be fast, and should reliably choose documents for labeling that provide more information to the model than a randomly chosen document, particularly in situations when the amount of labeled data is scarce. 15 One of the most studied active learning approaches is called uncertainty sampling (Lewis and Gale, 1994; Yang et al., 2015), a process where documents are chosen for labeling based on how uncertain the model is about the correct classification category for each document in the corpus. 16 As noted above, an active learning process using uncertainty sampling alternates between estimating the probability that each document belongs to a particular classification outcome, sampling a subset of the documents that the model is most uncertain about for labeling, 17 and then estimating the probabilities again using the information from the newly labeled documents.

Figure 1: Documents are classified as Political (P) or Non-political (N). A passive learning algorithm will request the labels of • and * with equal probability (Panel C). In contrast, under an active learning approach, • will be prioritized for labeling, as it is located in the region where the classifier is most uncertain (shaded region).

In our running example, a researcher is interested in classifying documents as political (P) or non-political (N), and needs to decide how to prioritize her labeling efforts. As shown in Figure 1 (Panel A), imagine there are two new data points to be labeled (denoted by "•" and "*"). A passive learning algorithm would give equal labeling priority to both (Panel B). However, an active approach would give priority to "•", as the classifier is most uncertain about the label of "•" compared to "*" (which is surrounded by many non-political documents).
A critical question for a researcher using an iterative algorithm is when to stop labeling. Many active learning algorithms resort to heuristics such as a fixed-budget approach, which stops when the number of newly labeled data points reaches a predetermined size. The problem with such an approach is that it may lead to under- or over-sampling. 18 One popular strategy is to randomly label a subset of documents at the beginning of the process, which is then used for assessing the performance of the classifier on data that the model has not seen. 19 With this approach, the process stops when the difference in measures of out-of-sample accuracy between two consecutive iterations does not surpass a threshold pre-established by the researcher (e.g., the F1 score does not improve by more than 0.01 units from iteration to iteration) (Altschuler and Bloodgood, 2019). If labeled data do not exist or cannot be set aside for testing due to their scarcity, a stopping rule can be used in which the algorithm stops once the in-sample predictions generated by the model (i.e., using the documents that have been labeled by the researcher during the active learning process) do not change from one iteration to the next. This is often referred to as a stability-based method (Ishibashi and Hino, 2020).
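As an illustration, a minimal sketch of the out-of-sample stopping check described above might look as follows; the 0.01 bound is the researcher-chosen threshold from the example.

```python
def should_stop(f1_history, bound=0.01):
    # f1_history holds the validation F1 score recorded at each iteration;
    # stop once the improvement between consecutive iterations is below bound.
    if len(f1_history) < 2:
        return False
    return abs(f1_history[-1] - f1_history[-2]) < bound
```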
With all these concepts in mind, in the next section we describe our proposed approach, with a special focus on the flexibility it affords a researcher both to balance the trade-offs of working with labeled and unlabeled data and to use existing domain expertise to improve classification via keyword upweighting.
The Method
In this section, we present our modeling strategy and describe our active learning algorithm. For the probabilistic model (a mixture model for discrete data) at the heart of the algorithm, we build on the work of Nigam et al. (2000), who show that probabilistic classifiers can be augmented by combining the information coming from labeled and unlabeled data. In other words, our model makes the latent classes for the unlabeled data interpretable by connecting them to the hand-coded classes from the labeled data. It also takes advantage of the fact that the unlabeled data provides more information about the features used to predict the classes for each document. As we will discuss below, we insert our model into an active learning algorithm and use the Expectation-Maximization (EM) algorithm (Dempster et al., 1977) to maximize the observed-data log-likelihood function and estimate the model parameters.
Model
Consider the task of classifying N documents into one of two classes (e.g., political vs. non-political). Let D be an N × V document-feature matrix, where V is the number of features. We use Z, a vector of length N, where each entry represents the latent class assigned to each document. If document i is assigned to the kth class, we have Z_i = k, where k ∈ {0, 1}; e.g., in our running example, k = 1 represents the class of documents about politics, and k = 0 those that are non-political. Because we use a semi-supervised approach, some documents may already be hand-labeled. This means that the value of Z_i is known for labeled documents and unknown for unlabeled documents. To facilitate exposition, we assume that the classification goal is binary; however, our approach can be extended to accommodate 1) a multiclass classification setting, where k > 2 and each document is classified into one of the k classes, e.g., classifying news articles into three classes: politics, business, and sports; and 2) modeling more than two classes while keeping the final classification binary, in other words, a hierarchy that maps multiple sub-classes into one class, e.g., collapsing the classification of documents about business and sports into a larger class (non-politics) and letting the remaining documents be about politics (the main category of interest). (For more details, see SI ??, ??, and ??.)
The following sets of equations summarize the model:

Labeled data:
$$Z_i = k \;\;\text{(hand-coded)}, \quad k \in \{0, 1\}$$
$$\eta_{\cdot k} \overset{\text{i.i.d.}}{\sim} \text{Dirichlet}(\beta_k)$$
$$D_{i\cdot} \mid Z_i = k \overset{\text{i.i.d.}}{\sim} \text{Multinomial}(n_i, \eta_{\cdot k})$$

Unlabeled data (contribution to the log-likelihood weighted by $\lambda$):
$$\pi \sim \text{Beta}(\alpha_0, \alpha_1)$$
$$Z_i \overset{\text{i.i.d.}}{\sim} \text{Bernoulli}(\pi), \quad k \in \{0, 1\}$$
$$\eta_{\cdot k} \overset{\text{i.i.d.}}{\sim} \text{Dirichlet}(\beta_k)$$
$$D_{i\cdot} \mid Z_i = k \overset{\text{i.i.d.}}{\sim} \text{Multinomial}(n_i, \eta_{\cdot k})$$
If document i is unlabeled, we first draw π = p(Z_i = 1), the overall probability that any given document belongs to the first class (e.g., political documents), from a Beta distribution with hyperparameters α_0 and α_1. Similarly, for the other class (e.g., non-political documents), we have 1 − π = p(Z_i = 0). Given π, for each document indexed by i, we draw the latent cluster assignment indicator Z_i from a Bernoulli distribution. Then, we draw the features of document i from a multinomial distribution governed by the vector η_{·k}, where η_{vk} = p(D_{iv} | Z_i = k), whose prior is the Dirichlet distribution. If document i is labeled, the main difference from the unlabeled case is that Z_i has been hand-coded, so we do not draw it from a Bernoulli distribution; the rest of the model's structure remains the same.
It is worth emphasizing that one of the most notorious problems with implementing supervised and semi-supervised approaches is the scarcity of labeled data, especially when compared to the abundance of unlabeled data. Because of this imbalance, for any classifier to extract the signal coming from the labeled data rather than being informed by the unlabeled data alone, it is key to devise ways to increase the relative importance of the labeled data. Otherwise, the unlabeled data will mute the signal coming from the labeled data. Following Nigam et al. (2000), we down-weight the information from unlabeled documents by λ ∈ [0, 1]. Note that when λ is equal to 1, the model treats each document equally, regardless of whether the document was labeled deterministically by a human or probabilistically by the algorithm. As λ moves from 1 towards 0, the model increasingly down-weights the information that the probabilistically labeled documents contribute to the estimation of η and π, such that when λ is 0, the model ignores all information from the probabilistically labeled documents and therefore becomes a supervised algorithm (see SI ??). Finally, because the observed-data log-likelihood of our model is difficult to maximize, we use the EM algorithm to estimate the parameters. 20
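To fix ideas, here is a compact sketch of the EM updates implied by the model, with the unlabeled documents down-weighted by λ. This is an illustration of the estimator with MAP-style updates and add-one smoothing, not the activeText source code; the shapes of D and z and the default prior values are assumptions of the sketch.

```python
import numpy as np

def em(D, z, lam=0.001, alpha=(1.0, 1.0), beta=2.0, n_iter=100):
    # D: (n, V) array of word counts; z: length-n integer array with 0/1 for
    # labeled documents and -1 for unlabeled ones.
    n, V = D.shape
    labeled = z >= 0
    R = np.zeros((n, 2))                           # responsibilities p(Z_i = k)
    R[labeled] = np.eye(2)[z[labeled]]             # known labels stay fixed
    w = np.where(labeled, 1.0, lam)                # down-weight unlabeled docs
    pi, eta = 0.5, np.full((V, 2), 1.0 / V)        # initial parameter values
    for _ in range(n_iter):
        # E-step: posterior class probabilities for the unlabeled documents.
        log_joint = D @ np.log(eta) + np.log(np.array([1 - pi, pi]))
        log_joint -= log_joint.max(axis=1, keepdims=True)
        post = np.exp(log_joint)
        post /= post.sum(axis=1, keepdims=True)
        R[~labeled] = post[~labeled]
        # M-step: weighted MAP updates; beta = 2 gives add-one smoothing.
        wR = w[:, None] * R
        pi = (wR[:, 1].sum() + alpha[1] - 1) / (wR.sum() + alpha[0] + alpha[1] - 2)
        eta = D.T @ wR + beta - 1
        eta /= eta.sum(axis=0, keepdims=True)
    return pi, eta, R
```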
Active Learning
Our active learning algorithm (see Algorithm 1) can be split into the following steps: estimation of the probability that each unlabeled document belongs to the positive class, selection of the unlabeled documents whose predicted class is most uncertain, and labeling of the selected documents by human coders. The algorithm iterates until a stopping criterion is met (Section Active vs. Passive Learning). We also describe an optional keyword upweighting feature, in which a set of user-provided keywords gives the model prior information about the likelihood that a word is generated by a given class. These keywords can either be provided at the outset or identified during the active learning process.
Estimation
In the first iteration, the model is initialized with a small number of labeled documents. 21 The information from these documents is used to estimate the parameters of the model: the probability that a document belongs to class 1 (π), and the V × 2 matrix η of probabilities of generating each word given a class. From the second iteration on, we use information from both labeled and unlabeled documents to estimate the parameters using the EM algorithm, with the log-likelihood of unlabeled documents down-weighted by λ, and with the η and π values from the previous iteration as the initial values. Using the estimated parameters, we compute the posterior probability that each unlabeled document belongs to class 1.
Selection
Using the predicted probability that each unlabeled document belongs to class 1, the model uses Shannon entropy to determine which of the probabilistically labeled documents it is least certain about. In the binary classification case, this is equivalent to calculating, for each document, the absolute distance between the class 1 probability and 0.50. Using this criterion, the model ranks all probabilistically labeled documents in descending order of uncertainty. The n most uncertain documents are then selected for human labeling, where n is the number of documents to be labeled by humans at each iteration.

Algorithm 1: Active learning with the EM algorithm to classify text

    Result: Obtain predicted classes of all documents.
    Randomly select a small subset of documents, and ask humans to label them;
    [Active Keyword]: Ask humans to provide initial keywords;
    while stopping conditions are not yet met do
        (1) [Active Keyword]: Up-weight the importance of keywords associated with a class;
        (2) Predict labels for unlabeled documents using the EM algorithm;
        (3) Select the documents with the highest uncertainty among the unlabeled documents, and ask humans to label them;
        (4) [Active Keyword]: Select the words most strongly associated with each class, and ask humans to label them;
        (5) Update the sets of labeled and unlabeled documents for the next iteration;
    end
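A minimal sketch of this entropy-based selection step, assuming probs holds the class 1 posterior probabilities for all documents (e.g., from the E-step above) and unlabeled_idx is an integer array of unlabeled document indices:

```python
import numpy as np

def select_uncertain(probs, unlabeled_idx, n=20):
    # Binary Shannon entropy of each unlabeled document's posterior.
    p = np.clip(probs[unlabeled_idx], 1e-12, 1 - 1e-12)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return unlabeled_idx[np.argsort(-entropy)][:n]  # n most uncertain documents
```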
Labeling
A human coder reads each document selected by the algorithm and assigns the "correct" label. For example, the researcher may be asked to label each of the following sentences as political or non-political:
The 2020 Presidential Election had the highest turnout in US history. Qatar is ready to host the FIFA World Cup this coming November.
These newly-labeled documents are then added to the set of human-labeled documents, and the process is repeated from the estimation stage.
Stopping Rule
Our method is highly modular and supports a variety of stopping rules, including an internal stability criterion, where stoppage is based on small changes in the internal model parameters, and the use of a small held-out validation set to assess the marginal benefit of labeling additional documents in terms of measures of model evaluation such as accuracy or F1. With either rule, the researcher specifies a bound such that if the change in model parameters or out-of-sample performance is less than the pre-specified bound, the labeling process ends. We use the out-of-sample validation stopping rule with a bound of 0.01 for the F1 score in our reanalyses in Section Reanalysis with Fewer Human Annotations.
Active Keyword Upweighting
The researcher also has the option to use an active keyword upweighting scheme, in which a set of keywords provides additional information. This is done by incrementing elements of β (the prior of η) by γ, a scalar value chosen by the researcher. In other words, we impose a tight prior on the probability that a given keyword is associated with each class. 22 To build the set of keywords for each class, 1) activeText proposes a set of candidate words, 2) the researcher decides whether they are indeed keywords or not, 23 and 3) activeText updates the parameters based on the set of keywords.
To select a set of candidate keywords, activeText calculates, for each word, the ratio of the probabilities that it is generated by each class using the η parameter. Specifically, it computes $\eta_{vk}/\eta_{vk'}$ for $k \in \{0, 1\}$, with $k'$ the class opposite to $k$, and chooses the top m words whose $\eta_{vk}/\eta_{vk'}$ are highest as candidate keywords to be queried for class k. 24 Intuitively, words closely associated with the classification classes are proposed as candidate keywords. For example, words such as "vote," "election," and "president" are likely to be proposed as keywords for the political class in the classification of political vs. non-political documents.
After activeText proposes candidate keywords, the researcher decides whether they are indeed keywords or not. This is where the researcher can use her expertise to provide additional information. For example, she can designate the names of legislators and acronyms of bills as keywords for the political class. 25 Using the set of keywords for each class, activeText creates a V × 2 keyword matrix κ, where each element κ_{v,k} takes the value γ if word v is a keyword for class k and 0 otherwise. Before we estimate the parameters in each active iteration, we perform the matrix sum β ← κ + β to incorporate the information from the keywords. The keyword approach therefore effectively upweights our model with prior information about words that the researcher thinks are likely to be associated with one class rather than another.
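A minimal sketch of this update, where vocab (a word-to-row-index mapping), the keywords dictionary, and the value of γ are illustrative assumptions:

```python
import numpy as np

def upweight_keywords(beta, vocab, keywords, gamma=50.0):
    # Build the V x 2 matrix kappa and return beta <- beta + kappa.
    kappa = np.zeros_like(beta)                  # one column per class
    for k, words in keywords.items():            # e.g., {1: ["torture", "beat"]}
        for w in words:
            if w in vocab:
                kappa[vocab[w], k] = gamma
    return beta + kappa
```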
Validation Performance
This section compares the classification performance of activeText with that of other methods. First, we show comparisons between active vs. passive learning as well as semi-supervised vs. supervised learning. For semi-supervised learning, we use activeText with λ = 0.001. For supervised learning, we use active Support Vector Machines (SVM) from Miller et al. (2020) with margin sampling. Then, we compare the classification and time performance of activeText against an off-the-shelf version of BERT, a state-of-the-art text classification model. Furthermore, we show how keyword upweighting can improve classification accuracy. We compare classification performance on the following corpora: internal forum conversations of Wikipedia editors (class of interest: toxic comments), BBC News articles (political topic), United States Supreme Court decisions (criminal procedure), and human rights allegations (physical integrity rights allegations). 26 We use 80% of each dataset as training data and hold out the remaining 20% for evaluation. Documents to be labeled are sampled only from the training set, and documents in the test set are never used to train the classifier, even in our semi-supervised approach. The out-of-sample F1 score is calculated using the held-out testing data.

22 See Eshima et al. (2020) for a similar approach for topic models.

23 The researcher may also provide an initial set of keywords and then iteratively add new keywords.

24 Words are excluded from candidate keywords if they are already in the set of keywords or if they have already been rejected as non-keywords. Thus, no word is proposed twice as a candidate keyword.

25 See SI ?? for more discussion of what happens if the researcher mislabels keywords.
Comparison between activeText and Active SVM

Figure 2 shows the results from four model specifications, each representing one of the combinations of active or passive learning and semi-supervised or supervised learning. The first choice is between active learning (solid lines) and passive learning (dashed lines). With active sampling, we select the next set of documents to be labeled based on the entropy of the predicted class probabilities when we use our mixture model, and based on margin sampling when we use SVM as the underlying classification method. The second choice is between our semi-supervised learning (darker lines) and off-the-shelf supervised learning (lighter lines). For supervised learning, we replicate the results from Miller et al. (2020), which uses SVM as the classifier. Each panel represents model performance on one of four datasets, with the number in parentheses indicating the proportion of documents associated with the class of interest according to the ground-truth labels in each dataset. The y-axis indicates the average out-of-sample F1 score across 50 Monte Carlo iterations, and the x-axis shows the total number of documents labeled, with 20 documents labeled at each sampling step. 27 Among the four models, the combination of active learning with the mixture model (activeText in Figure 2) performs the best under most of the specifications. The gain from active learning tends to be higher when the proportion of documents in the class of interest is small. On the Wikipedia corpus, where the proportion of positive labels is 9%, active learning outperforms passive learning, particularly when the number of documents labeled is small. In SI ??, we further examine how class imbalance influences the benefit of active learning by varying the proportion of the positive class between 5% and 50%. 28 It shows that active learning consistently performs better than passive learning when the proportion of one class is 5%. One limitation is that activeText did not perform better than SVM on the human rights corpus when the number of documents labeled is small (less than 200 in Figure 2). We examine how the optional keyword labeling can assist in such a situation in Section Benefits of Keyword Upweighting.
Comparison between activeText and BERT
In Figure 3, we compare both classification performance and computational time for activeText, Active SVM, and BERT, a state-of-the-art text classification model. 29 We trained two sets of models for the F1 and time comparisons, respectively. The left-hand column of panels shows F1 (the y-axis) as a function of the number of documents labeled (the x-axis), as with the results shown in Figure 2. We trained the activeText and Active SVM models using 50 random initializations. We trained the BERT models using 10 random initializations with V100 GPUs on a cluster computing platform.
The F1 comparison in the left-hand column of Figure 3 shows that, for all four of our corpora, activeText performs favorably in comparison to our off-the-shelf implementation of the BERT language model. For the BBC, Supreme Court, and Wikipedia corpora (the first, third, and fourth rows of panels), we significantly outperform BERT when there are very few documents labeled. As the number of labeled documents increases, BERT, as expected, performs well and even exceeds the F1 score of activeText in the case of Wikipedia. As shown in the results for the Human Rights corpus (the second row of panels), BERT does outperform activeText at all levels of documents labeled.
The right-hand column of panels in Figure 3 shows computational time, rather than F1, as a function of documents labeled. For this analysis, our goal was to compare how long it would take a researcher without access to a cluster computing platform or a high-powered GPU to train these models. To this end, we re-trained the activeText, Active SVM, and BERT models on a base-model M1 MacBook Air with 8 GB of RAM and 7 GPU cores. While the Active SVM and activeText models were trained using a single CPU, we used the recent implementation of support for the GPU of M1 Macs in PyTorch 30 to parallelize the training of the BERT model using the M1 Mac's GPU cores. 31 We computed the time values cumulatively for the activeText and Active SVM models, since the model is expected to be fit repeatedly as part of the active learning process, whereas a model like BERT would typically be run only once; as such, we do not calculate its run-time cumulatively. For the Human Rights and Wikipedia corpora, which each have several hundred thousand entries, we used a random subsample of 50,000 documents. For the Supreme Court and BBC corpora, we used the full samples. Finally, we present the time results on a logarithmic scale to improve visual interpretation. The right-hand column of Figure 3 shows that the slight advantages of the BERT models come at a cost of several orders of magnitude more computation time. Using the Wikipedia corpus as an example, at 500 documents labeled, the baseline activeText would have run to convergence 25 times, and the sum total of that computation time would have amounted to just under 100 seconds. With BERT, however, training a model with 500 documents and labeling the remaining 45,500 on an average personal computer would take approximately 10,000 seconds (2.78 hours).
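For reference, the following is a minimal sketch of the kind of off-the-shelf DistilBERT fine-tuning we benchmarked, using the Hugging Face Transformers Trainer API; the tiny in-memory dataset is a placeholder, and we do not claim that this reproduces our exact training configuration.

```python
# A minimal sketch of off-the-shelf DistilBERT fine-tuning with the Hugging
# Face Trainer API; the two-document dataset below is a placeholder.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

data = Dataset.from_dict({"text": ["budget vote passes", "great goal last night"],
                          "label": [1, 0]})
tok = AutoTokenizer.from_pretrained("distilbert-base-uncased")
data = data.map(lambda b: tok(b["text"], truncation=True, padding="max_length"),
                batched=True)
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)
args = TrainingArguments(output_dir="bert-out",
                         num_train_epochs=3,  # three passes, as in our runs
                         per_device_train_batch_size=8)
Trainer(model=model, args=args, train_dataset=data).train()
```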
Benefits of Keyword Upweighting
In Figure 2, active learning did not improve the performance on the human rights corpus, and the F1 score was lower than on the other corpora in general. One reason for the early poor performance of activeText may be the length of the documents. Because each document in the human rights corpus consists of a single sentence, the average length is shorter than in the other corpora. 32 This means that the models can learn less information from labeled documents than in the other corpora with longer documents. In situations like this, providing keywords in addition to document labels can improve classification performance because it directly shifts the values of the word-class probability matrix, η, even when the provided keywords are not in the already labeled documents. Figure 4 compares the performance with and without keywords. The darker lines show the results with keywords and the lighter lines without. The columns specify the proportion of documents associated with the class of interest: 5%, 50%, and the population proportion (16%). As in the previous exercises, 20 documents are labeled at each sampling step, and 100 Monte Carlo simulations are performed to stabilize the randomness due to the initial set of documents to be labeled. We simulated the process of a user starting with no keywords for either class and then being queried with extreme words v whose ratio η_{vk}/η_{vk'} is highest for each class k (where k' denotes the other class), with up to 10 keywords for each class being chosen based on the estimated η at a given iteration of the active process; a short sketch of this proposal step appears below. To determine whether a candidate keyword should be added to the list of keywords, our simulated user checked whether the word under consideration was among the set of most extreme words in the distribution of the 'true' η parameter, which we previously estimated by fitting our mixture model with the complete set of labeled documents. 33 The results suggest that providing keywords improves the performance when the proportion of documents is markedly imbalanced across classes. The keyword scheme improved the performance when the number of labeled documents was smaller on the corpora with 5% or 16% (population) of labels associated with the class of interest. By contrast, it did not on the corpus where both classes were evenly balanced. These results highlight that our active keyword approach helps the most when the dataset suffers from serious class-imbalance problems. 34 One caveat is that we provided 'true' keywords, in the sense that we used the η estimated from a fully labeled dataset. In practice, researchers have to decide whether candidate keywords are indeed keywords using their substantive knowledge. In this exercise, we believe that the keywords supplied to our simulation are ones that researchers with substantive knowledge about physical integrity rights can confidently adjudicate. For example, keywords such as "torture," "beat," and "murder" match our substantive understanding of physical integrity rights violations. Nevertheless, humans can make mistakes, and some words may be difficult to judge. Thus, we examined the classification performance with varying degrees of error at the keyword labeling step. In SI ??, we show that the active keyword approach still improves the classification performance compared to the no-keyword approach, even in the presence of small amounts (less than 20%) of "honest" (random) measurement error in keyword labeling.

33 Specifically, the simulated user checked whether the word in question was in the top 10% of most extreme words for each class using the 'true' η parameter. If the candidate word was in the set of 'true' extreme words, it was added to the list of keywords and upweighted accordingly in the next active iteration.

34 SI ?? demonstrates how active keyword works by visualizing the word-class matrix, η, at each active iteration.
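A minimal sketch of the keyword-proposal step described above follows, assuming η is stored as a V × 2 NumPy array; both helper names are hypothetical.

```python
import numpy as np

def propose_keywords(eta: np.ndarray, vocab: list[str], k: int,
                     n_candidates: int = 10) -> list[str]:
    """Propose candidate keywords for class k: the words v whose ratio
    eta[v, k] / eta[v, k'] is highest, where k' is the other class."""
    other = 1 - k
    ratio = eta[:, k] / eta[:, other]
    top = np.argsort(-ratio)[:n_candidates]
    return [vocab[v] for v in top]

def oracle_accepts(word: str, true_eta: np.ndarray, vocab: list[str],
                   k: int, quantile: float = 0.90) -> bool:
    """Simulated user: accept the candidate if it is among the most extreme
    words for class k under the 'true' eta (fit on fully labeled data)."""
    other = 1 - k
    ratio = true_eta[:, k] / true_eta[:, other]
    threshold = np.quantile(ratio, quantile)
    return ratio[vocab.index(word)] >= threshold
```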
Reanalysis with Fewer Human Annotations
To further illustrate our proposed approach for text classification, in this section, we reanalyze the results in Gohdes (2020) and Park et al. (2020). We show that via activeText, we arrive at the same substantive conclusions advanced by these authors but using only a small fraction of the labeled data they originally used.
Internet Accessibility and State Violence (Gohdes, 2020)

In the article "Repression Technology: Internet Accessibility and State Violence," Gohdes (2020) argues that higher levels of Internet accessibility are associated with increases in targeted repression by the state. The rationale behind this hypothesis is that, through the rapid expansion of the Internet, governments have been able to improve their digital surveillance tools and more accurately target those in the opposition. Thus, even though digital censorship is commonly used to diminish the opposition's capabilities, Gohdes (2020) claims that digital surveillance remains a powerful tool, especially in areas where the regime is not fully in control.
To measure the extent to which killings result from government targeting operations, Gohdes (2020) collects 65,274 reports related to lethal violence in Syria. These reports contain detailed information about the person killed, the date, the location, and the cause of death. The period under study runs from June 2013 to April 2015. Among all the reports, 2,346 were hand-coded by Gohdes, and each hand-coded report falls under one of three classes: 1) government-targeted killing, 2) government-untargeted killing, and 3) non-government killing. Using a document-feature matrix (based on the text of the reports) and the labels of the hand-coded reports, Gohdes (2020) trained and tested a state-of-the-art supervised decision tree algorithm (extreme gradient boosting, xgboost). Using the parameters learned at the training stage, Gohdes (2020) predicts the labels for the remaining reports for which hand-coded labels are not available. For each of the 14 Syrian governorates (the second-largest administrative unit in Syria), Gohdes (2020) calculates the proportion of biweekly government-targeted killings. In other words, she collapses the predictions from the classification stage to the governorate-biweekly level.
We replicate the classification tasks in Gohdes (2020) using activeText. In terms of data preparation, we adhere to the very same decisions made by Gohdes (2020). To do so, we use the same 2,346 hand-labeled reports (1,028 referring to untargeted killings, 705 to targeted killings, and 613 to non-government killings), of which 80% were reserved for training and 20% for assessing classification performance. In addition, we use the same document-feature matrices. 35 As noted in Active Learning, because activeText selects (at random) a small number of documents to be hand-labeled to initialize the process, we conduct 100 Monte Carlo simulations and present the average performance across initializations. As in Validation Performance, we set λ = 0.001. The performance of activeText and xgboost is evaluated in terms of the out-of-sample F1 score. Following the discussion in Active vs. Passive Learning, we stopped the active labeling process at the 30th iteration, when the out-of-sample F1 score stopped increasing by more than 0.01 units (our pre-specified threshold). Table 2 presents the results. 36 Overall, we find that as the number of active learning steps increases, the classification performance of activeText is similar to that in Gohdes (2020). However, the number of hand-labeled documents required by activeText is significantly smaller (around one-third of the number used by Gohdes (2020)).

In social science research, text classification is often not the end goal but a means to quantify a concept that is difficult to measure and to make inferences about the relationship between this concept and other constructs of interest. In that sense, to empirically test her claims, Gohdes (2020) conducts regression analyses where the proportion of biweekly government targeted killings is the dependent variable and Internet accessibility is the main independent variable; both covariates are measured at the governorate-biweekly level. Gohdes (2020) finds a positive and statistically significant relationship between Internet access and the proportion of targeted killings by the Syrian government. Using the predictions from activeText, we construct the main dependent variable and replicate the main regression analyses in Gohdes (2020).
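Both replication exercises use the pre-specified stopping rule described above: halt labeling once the out-of-sample F1 score improves by less than 0.01 units between iterations. A minimal sketch, with a hypothetical helper name:

```python
def should_stop(f1_history: list[float], threshold: float = 0.01) -> bool:
    """Stop active labeling once the latest out-of-sample F1 improvement
    falls below the pre-specified threshold."""
    if len(f1_history) < 2:
        return False
    return (f1_history[-1] - f1_history[-2]) < threshold
```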
Tables in SI ?? report the estimated coefficients across the same model specifications as in Gohdes (2020). The point estimates and the standard errors are almost identical whether we use xgboost or activeText. Moreover, Figure 5 presents the expected proportion of targeted killings by region and Internet accessibility. Gohdes (2020) finds that in the Alawi region (known to be loyal to the regime), when Internet access is at its highest, the expected proportion of targeted killings is significantly smaller compared to other regions of Syria.

35 Gohdes (2020) removed stopwords, punctuation, and words that appear in at most two reports, resulting in 1,342 features and a document-feature matrix that is 99% sparse. The median number of words across documents is 13.

36 The values in the bottom row are based on Gohdes (2020), Table A9.
In the absence of the Internet, however, there is no discernible difference across regions (see Figure 5, right panel). Our reanalysis does not change the substantive conclusions of Gohdes (2020) (Figure 5, left panel); however, it comes at just a fraction of the labeling effort (620 instead of 1,876 labeled reports). As noted above, these gains come from our active sampling scheme, which selects the most informative documents to be labeled.
Human Rights are Increasingly Plural (Park et al., 2020)

The question that drives the work of Park et al. (2020) is as follows: how has the rapid growth of information communication technologies (ICTs) over the last four decades changed the composition of texts referring to human rights? Park et al. (2020) observe that the average sentiment with which human rights reports are written has not drastically changed over time. They therefore advance the argument that if one wants to understand the effect of changes in access to information on the composition of human rights reports, it is necessary to internalize the fact that human rights are plural (bundles of related concepts). In other words, the authors argue that access to new information has indeed changed the taxonomy of human rights over time, even when the tone has not.
To empirically test this proposition, Park et al. (2020) take a two-step approach. First, via an SVM text classifier with three classes (negative, neutral, and positive sentiment), the authors show that the average sentiment of human rights reports has indeed remained stable, even in periods where the amount of available information has grown. 37 Second, they use a network modeling approach to show that while the average sentiment of these reports has remained constant over time, the taxonomy has drastically changed. In this section, using activeText, we focus on replicating the text classification task of Park et al. (2020) (which is key to motivating their puzzle).
As in the replication of Gohdes (2020), we adhere to the same pre-processing decisions made by Park et al. (2020) when working with their corpus of Country Reports on Human Rights Practices from 1977 to 2016 by the US Department of State. In particular, we use the same 4,000 hand-labeled human rights reports (1,182 positive, 1,743 negative, and 1,075 neutral) and the same document-feature matrices (which contain 30,000 features, a combination of unigrams and bigrams). Again, we conduct 100 Monte Carlo simulations and present the average performance across initializations. We stopped the active labeling process at the 25th iteration of our algorithm, as the out-of-sample F1 score (from an 80/20 training/test split) does not increase by more than 0.01 units (see Figure ?? in SI ??). 38 Using the results from the classification task via activeText, we predict the sentiment scores of 2,473,874 documents. With those predictions, we explore the evolution of the average sentiment of human rights reports per average information density score. 39 Figure 6 shows that by labeling only 500 documents with activeText, instead of the 4,000 labeled documents used by Park et al. (2020) to fit their SVM classifier, we arrive at the same substantive conclusion: the average sentiment of human rights reports has remained stable and almost neutral over time. In Figure ?? of SI ??, we also show that this result is not an artifact of our stopping rule and that it is robust to the inclusion of additional labeled documents (e.g., labeling 1,000, 1,500, and 2,000 documents instead of just 500).

37 As explained in Appendix A1 of Park et al. (2020), negative sentiment refers to text about clear ineffectiveness in protecting, or violations of, human rights; positive sentiment refers to text about clear support (or no restrictions) of human rights; and neutral sentiment refers to text stating a simple fact about human rights.

38 The only point where we depart from Park et al. (2020) is that we use an 80/20 split for training/testing, while they use k-fold cross-validation. Conducting k-fold cross-validation for an active learning algorithm would require over-labeling, and it would be computationally more expensive (the process would need to be repeated k times). Because of this difference, we refrain from comparing our model performance metrics to theirs.

39 Information density is a proxy for ICTs based on a variety of indicators related to the expansion of communications and access to information; see Appendix B in Park et al. (2020).
Discussion
Tuning the value of λ

As noted above, we downweight the information from unlabeled documents because we typically have more unlabeled than labeled documents. Moreover, since the labeled documents have been classified by an expert, we want to rely more on the information they bring for prediction.
An important practical consideration is how to select the value of λ that maximizes performance. One possible approach would be to adopt popular model selection methods (e.g., cross-validation) to choose the appropriate λ value during the model initialization process. 40 However, cross-validation may not be practical when labeled data are scarce (or absent at the beginning of the process). Using our active learning approach, in particular, we have observed across a variety of applications that very small values (e.g., 0.001 or 0.01) work best on the corpora we used (see SI ??). However, more work is needed to clearly understand the optimality criteria for selecting λ. We leave this question for future research.

40 Indeed, it may be beneficial to tune the lambda value across active learning iterations.
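As a minimal illustration of the model-selection idea above, one could grid-search λ against a held-out labeled split; the helper below is a hypothetical sketch, not part of the package.

```python
def select_lambda(fit_fn, score_fn, lambdas=(0.001, 0.01, 0.1, 1.0)):
    """Pick the downweighting value with the best held-out F1.
    fit_fn(lam) fits the mixture model with downweight lam;
    score_fn(model) returns the model's validation F1 score."""
    scores = {lam: score_fn(fit_fn(lam)) for lam in lambdas}
    return max(scores, key=scores.get)
```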
Labeling Error
While our empirical applications assume that labelers are correct, human labelers do make mistakes. In SI ??, we examine how mislabeling keywords and documents affects classification performance. Our results show that, compared to the no-keyword approach, a small amount of random noise (classical measurement error) in keyword labeling does not hurt the classification performance. In contrast, random perturbations of true document labels do hurt the classification performance. A promising avenue for future research is to develop new active learning algorithms that assign labelers based on their labeling ability and/or are robust to more pervasive forms of labeling error (differential and non-differential measurement error). For instance, assigning the most competent labelers to the most uncertain or difficult documents and the least competent labelers to easier documents can optimize the labelers' workload. At the same time, we note that in practical settings users may be able to improve the quality of human labeling by other means, such as refining category concepts and better training coders.
Conclusion
Human labeling of documents is the most labor-intensive part of social science research that uses text data. For automated text classification to work, a machine classifier needs to be trained on the relationship between text features and class labels, and the labels in the training data are assigned manually. In this paper, we have described a new active learning algorithm that combines a mixture model and active learning to incorporate information from labeled and unlabeled documents and to better select which documents a human coder should label. Our validation study showed that the proposed algorithm performs at least as well as state-of-the-art methods such as BERT while reducing computational costs dramatically. We replicated two published political science studies and showed that our algorithm leads to the same conclusions as the original papers while requiring far fewer labeled documents. In sum, our algorithm enables researchers to save manual labeling effort without sacrificing quality.
Machine learning techniques are becoming increasingly popular in political science, but the barrier to entry remains too high for researchers without a technical background to make use of advances in the field. As a result, there is an opportunity to democratize access to these methods. Towards this end, we continue to work towards publishing the R package activeText on CRAN. We believe that our model will provide applied researchers with a tool they can use to efficiently categorize documents in corpora of varying sizes and topics.

Note that to facilitate exposition, in the main text, we use the political vs. nonpolitical labels to describe the problem of binary classification. Without loss of generality, in this supplemental information material, we use the positive vs. negative class dichotomy instead.
A Detailed explanations about the EM algorithm to estimate parameters
Let $D^{lp}$, $D^{ln}$, and $D^{u}$ be the document-feature matrices for documents with positive labels, documents with negative labels, and unlabeled documents, respectively. Also let $N^{lp}$, $N^{ln}$, and $N^{u}$ be the numbers of documents with positive labels, with negative labels, and without labels. Likewise, let $C^{lp}$ and $C^{ln}$ be the vectors of positive and negative labels. Then, the observed-data likelihood is:
$$
\begin{aligned}
p(\pi, \eta \mid D, C^{lp}, C^{ln}) &\propto p(\pi)\,p(\eta)\,p(D^{lp}, C^{lp} \mid \pi, \eta)\,p(D^{ln}, C^{ln} \mid \pi, \eta)\left\{p(D^{u} \mid \pi, \eta)\right\}^{\lambda} \\
&= p(\pi)\,p(\eta) \times \prod_{i=1}^{N^{lp}} p(D^{lp}_{i} \mid Z_i = 1, \eta)\,p(Z_i = 1 \mid \pi) \times \prod_{i=1}^{N^{ln}} p(D^{ln}_{i} \mid Z_i = 0, \eta)\,p(Z_i = 0 \mid \pi) \\
&\quad \times \left[\prod_{i=1}^{N^{u}} \left\{p(D^{u}_{i} \mid Z_i = 1, \eta)\,p(Z_i = 1 \mid \pi) + p(D^{u}_{i} \mid Z_i = 0, \eta)\,p(Z_i = 0 \mid \pi)\right\}\right]^{\lambda} \\
&\propto \underbrace{(1-\pi)^{\alpha_0 - 1}\prod_{v=1}^{V}\eta_{v0}^{\beta_{0v}-1} \times \pi^{\alpha_1 - 1}\prod_{v=1}^{V}\eta_{v1}^{\beta_{1v}-1}}_{\text{prior}} \times \underbrace{\prod_{i=1}^{N^{lp}}\left\{\left(\prod_{v=1}^{V}\eta_{v1}^{D_{iv}}\right) \times \pi\right\}}_{\text{positive labeled doc. likelihood}} \times \underbrace{\prod_{i=1}^{N^{ln}}\left\{\left(\prod_{v=1}^{V}\eta_{v0}^{D_{iv}}\right) \times (1-\pi)\right\}}_{\text{negative labeled doc. likelihood}} \\
&\quad \times \underbrace{\left[\prod_{i=1}^{N^{u}}\left\{\prod_{v=1}^{V}\eta_{v0}^{D_{iv}} \times (1-\pi) + \prod_{v=1}^{V}\eta_{v1}^{D_{iv}} \times \pi\right\}\right]^{\lambda}}_{\text{unlabeled doc. likelihood}}
\end{aligned} \tag{1}
$$
We weight the part of the observed likelihood that refers to the unlabeled documents by λ ∈ [0, 1]. This is done because we typically have many more unlabeled documents than labeled documents. By downweighting the information from the unlabeled documents (i.e., setting λ to be small), we rely more on the information from labeled documents than from unlabeled documents. We estimate the parameters π and η using the EM algorithm (?); our implementation is presented as pseudocode in Algorithm 1.

Algorithm 1: EM algorithm to classify text

Result: Maximize p(π^(t), η^(t) | D^l, Z^l, D^u, α, β)
if in the first iteration of active learning then
    initialize π and η by Naive Bayes: π^(0) ← NB(D^l, Z^l, α); η^(0) ← NB(D^l, Z^l, β)
else
    inherit π^(0) and η^(0) from the previous iteration of active learning
end
while p(π^(t), η^(t) | D^l, Z^l, D^u, α, β) has not converged do
    (1) E step: obtain the class probabilities for the unlabeled documents:
        p(Z^u | π^(t), η^(t), D^l, Z^l, D^u) ← Estep(D^u, π^(t), η^(t))
    (2) Combine the estimated classes for the unlabeled documents with the known classes for the labeled documents:
        p(Z | π^(t), η^(t), D^l, Z^l, D^u) ← combine(D^l, D^u, Z^l, p(Z^u | π^(t), η^(t), D^l, Z^l, D^u))
    (3) M step: maximize Q ≡ E[log p(π, η, Z^u | D^l, Z^l, D^u, α, β)] with respect to π and η:
        π^(t+1) ← argmax Q; η^(t+1) ← argmax Q
    (4) Check convergence: obtain the value of p(π^(t+1), η^(t+1) | D^l, Z^l, D^u, α, β)
end

Note that by taking the expectation of the complete-data log-likelihood (the Q function), we obtain

$$
\begin{aligned}
Q &\equiv \mathbb{E}_{Z \mid \pi^{(t)}, \eta^{(t)}, D, C}\left[\log p(\pi, \eta, Z \mid D, C)\right] \\
&= (\alpha_0 - 1)\log(1-\pi^{(t)}) + (\alpha_1 - 1)\log \pi^{(t)} + \sum_{v=1}^{V}\left\{(\beta_{0v}-1)\log \eta^{(t)}_{v0} + (\beta_{1v}-1)\log \eta^{(t)}_{v1}\right\} \\
&\quad + \sum_{i=1}^{N^{lp}}\left\{\sum_{v=1}^{V} D_{iv}\log \eta^{(t)}_{v1} + \log \pi^{(t)}\right\} + \sum_{i=1}^{N^{ln}}\left\{\sum_{v=1}^{V} D_{iv}\log \eta^{(t)}_{v0} + \log(1-\pi^{(t)})\right\} \\
&\quad + \lambda \sum_{i=1}^{N^{u}}\left[p_{i0}\left\{\sum_{v=1}^{V} D_{iv}\log \eta^{(t)}_{v0} + \log(1-\pi^{(t)})\right\} + p_{i1}\left\{\sum_{v=1}^{V} D_{iv}\log \eta^{(t)}_{v1} + \log \pi^{(t)}\right\}\right]
\end{aligned} \tag{2}
$$
where $p_{ik}$ is the posterior probability that document $i$ is assigned to the $k$-th cluster, $k \in \{0, 1\}$, given the data and the parameters at the $t$-th iteration. If a document has a positive label, $p_{i0} = 0$ and $p_{i1} = 1$.
If a document has no label,

$$ p_{i0} = 1 - p_{i1}, \qquad p_{i1} = \frac{\pi \prod_{v=1}^{V} \eta_{v1}^{D_{iv}}}{(1-\pi)\prod_{v=1}^{V} \eta_{v0}^{D_{iv}} + \pi \prod_{v=1}^{V} \eta_{v1}^{D_{iv}}} \tag{3} $$

Equation (3) also works as the prediction equation: the predicted class of document $i$ is the $k$ that maximizes this posterior probability.
In the M-step, we maximize the Q function and obtain the updating equations for $\pi$ and $\eta$. The updating equation for $\pi$ is

$$ \pi^{(t+1)} = \frac{\alpha_1 - 1 + N^{lp} + \lambda \sum_{i=1}^{N^{u}} p_{i1}}{\alpha_1 - 1 + N^{lp} + \lambda \sum_{i=1}^{N^{u}} p_{i1} + \alpha_0 - 1 + N^{ln} + \lambda \sum_{i=1}^{N^{u}} p_{i0}} \tag{4} $$
The updating equation for $\eta$ is

$$ \eta^{(t+1)}_{v0} \propto (\beta_{0v} - 1) + \sum_{i=1}^{N^{ln}} D_{iv} + \lambda \sum_{i=1}^{N^{u}} p_{i0} D_{iv}, \qquad \eta^{(t+1)}_{v1} \propto (\beta_{1v} - 1) + \sum_{i=1}^{N^{lp}} D_{iv} + \lambda \sum_{i=1}^{N^{u}} p_{i1} D_{iv}, \qquad v = 1, \ldots, V \tag{5} $$
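For concreteness, the following is a compact NumPy sketch of the E and M steps in equations (3)-(5), assuming dense document-term count matrices and scalar symmetric priors; unlike Algorithm 1, which initializes π and η by Naive Bayes on the labeled documents, this sketch uses a flat initialization for brevity.

```python
import numpy as np
from scipy.special import logsumexp

def em_binary(D_lp, D_ln, D_u, lam=0.001, alpha=(2.0, 2.0), beta0=1.01,
              n_iter=100):
    """Sketch of the EM updates in SI A for the two-class model.
    D_lp, D_ln, D_u: dense document-term count matrices for positive-labeled,
    negative-labeled, and unlabeled documents. Returns (pi, eta), eta: (V, 2)."""
    V = D_u.shape[1]
    pi, eta = 0.5, np.full((V, 2), 1.0 / V)  # flat init (paper uses Naive Bayes)
    for _ in range(n_iter):
        # E step (eq. 3): posterior class probabilities for unlabeled docs,
        # computed in log space for numerical stability.
        log_lik = np.column_stack([
            D_u @ np.log(eta[:, 0]) + np.log(1 - pi),
            D_u @ np.log(eta[:, 1]) + np.log(pi),
        ])
        p = np.exp(log_lik - logsumexp(log_lik, axis=1, keepdims=True))
        # M step (eq. 4): update pi, downweighting unlabeled docs by lam.
        n1 = alpha[1] - 1 + D_lp.shape[0] + lam * p[:, 1].sum()
        n0 = alpha[0] - 1 + D_ln.shape[0] + lam * p[:, 0].sum()
        pi = n1 / (n1 + n0)
        # M step (eq. 5): update eta and normalize each class column over words.
        eta[:, 0] = beta0 - 1 + D_ln.sum(axis=0) + lam * (D_u.T @ p[:, 0])
        eta[:, 1] = beta0 - 1 + D_lp.sum(axis=0) + lam * (D_u.T @ p[:, 1])
        eta /= eta.sum(axis=0, keepdims=True)
    return pi, eta
```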
B EM algorithm for binary classification with multiple clusters

B.1 Summary
The model outlined above assumes that there are two latent clusters, one linked to the positive class and the other to the negative class. However, this assumption can be relaxed to link multiple clusters to the negative class. In the world of mixture models, the simplest setup is to let K = 2, since the classification goal is binary, and to link each latent cluster to one of the final classification categories. A more general setup is to use K > 2 even when the goal is binary classification. If K > 2 but our focus is to uncover the identity of one cluster, we can choose one of the latent clusters to be linked to the "positive" class and let all other latent clusters be linked to the "negative" class (see, e.g., ? for a similar idea in the realm of record linkage). In other words, we collapse the K − 1 latent clusters into one class for the classification purpose. Using K > 2 makes sense if the "negative" class consists of multiple sub-categories. For instance, suppose researchers are interested in classifying news articles as political news or not. Then it is reasonable to assume that the non-political news category consists of multiple sub-categories, such as technology, entertainment, and sports news.
B.2 Model
This section presents the model and inference algorithm when we use more than two latent clusters in estimation but the final classification task is binary. In other words, we impose a hierarchy in which many latent clusters are collapsed into the negative class, while the positive class consists of just one cluster. The model is as follows:
$$
\begin{aligned}
\pi &\sim \text{Dirichlet}(\alpha) \\
Z_i &\overset{\text{i.i.d.}}{\sim} \text{Categorical}(\pi) \\
\eta_{\cdot k} &\overset{\text{i.i.d.}}{\sim} \text{Dirichlet}(\beta_k), \quad k \in \{1, \ldots, K\} \\
D_{i\cdot} \mid Z_i = k &\sim \text{Multinomial}(n_i, \eta_{\cdot k})
\end{aligned} \tag{6}
$$
Note that π is now a probability vector of length K, and it is drawn from a Dirichlet distribution.
Let $k^*$ be the index of the cluster linked to the positive class. The observed likelihood is the following:

$$
\begin{aligned}
p(\pi, \eta \mid D, C^{lp}, C^{ln}) &\propto p(\pi)\,p(\eta)\,p(D^{lp}, C^{lp} \mid \pi, \eta)\,p(D^{ln}, C^{ln} \mid \pi, \eta)\left\{p(D^{u} \mid \pi, \eta)\right\}^{\lambda} \\
&= p(\pi)\,p(\eta) \times \prod_{i=1}^{N^{lp}} p(D^{lp}_i \mid Z_i = k^*, \eta)\,p(Z_i = k^* \mid \pi) \times \prod_{i=1}^{N^{ln}} \sum_{k \neq k^*} p(D^{ln}_i \mid Z_i = k, \eta)\,p(Z_i = k \mid \pi) \\
&\quad \times \left[\prod_{i=1}^{N^{u}} \sum_{k=1}^{K} p(D^{u}_i \mid Z_i = k, \eta)\,p(Z_i = k \mid \pi)\right]^{\lambda} \\
&\propto \underbrace{\prod_{k=1}^{K}\pi_k^{\alpha_k - 1}\prod_{v=1}^{V}\eta_{vk}^{\beta_{kv}-1}}_{\text{prior}} \times \underbrace{\prod_{i=1}^{N^{lp}}\left\{\left(\prod_{v=1}^{V}\eta_{vk^*}^{D_{iv}}\right) \times \pi_{k^*}\right\}}_{\text{positive labeled doc. likelihood}} \times \underbrace{\prod_{i=1}^{N^{ln}}\sum_{k \neq k^*}\left\{\left(\prod_{v=1}^{V}\eta_{vk}^{D_{iv}}\right) \times \pi_k\right\}}_{\text{negative labeled doc. likelihood}} \\
&\quad \times \underbrace{\left[\prod_{i=1}^{N^{u}}\sum_{k=1}^{K}\left\{\left(\prod_{v=1}^{V}\eta_{vk}^{D_{iv}}\right) \times \pi_k\right\}\right]^{\lambda}}_{\text{unlabeled doc. likelihood}}
\end{aligned} \tag{7}
$$
The Q function (the expectation of the complete log-likelihood) is

$$
\begin{aligned}
Q &\equiv \mathbb{E}_{Z\mid\pi^{(t)},\eta^{(t)},D,C}\left[\log p(\pi,\eta,Z\mid D,C)\right] \\
&= \sum_{k=1}^{K}\left\{(\alpha_k-1)\log\pi^{(t)}_k + \sum_{v=1}^{V}(\beta_{kv}-1)\log\eta^{(t)}_{vk}\right\} + \sum_{i=1}^{N^{lp}}\left\{\sum_{v=1}^{V}D_{iv}\log\eta^{(t)}_{vk^*} + \log\pi^{(t)}_{k^*}\right\} \\
&\quad + \sum_{i=1}^{N^{ln}}\sum_{k\neq k^*}p_{ik}\left\{\sum_{v=1}^{V}D_{iv}\log\eta^{(t)}_{vk} + \log\pi^{(t)}_k\right\} + \lambda\sum_{i=1}^{N^{u}}\sum_{k=1}^{K}p_{ik}\left\{\sum_{v=1}^{V}D_{iv}\log\eta^{(t)}_{vk} + \log\pi^{(t)}_k\right\}
\end{aligned} \tag{8}
$$
The posterior probability of $Z_i = k$, $p_{ik}$, is

$$ p_{ik} = \frac{\pi_k \prod_{v=1}^{V}\eta_{vk}^{D_{iv}}}{\sum_{k'=1}^{K}\pi_{k'}\prod_{v=1}^{V}\eta_{vk'}^{D_{iv}}} \tag{9} $$

The M-step estimators are as follows. The updating equation for $\pi$ is

$$ \pi_k \propto \begin{cases} \alpha_k - 1 + \sum_{i=1}^{N^{ln}} p_{ik} + \lambda\sum_{i=1}^{N^{u}} p_{ik} & \text{if } k \neq k^* \\ \alpha_k - 1 + N^{lp} + \lambda\sum_{i=1}^{N^{u}} p_{ik^*} & \text{if } k = k^* \end{cases} \tag{10} $$
The updating equation for $\eta$ is

$$ \eta_{vk} \propto \begin{cases} (\beta_{kv} - 1) + \sum_{i=1}^{N^{ln}} p_{ik} D_{iv} + \lambda\sum_{i=1}^{N^{u}} p_{ik} D_{iv} & \text{if } k \neq k^* \\ (\beta_{kv} - 1) + \sum_{i=1}^{N^{lp}} D_{iv} + \lambda\sum_{i=1}^{N^{u}} p_{ik^*} D_{iv} & \text{if } k = k^* \end{cases} \tag{11} $$
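Prediction under this hierarchy reduces to computing the posterior in equation (9) and reading off the probability of the single cluster $k^*$ linked to the positive class; a minimal sketch, assuming dense NumPy inputs:

```python
import numpy as np

def predict_positive_prob(D, eta, pi, k_star=0):
    """Posterior probability (eq. 9) that each document belongs to the
    cluster k* linked to the positive class; the remaining K-1 clusters
    are collapsed into the negative class."""
    log_post = D @ np.log(eta) + np.log(pi)          # (N, K), unnormalized
    log_post -= log_post.max(axis=1, keepdims=True)  # stabilize exponentiation
    post = np.exp(log_post)
    post /= post.sum(axis=1, keepdims=True)
    return post[:, k_star]
```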
Note that we downweight the information from the unlabeled documents by λ to utilize more reliable information from labeled documents.

B.3 Results

Figure B.1 shows the results of a model with just two latent clusters vs. a model with five latent clusters but only two final classes (positive vs. negative). Overall, the model with 5 clusters performs better than or as well as the model with 2 clusters. The gain from using 5 clusters is highest when the proportion of positive labels is small and when the size of the labeled data is small. Figure B.2 shows the results when the multiple-cluster and keyword-upweighting approaches are combined. Using 5 clusters leads to as good or slightly better performance than using 2 clusters. The performance improvement is largest with the BBC corpus, which consists of 5 news topic categories. Likewise, our mixture models with keywords lead to as good or better performance than the models without keywords. The improvement is largest with the human rights corpus, where the number of words per document is the smallest.
C Multiclass Classification

C.1 Model
This section presents a model and inference algorithm for multiclass classification. Let K be the number of clusters, which equals the number of classes to be classified, with K ≥ 2. Unlike in SI B, we do not impose any hierarchy: the model is a true multiclass mixture model whose end goal is to classify documents into K ≥ 2 classes. In other words, the model presented below is a generalization of the model presented in the main text.
$$
\begin{aligned}
\pi &\sim \text{Dirichlet}(\alpha) \\
Z_i &\overset{\text{i.i.d.}}{\sim} \text{Categorical}(\pi) \\
\eta_{\cdot k} &\overset{\text{i.i.d.}}{\sim} \text{Dirichlet}(\beta_k), \quad k \in \{1, \ldots, K\} \\
D_{i\cdot} \mid Z_i = k &\sim \text{Multinomial}(n_i, \eta_{\cdot k})
\end{aligned} \tag{12}
$$
Note that π is now a probability vector of length K, and it is drawn from a Dirichlet distribution.
The observed likelihood is the following.
$$
\begin{aligned}
p(\pi, \eta \mid D, C^{l}) &\propto p(\pi)\,p(\eta)\,p(D, C \mid \pi, \eta)\left\{p(D^{u} \mid \pi, \eta)\right\}^{\lambda} \\
&= p(\pi)\,p(\eta) \times \prod_{k=1}^{K}\prod_{i=1}^{N_k} p(D^{l}_i \mid Z_i = k, \eta)\,p(Z_i = k \mid \pi) \times \left[\prod_{i=1}^{N^{u}}\sum_{k=1}^{K} p(D^{u}_i \mid Z_i = k, \eta)\,p(Z_i = k \mid \pi)\right]^{\lambda} \\
&\propto \underbrace{\prod_{k=1}^{K}\pi_k^{\alpha_k-1}\prod_{v=1}^{V}\eta_{vk}^{\beta_{kv}-1}}_{\text{prior}} \times \underbrace{\prod_{k=1}^{K}\prod_{i=1}^{N_k}\left\{\left(\prod_{v=1}^{V}\eta_{vk}^{D_{iv}}\right) \times \pi_k\right\}}_{\text{labeled doc. likelihood}} \times \underbrace{\left[\prod_{i=1}^{N^{u}}\sum_{k=1}^{K}\left\{\left(\prod_{v=1}^{V}\eta_{vk}^{D_{iv}}\right) \times \pi_k\right\}\right]^{\lambda}}_{\text{unlabeled doc. likelihood}}
\end{aligned} \tag{13}
$$
The Q function (the expectation of the complete log-likelihood) is
$$
\begin{aligned}
Q &\equiv \mathbb{E}_{Z\mid\pi^{(t)},\eta^{(t)},D,C}\left[\log p(\pi,\eta,Z \mid D,C)\right] \\
&= \sum_{k=1}^{K}\left\{(\alpha_k - 1)\log\pi^{(t)}_k + \sum_{v=1}^{V}(\beta_{kv}-1)\log\eta^{(t)}_{vk}\right\} + \sum_{k=1}^{K}\sum_{i=1}^{N_k}\left\{\sum_{v=1}^{V}D_{iv}\log\eta^{(t)}_{vk} + \log\pi^{(t)}_k\right\} \\
&\quad + \lambda\sum_{i=1}^{N^{u}}\sum_{k=1}^{K}p_{ik}\left\{\sum_{v=1}^{V}D_{iv}\log\eta^{(t)}_{vk} + \log\pi^{(t)}_k\right\}
\end{aligned} \tag{14}
$$
The posterior probability of $Z_i = k$, $p_{ik}$, is

$$ p_{ik} = \frac{\pi_k\prod_{v=1}^{V}\eta_{vk}^{D_{iv}}}{\sum_{k'=1}^{K}\pi_{k'}\prod_{v=1}^{V}\eta_{vk'}^{D_{iv}}} \tag{15} $$
The M-step estimators are as follows. The updating equation for $\pi$ is

$$ \pi_k \propto \alpha_k - 1 + N_k + \lambda\sum_{i=1}^{N^{u}}p_{ik} \tag{16} $$
The updating equation for η is the following.
$$ \eta_{vk} \propto (\beta_{kv} - 1) + \sum_{i=1}^{N_k} D_{iv} + \lambda\sum_{i=1}^{N^{u}} p_{ik} D_{iv} \tag{17} $$
Note that we downweight the information from the unlabeled documents by λ to utilize more reliable information from labeled documents.

C.2 Results

Figure C.1 shows the multiclass classification results. The darker lines show the results with activeText and the lighter lines the results with SVM. The solid lines use active sampling to decide the next set of documents to be labeled, and the dashed lines use random (passive) sampling. The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps. The left column shows the results on the BBC corpus, where the target classes are "Politics," "Entertainment," "Business," "Sports," and "Technology"; the "Politics" class makes up 5% of the total dataset, and the remaining 95% is evenly split across the other classes. The right column shows the results on the Supreme Court corpus, where the target classes are "Criminal Procedure" (32.4% of the corpus), "Civil Rights" (21.4%), "Economic Activity" (22.2%), "Judicial Power" (15.4%), and "First Amendment" (8.6%). In our model, we set the number of latent clusters equal to the number of classification categories and linked each latent cluster to one classification category. activeText performs the best across the four specifications on both corpora.

Figure C.2 compares computational time on the same two corpora. activeText is much faster than SVM in multiclass classification. This is because multiclass classification with SVM requires fitting the model at least as many times as the number of target classes, whereas activeText needs to be fit only once regardless of the number of target classes.
D Model Specifications and Description of the Datasets in the Validation Performance
We explain our decisions regarding pre-processing steps, model evaluation, and model specifications, followed by a detailed discussion of the results for each dataset.
D.1 Pre-processing
We employ the same pre-processing steps for each of the four datasets using the R package Quanteda. 1 For each dataset, we construct a document-feature matrix (DFM), where each row is a document and each column is a feature. Each feature is a stemmed unigram. We remove stopwords, features that occur extremely infrequently, and all features under 4 characters.
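For readers working in Python, the following is a rough analogue of this Quanteda pipeline using scikit-learn and NLTK; it is illustrative only and not the code we used.

```python
# A rough Python analogue of the Quanteda pre-processing: stemmed unigrams,
# stopwords removed, short features dropped; raise min_df to drop rare features.
from sklearn.feature_extraction.text import CountVectorizer, ENGLISH_STOP_WORDS
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def tokenize(text: str) -> list[str]:
    tokens = [stemmer.stem(t) for t in text.lower().split()]
    return [t for t in tokens if len(t) >= 4 and t not in ENGLISH_STOP_WORDS]

vectorizer = CountVectorizer(tokenizer=tokenize, lowercase=False, min_df=1)
docs = ["The parliament debated the budget for hours",
        "Champions league football results and highlights"]
dfm = vectorizer.fit_transform(docs)  # documents x features count matrix
```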
To generate a dataset with a proportion p of the positive class (e.g., 5% or 50%), we randomly sample documents from the original dataset so that the positive class makes up proportion p.
D.2 Datasets
BBC News

The BBC News dataset is a collection of 2,225 documents from 2004 to 2005 available at the BBC news website (?). This dataset is divided equally into five topics: business, entertainment, politics, sport, and technology. The classification exercise is to correctly predict whether or not an article belongs to the 'politics' topic.
Wikipedia Toxic Comments
The Wikipedia Toxic Comments dataset is made up of conversations between Wikipedia editors in Wikipedia's internal forums. The dataset was made openly available as part of a Kaggle competition, 2 and was used as a principal dataset of investigation by ?. The basic classification task is to label a given comment as toxic or not, where toxicity is defined as including harassment and/or abuse of other users. 3 The complete dataset comprises roughly 560,000 documents, roughly 10 percent of which are labeled as toxic.
Supreme Court Cases
The Supreme Court Rulings dataset is a collection of the text of 2,000 US Supreme Court rulings between 1946 and 2012. We use the majority opinion of each case; the text was obtained through the Caselaw Access Project. 4 For the classification labels, we use the categories created by the Supreme Court Database. 5 The classification exercise here is to correctly identify rulings categorized as 'criminal procedure', which is the largest category in the corpus (26% of all rulings).
Human Rights Allegation

The Human Rights Allegation dataset contains more than 2 million sentences from human rights reports on 196 countries between 1996 and 2016, produced by Amnesty International, Human Rights Watch, and the US State Department (?). The classification goal is to identify sentences with physical integrity rights allegations (16% of all reports). Example violations of physical integrity rights include torture, extrajudicial killing, and arbitrary arrest and imprisonment.
E Additional Results on Classification Performance

To complement the results presented in Figure 1 in the main text, Table E.1 presents the results (across datasets) of fitting our model at the initial (iteration 0) and the last (iteration 30) active step. The table makes clear the improvements that activeText brings in terms of F1-score, precision, and recall. Furthermore, after labeling 600 documents (20 per iteration), uncertainty sampling outperforms random sampling across evaluation metrics, which empirically validates the promise of active learning for text classification. Similarly, and as noted in the main text, our results are not too sensitive to the selection of the weighting parameter λ, provided that its value remains small. Figure E.1 confirms this finding: after 30 active steps, the performance of activeText is better in terms of F1-score when λ = 0.001 than when λ = 0.01.

H Classification Performance with Mislabels

H.1 Mislabeled Keywords

Figure H.1 reports classification results with mislabels at the keyword labeling step. The rows correspond to different datasets and the columns to various values of γ, which controls the degree of keyword upweighting. The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps. At each sampling step, 20 documents are labeled. We use λ = 0.001 to downweight information from unlabeled documents. The lines correspond to different levels of mislabeling at the keyword labeling step. At each iteration, 10 candidate keywords are proposed, and a hypothetical oracle decides whether they are indeed keywords. 'True' keywords are defined in the same way as in Section ??. In other words, a candidate keyword v for the positive class is a 'true' keyword if the value of η_{vk}/η_{vk'} is above the 90% quantile, where k is the positive class and k' is the negative class, and this η is what we obtain by training the model with the full labels. The same goes for the negative class. When the probability of mislabeling keywords is p%, the oracle makes a mistake in the labeling with probability p. Specifically, if a candidate keyword v is a 'true' keyword, the oracle fails to label v as a keyword with probability p; likewise, if a candidate keyword v is not a 'true' keyword, the oracle labels v as a keyword with probability p.
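The mislabeling process can be summarized in a few lines; `noisy_oracle` is a hypothetical helper that flips the correct decision with probability p.

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_oracle(is_true_keyword: bool, p_error: float) -> bool:
    """Oracle that mislabels a candidate keyword with probability p_error:
    it rejects true keywords and accepts non-keywords at that rate."""
    if rng.random() < p_error:
        return not is_true_keyword
    return is_true_keyword
```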
H.2 Mislabeled Documents
In this section, we present results on the effect of 'honest' (random) mislabeling of documents. As Figure H.2 shows, as the proportion of mislabels increases, the classification performance of activeText decreases. In the figure, the rows correspond to different datasets, the y-axis indicates the out-of-sample F1 score, and the x-axis shows the number of sampling steps; 20 documents are labeled at each sampling step, and the colors correspond to different levels of mislabeling in the labeling of documents.
I Comparison of the predictions between activeText and xgboost predictions for the ? data

Figure I.1: Scatter plot of the dependent variable constructed by activeText vs. the original. The author performs a binomial logit regression where the dependent variable is the ratio of the number of targeted killings to the total number of government killings. We compare the dependent variable used in the original paper vs. the one we constructed using activeText. The 45-degree line (in red) corresponds to equality between the two measures. Most observations lie around the 45-degree line, while there are some values in the upper triangle. This suggests that activeText yields a dependent variable similar to the original one, though activeText may somewhat overestimate the proportion of targeted killings.
Figure 1: Passive vs Active Learning. For a classifier defined in two dimensions, Panel A illustrates the task: classify unlabeled documents (denoted by • and *) as Political (P)

Figure 2: Comparison of Classification Results across Random and Active Versions of activeText and SVM

Figure 3: Comparison of Classification and Time Results across activeText, Active SVM, and BERT

Figure 4: Classification Results of activeText with and without Keywords

Figure 5: Replication of Figure 3 in Gohdes (2020): Expected Proportion of Target Killings, Given Internet Accessibility and Whether a Region is Inhabited by the Alawi Minority. The results from activeText are presented in the left panel and those of Gohdes (2020) are on the right.

Figure 6: Replication of Figure 1 in Park et al. (2020): The Relationship Between Information Density and Average Sentiment Score.

28 See SI ?? for how we generate data with class imbalance.

29 For a technical overview of BERT, and the Transformers technology underpinning it, see Devlin et al. (2018) and Vaswani et al. (2017), respectively.

30 See https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/.

31 Specifically, we trained a DistilBERT model (see Sanh et al. (2019)) for three epochs (the number of passes of the entire training dataset BERT has completed) using the default configuration from the Transformers and PyTorch libraries for the Python programming language, and used the trained model to predict the labels for the remaining documents in each corpus.

32 With the population data, the average length of each document is 121 (BBC), 17 (Wikipedia), 1,620 (Supreme Court), and 9 (Human Rights) words.
Figure B.1: Classification Results with 2 and 5 Clusters. The darker lines show the results with 5 latent clusters and the lighter lines show 2 latent clusters. The columns correspond to various proportions of positive labels in the corpus. The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps. Using multiple clusters improves the classification performance when the number of latent clusters matches the data generating process.

Figure B.2: Classification Results with Multiple Clusters and Keywords. The rows correspond to different datasets and the columns correspond to various proportions of positively labeled documents in the corpus. The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps. The linetype shows whether keywords are supplied: the solid lines show the results with keywords and the dashed lines without keywords. The colors show the number of latent clusters in the mixture model: the darker lines show the results with 5 latent clusters and the lighter lines with 2 latent clusters.

Figure C.1: Multiclass Classification Results.

Figure C.2: Time Comparison of Multiclass Classification Results. The darker lines show the results with activeText and the lighter lines show the results with SVM. The solid lines use active sampling to decide the next set of documents to be labeled, and the dashed lines use random (passive) sampling. The y-axis indicates the average cumulative computational time and the x-axis shows the number of sampling steps.
Suppose the number of documents in the original dataset is N, with N_pos and N_neg the numbers of positive and negative documents, respectively. We compute M_pos = floor(Np) and M_neg = N − M_pos as the ideal numbers of positive and negative documents. While M_pos > N_pos or M_neg > N_neg, we decrement M_pos and M_neg, keeping the positive proportion at p. Once M_pos ≤ N_pos and M_neg ≤ N_neg, we sample M_pos positive documents and M_neg negative documents from the original dataset. Finally, we combine the sampled positive and negative documents to obtain the final dataset.
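A direct NumPy transcription of this subsampling procedure, with a hypothetical helper name (assuming 0 < p < 1):

```python
from math import floor

import numpy as np

def subsample_with_rate(pos_idx, neg_idx, p, rng):
    """Draw a subsample whose positive-class share is p, shrinking the
    target sizes while they exceed what the original data can supply."""
    N = len(pos_idx) + len(neg_idx)
    m_pos = floor(N * p)
    m_neg = N - m_pos
    while m_pos > len(pos_idx) or m_neg > len(neg_idx):
        m_pos -= 1                          # decrement, keeping the share at p
        m_neg = floor(m_pos * (1 - p) / p)
    pos = rng.choice(pos_idx, size=m_pos, replace=False)
    neg = rng.choice(neg_idx, size=m_neg, replace=False)
    return np.concatenate([pos, neg])
```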
Figure E.1: Classification Results with 2 Clusters and λ = 0.01 vs. λ = 0.001. The darker lines show the results with λ = 0.001 and the lighter lines show λ = 0.01. The columns correspond to various proportions of positive labels in the corpus. The y-axis indicates the out-of-sample F1 score and the x-axis shows the number of sampling steps. The smaller the value of λ, the better the performance of our model.

Figure F.1: Replication of F1 performance from Figures 2 and 3 with 0.05, 0.5, and population positive class rates.

G Visual Demonstration of Active Keyword

Figure G.1 illustrates how the word-class matrix η is updated with and without keywords across iterations. A subset of the keywords supplied is labeled and highlighted by black dots. The x-axis shows the log of η_{v1}/η_{v0}, where η_{v1} is the probability of observing word v in a document with a positive label and η_{v0} the corresponding probability for a document with a negative label. A high value on the x-axis means that a word is more strongly associated with positive labels. The y-axis is the log of word frequency; a word with high frequency has more influence in shifting the label probability. In the generative model of activeText, words that appear often and whose ratio of η_{vk*} to η_{vk} is high play a central role in label prediction. By shifting the η values of those keywords, we can accelerate the estimation of η and improve the classification performance.

Figure G.1: Update of the Word-class Matrix (η)

Figure H.1: Classification Results with Mislabels in Active Keywords

Figure H.2: Classification Results with Mislabels in Active Document Labeling

Figure J.1: Histogram of the Internet (3G) Variable by IS Control in the Original Data. The left histogram is the distribution of the Internet (3G) variable for the observations under IS control, and the right one for those not under IS control. The number of observations with IS control is only 51 out of a total of 640. In addition, among those with IS control, all observations except one take the same value of the Internet access variable. This suggests that the regression coefficient on the interaction of IS control and Internet access can be highly unstable.

K Effect of Labeling More Sentences for the ? Reanalysis

In this section, we present additional results mentioned in the main text about our reanalysis of ?.

Figure K.1: Using the Difference in Out-of-Sample F1 Score to Decide a Stopping Point.

Figure K.2: Replication of Figure 1 in ?: The Relationship Between Information Density and Average Sentiment Score Across Different Settings for the Total Number of Labeled Documents.
Table 1: Confusion Matrix: Comparison of the Predictions of a Classifier to Documents' True Labels

Table 2: Classification Performance: Comparison with Gohdes (2020)
Supplementary Information for Improving Probabilistic Models in Text Classification via Active Learning

Contents

A Detailed explanations about the EM algorithm to estimate parameters
B EM algorithm for binary classification with multiple clusters
  B.1 Summary
  B.2 Model
  B.3 Results
C Multiclass Classification
  C.1 Model
  C.2 Results
D Model Specifications and Description of the Datasets in the Validation Performance
  D.1 Pre-processing
  D.2 Datasets
E Additional Results on Classification Performance
F Main Results when Varying Positive Class Rate
G Visual Demonstration of Active Keyword
H Classification Performance with Mislabels
  H.1 Mislabeled Keywords
  H.2 Mislabeled Documents
I Comparison of the predictions between activeText and xgboost predictions for the ? data
J Regression Table in ?
K Effect of Labeling More Sentences for the ? Reanalysis
Table E.1: Classification Performance: Uncertainty vs Random Sampling with λ = 0.001

Dataset         Active Step   Uncertainty Sampling                Random Sampling
                              Precision / Recall / F1-score       Precision / Recall / F1-score
Wikipedia       0             0.71 / 0.13 / 0.22                  0.71 / 0.13 / 0.22
Wikipedia       30            0.71 / 0.54 / 0.61                  0.45 / 0.56 / 0.50
BBC             0             0.33 / 0.86 / 0.48                  0.33 / 0.86 / 0.48
BBC             30            0.92 / 0.96 / 0.94                  0.92 / 0.94 / 0.93
Supreme Court   0             0.46 / 0.98 / 0.63                  0.46 / 0.98 / 0.63
Supreme Court   30            0.85 / 0.91 / 0.88                  0.75 / 0.96 / 0.84
Human Rights    0             0.61 / 0.01 / 0.02                  0.61 / 0.01 / 0.02
Human Rights    30            0.53 / 0.42 / 0.47                  0.46 / 0.44 / 0.45
F Main Results when Varying Positive Class Rate

[Figure F.1 here. Panels: BBC (0.19), Human Rights (0.16), Supreme Court (0.26), Wikipedia (0.09); columns: positive class rates of 0.05, 0.50, and the population rate; x-axis: Documents Labeled (0-600); y-axis: Out-of-Sample F1 (Mean), 0.00-1.00; lines: activeText, Active SVM, Random SVM, Random Mixture, BERT.]
Table I.1 shows the confusion matrix between the predictions based on activeText and the predictions by xgboost used in the original paper. Most observations fall in the diagonal cells of the matrix, and the correlation between the two predictions is quite high (0.93). One difference is that activeText classifies more documents as targeted killings compared to the original predictions. Note that neither prediction claims the ground truth; both are the results of different classifiers.

Table I.1: Confusion matrix between activeText and xgboost predictions

                              Original
activeText          untargeted   targeted   non-government
untargeted               50327        411              135
targeted                  1630      10044               31
non-government             382         34             2280

[Figure I.1 here: scatter plot of Proportion of Target Killing (Original) vs. Proportion of Target Killing (activeText); both axes run from 0.0 to 1.0.]
J Regression Table in ?

Table J.1 is the original regression table reported in ?, while Table J.2 is a replication of the original table using activeText. In both tables, the coefficients on the Internet access variable are positive and statistically significant, which matches the author's substantive conclusion. One may wonder why the absolute values of the coefficients on IS control and its interaction with Internet access are larger in Table J.2. However, we believe this is because the number of observations under IS control is small (51) and there is almost no variation in the Internet access variable among the observations under IS control, as shown in Figure J.1.
Table J.1: Table 1 in Gohdes 2020: Original table. Models I-VII; coefficients are listed in order, with governorate-clustered standard errors in parentheses. Reference category: Contested control. ***p < 0.001; **p < 0.01; *p < 0.05. Rows with fewer than seven entries correspond to variables that enter only a subset of the specifications.

Intercept: −2.340*** (0.205); −2.500*** (0.267); −0.899* (0.403); −0.410 (0.521); −0.019 (0.357); −1.308 (1.057); −3.013** (1.103)
Internet access (3G): 0.224* (0.095); 0.231* (0.094); 0.200* (0.085); 0.205* (0.087); 0.265* (0.113); 0.313** (0.116); 0.909*** (0.124)
% Govt control: 0.016*** (0.004)
Internet (3G) * % Govt control: −0.014*** (0.001)
Govt control: 0.774* (0.332); 0.803** (0.272); 1.167*** (0.284); 1.180*** (0.288); 0.080 (0.344); 0.856** (0.313); 0.811*** (0.237)
IS control: 2.027*** (0.435); 1.644*** (0.462); 1.045* (0.421); −0.324 (0.209); 0.432 (0.414); 0.787 (0.418); −0.663** (0.221)
Kurd control: 0.386 (0.594); −0.243 (0.843); −0.506 (0.760); −1.331 (1.134); −0.402 (0.745); 0.033 (0.802); −0.616 (0.432)
Opp control: 1.160*** (0.298); 1.252*** (0.317); 0.727* (0.293); 0.759* (0.296); −0.700* (0.283); −0.281 (0.342); −0.176 (0.164)
Internet (3G) * Govt control: −0.163 (0.132); −0.182 (0.117); −0.327** (0.119); −0.324** (0.122); −0.104 (0.133); −0.358** (0.120)
Internet (3G) * IS control: −1.798*** (0.220); −1.525*** (0.281); −1.377*** (0.251); −1.391*** (0.264); −1.336*** (0.261)
Internet (3G) * Kurd control: −0.133 (0.444); 0.336 (0.649); 0.093 (0.569); 0.895 (0.936); −0.052 (0.553); −0.202 (0.527)
Internet (3G) * Opp. control: −0.605*** (0.159); −0.722*** (0.173); −0.511** (0.157); −0.533*** (0.158); 0.316* (0.151); 0.286 (0.186)
# Killings (log): −0.273*** (0.054); −0.271*** (0.055); −0.354*** (0.051); −0.412*** (0.072); −0.584*** (0.074)
Govt gains: 0.643 (0.385)
Govt losses: 0.632 (0.413)
Christian: 0.068 (0.111); 0.345** (0.116); 0.398*** (0.110)
Alawi: 1.479** (0.522); −1.167*** (0.177); −0.812*** (0.176)
Druze: −0.634*** (0.191); −0.302 (0.191); 0.135 (0.190)
Kurd: −0.659*** (0.194); −0.542* (0.237); −0.580** (0.212)
Internet (3G) * Alawi: −0.909*** (0.163)
Pop (log): 0.196 (0.149); 0.408** (0.150)
Unempl. (%): −0.016 (0.012); −0.002 (0.012)
AIC: 11956.847; 9993.704; 9665.749; 9495.591; 7671.979; 7873.915; 7327.796
BIC: 12001.524; 10239.427; 9915.941; 9744.552; 7944.509; 8150.913; 7595.858
Log Likelihood: −5968.424; −4941.852; −4776.875; −4691.796; −3774.990; −3874.958; −3603.898
Deviance: 9519.651; 7466.508; 7136.554; 7026.891; 5132.784; 5332.720; 4790.601
Num. obs.: 640; 640; 640; 626; 640; 640; 640
Table J.2: Table 1 in Gohdes 2020: Reanalysis with activeText. Models I-VII; coefficients are listed in order, with governorate-clustered standard errors in parentheses. Reference category: Contested control. ***p < 0.001; **p < 0.01; *p < 0.05. Rows with fewer than seven entries correspond to variables that enter only a subset of the specifications.

Intercept: −2.196*** (0.197); −2.428*** (0.242); −0.795* (0.390); −0.351 (0.490); −0.037 (0.348); −1.141 (1.229); −2.695* (1.227)
Internet access (3G): 0.277** (0.091); 0.282*** (0.081); 0.242** (0.075); 0.250** (0.077); 0.342*** (0.103); 0.369*** (0.107); 0.853*** (0.118)
% Govt control: 0.015*** (0.004)
Internet (3G) * % Govt control: −0.013*** (0.001)
Govt control: 0.625* (0.319); 0.672** (0.255); 1.048*** (0.269); 1.058*** (0.273); 0.151 (0.358); 0.843** (0.300); 0.559* (0.249)
IS control: 15.157*** (1.123); 15.688*** (1.148); 15.072*** (1.136); −0.275 (0.200); 14.551*** (1.132); 14.877*** (1.134); −0.600** (0.209)
Kurd control: 0.795 (0.516); 0.099 (0.729); −0.227 (0.671); −0.440 (1.119); −0.157 (0.677); 0.334 (0.744); −0.369 (0.405)
Opp control: 0.978*** (0.294); 1.134*** (0.304); 0.594* (0.284); 0.634* (0.289); −0.606* (0.270); −0.197 (0.322); −0.278 (0.155)
Internet (3G) * Govt control: −0.169 (0.126); −0.190 (0.103); −0.334** (0.108); −0.335** (0.111); −0.183 (0.131); −0.408*** (0.111)
Internet (3G) * IS control: −14.829*** (1.080); −15.506*** (1.096); −15.351*** (1.090); −15.392*** (1.091); −15.330*** (1.091)
Internet (3G) * Kurd control: −0.400 (0.324); 0.138 (0.514); −0.080 (0.463); 0.134 (0.940); −0.240 (0.473); −0.366 (0.460)
Internet (3G) * Opp. control: −0.542*** (0.159); −0.688*** (0.164); −0.468** (0.150); −0.497** (0.152); 0.181 (0.145); 0.149 (0.176)
# Killings (log): −0.278*** (0.053); −0.274*** (0.054); −0.356*** (0.051); −0.415*** (0.071); −0.567*** (0.073)
Govt gains: 0.512 (0.349)
Govt losses: 0.730* (0.334)
Christian: 0.092 (0.115); 0.352** (0.113); 0.369*** (0.105)
Alawi: 1.329* (0.528); −0.928*** (0.167); −0.585*** (0.168)
Druze: −0.628** (0.196); −0.310 (0.197); 0.063 (0.209)
Kurd: −0.565** (0.204); −0.502* (0.227); −0.615** (0.207)
Internet (3G) * Alawi: −0.782*** (0.164)
Pop (log): 0.185 (0.167); 0.391* (0.168)
Unempl. (%): −0.019 (0.012); −0.007 (0.012)
AIC: 12050.644; 10116.531; 9739.975; 9570.556; 8038.596; 8197.433; 7735.527
BIC: 12095.321; 10362.255; 9990.166; 9819.517; 8311.125; 8474.431; 8003.589
Log Likelihood: −6015.322; −5003.266; −4813.988; −4729.278; −3958.298; −4036.717; −3807.763
Deviance: 9500.059; 7475.946; 7097.391; 6986.658; 5386.011; 5542.849; 5084.942
Num. obs.: 640; 640; 640; 626; 640; 640; 640
6 For a comprehensive discussion of supervised and unsupervised algorithms for the analysis of text as data, we refer the interested reader to Grimmer et al. (2022).

7 That is, learn P(Y_labeled | X_labeled). This can be accomplished with a variety of models, including, e.g., linear or logistic regression, support vector machines (SVM), Naive Bayes, K-nearest neighbors, etc.

8 Examples of clustering algorithms include K-means and Latent Dirichlet Allocation (LDA).

9 In most political science applications of unsupervised learning techniques, the author either is conducting an exploratory analysis and is therefore uninterested in classification, or performs an ad hoc interpretation of the clusters by reading top examples of a given cluster, and on that basis infers the classification from the clustering (Knox et al., 2022).

10 While Y is not observed for the unlabeled data, these observations do contain information about the joint distribution of the features X, and as such can be used with labeled data to increase the accuracy of a text classifier (Nigam et al., 2000).

11 This is particularly true when, e.g., the researcher knows that the data has a complicated hierarchical structure, since the hierarchy can be incorporated directly into the generative model.

12 Overfitting occurs when a model learns to predict classification outcomes based on patterns in the training set (i.e., the data used to fit the model) that do not generalize to the broader universe of cases to be classified. A model that is overfitted may predict the correct class with an extremely high degree of accuracy for items in the training set, but will perform poorly when used to predict the class for items outside the training set.

15 See also Dasgupta (2011); Settles (2011); Hanneke (2014); Hino (2021) and the references therein.

16 This is just one of many possible approaches. Other uncertainty-based approaches to active learning include query-by-committee, variance reduction, expected model change, etc. We refer the interested reader to Settles (2011) for an accessible review of active learning and Hanneke (2014) for a more technical exposition.

17 While in our presentation we have focused on instances of labeling one observation per iteration, exactly how many observations to select and label at each active iteration is also an important practical consideration for any researcher. As noted by Hoi et al. (2006), to reduce the cost of retraining the model per instance of labeling, labeling many documents per iteration (as a batch) is the best approach. This is especially important when working with a large amount of data.

18 This is due to the fact that the fixed budget has not been set using an optimality criterion other than to stop human coding at some point. See Ishibashi and Hino (2020) for further discussion of this point.

19 For a discussion of this approach in our own application, see Section Model Evaluation.

20 For a full derivation of the EM algorithm, see SI ??.

21 While we assume that these documents are selected randomly, the researcher may choose any subset of labeled documents with which to initialize the model.

26 More information about pre-processing and descriptions of the datasets are in SI ??.

27 While we simulate human coders who label all documents correctly at the labeling stage, this may not be the case in practice because humans can make mistakes. SI ?? shows that honest (random) mistakes in the labeling of documents can hurt the classification performance.

1 See https://quanteda.io

2 See https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge

3 While the dataset also contains finer gradations of 'types' of toxicity, we, like ?, stick to the binary toxic-or-not classification task.

4 https://case.law

5 For a full list of categories, see http://www.supremecourtdatabase.org/documentation.php?var=issueArea.
Airoldi, E. M., Fienberg, S. E., and Skinner, K. K. (2007), "Whose ideas? Whose words? Authorship of Ronald Reagan's radio addresses," PS: Political Science & Politics, 40(3), 501-506.

Altschuler, M., and Bloodgood, M. (2019), "Stopping Active Learning Based on Predicted Change of F Measure for Text Classification," in 2019 IEEE 13th International Conference on Semantic Computing (ICSC), pp. 47-54.

Bishop, C. M., and Lassarre, J. (2007), "Generative or Discriminative? Getting the Best of Both Worlds," Bayesian Statistics, 8, 3-24.

Boydstun, A. E. (2013), Making the News: Politics, the Media, and Agenda Setting, University of Chicago Press.

Catalinac, A. (2016), Electoral Reform and National Security in Japan: From Pork to Foreign Policy, Cambridge University Press.

Cohn, D., Atlas, L., and Ladner, R. (1994), "Improving generalization with active learning," Machine Learning, 15(2), 201-221.

Cordell, R., Clay, K. C., Fariss, C. J., Wood, R. M., and Wright, T. (2021), "Recording repression: Identifying physical integrity rights allegations in annual country human rights reports," International Studies Quarterly.

Dasgupta, S. (2011), "Two Faces of Active Learning," Theoretical Computer Science, 412(19), 1767-1781.

Dempster, A. P., Laird, N. M., and Rubin, D. B. (1977), "Maximum Likelihood from Incomplete Data via the EM Algorithm," Journal of the Royal Statistical Society, Series B, 39(1), 1-38.

Denny, M. J., and Spirling, A. (2018), "Text preprocessing for unsupervised learning: Why it matters, when it misleads, and what to do about it," Political Analysis, 26(2), 168-189.

Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2018), "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805.

Eshima, S., Imai, K., and Sasaki, T. (2020), "Keyword assisted topic models," arXiv preprint arXiv:2004.05964.

Gohdes, A. R. (2020), "Repression technology: Internet accessibility and state violence," American Journal of Political Science, 64(3), 488-503.

Greene, K. T., Park, B., and Colaresi, M. (2019), "Machine learning human rights and wrongs: How the successes and failures of supervised learning algorithms can inform the debate about information effects," Political Analysis, 27(2), 223-230.

Grimmer, J., Roberts, M. E., and Stewart, B. M. (2022), Text as Data: A New Framework for Machine Learning and the Social Sciences, Princeton University Press.

Grimmer, J., and Stewart, B. (2013), "Text as Data: The Promise and Pitfalls of Automatic Content Analysis Methods for Political Texts," Political Analysis, 21(3), 267-297.

Hanneke, S. (2014), "Theory of Disagreement-Based Active Learning," Foundations and Trends in Machine Learning, 7(2-3), 131-309.

Hastie, T., Tibshirani, R., and Friedman, J. (2009), The Elements of Statistical Learning, Springer Series in Statistics, New York, NY, USA: Springer New York Inc.

Hino, H. (2021), "Active Learning: Problem Settings and Recent Developments," Journal of the Japan Statistical Society, Japanese Issue, 50(2), 317-342.

Hoi, S., Jin, R., and Lyu, M. R. (2006), "Large-Scale Text Categorization by Batch Mode Active Learning," in WWW '06: Proceedings of the 15th International Conference on World Wide Web, Edinburgh, Scotland, pp. 633-642.

Ishibashi, H., and Hino, H. (2020), "Stopping criterion for active learning based on deterministic generalization bounds," in Proceedings of the Twenty Third International Conference on Artificial Intelligence and Statistics, eds. S. Chiappa and R. Calandra, Vol. 108 of Proceedings of Machine Learning Research, PMLR, pp. 386-397. URL: https://proceedings.mlr.press/v108/ishibashi20a.html

King, G., Pan, J., and Roberts, M. E. (2017), "How the Chinese government fabricates social media posts for strategic distraction, not engaged argument," American Political Science Review, 111(3), 484-501.

Knox, D., Lucas, C., and Cho, W. K. T. (2022), "Testing Causal Theories with Learned Proxies," Annual Review of Political Science, 25(1), 419-441.

Lewis, D. D., and Gale, W. A. (1994), "A Sequential Algorithm for Training Text Classifiers," in SIGIR '94, eds. B. W. Croft and C. J. van Rijsbergen, Springer London, London, pp. 3-12.

Lowande, K. (2018), "Who Polices the Administrative State?," American Political Science Review, 112(4), 874-890.

Lowande, K. (2019), "Politicization and Responsiveness in Executive Agencies," The Journal of Politics, 81(1), 33-48.

Miller, B., Linder, F., and Mebane, W. R. (2020), "Active Learning Approaches for Labeling Text: Review and Assessment of the Performance of Active Learning Approaches," Political Analysis, pp. 1-20.

Miller, D. J., and Uyar, H. (1996), "A Mixture of Experts Classifier with Learning Based on Both Labelled and Unlabelled Data," in Advances in Neural Information Processing Systems, eds. M. Mozer, M. Jordan, and T. Petsche, Vol. 9, MIT Press.

Motolinia, L. (2021), "Electoral Accountability and Particularistic Legislation: Evidence from an Electoral Reform in Mexico," American Political Science Review, 115(1), 97-113.

Ng, A., and Jordan, M. (2001), "On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes," Advances in Neural Information Processing Systems, 14.

Nielsen, R. A. (2017), Deadly Clerics: Blocked Ambition and the Paths to Jihad, Cambridge University Press.

Nigam, K., McCallum, A. K., Thrun, S., and Mitchell, T. (2000), "Text classification from labeled and unlabeled documents using EM," Machine Learning, 39(2-3), 103-134.

Park, B., Greene, K., and Colaresi, M. (2020), "Human Rights are (Increasingly) Plural: Learning the Changing Taxonomy of Human Rights from Large-scale Text Reveals Information Effects," American Political Science Review, 114(3), 888-910.

Pennington, J., Socher, R., and Manning, C. D. (2014), "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543.
Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification

Neema Kotonya (Department of Computing, Imperial College London)
Thomas Spooner (J.P. Morgan AI Research)
Daniele Magazzeni (J.P. Morgan AI Research)
Francesca Toni (Department of Computing, Imperial College London)

Proceedings of the Fourth Workshop on Fact Extraction and VERification (FEVER) at EMNLP 2021, 10 November 2021.
DOI: 10.18653/v1/2021.fever-1.3. arXiv:2109.12349.

Abstract. This paper presents an end-to-end system for fact extraction and verification using textual and tabular evidence, the performance of which we demonstrate on the FEVEROUS dataset. We experiment with both a multi-task learning paradigm to jointly train a graph attention network for both the task of evidence extraction and veracity prediction, as well as a single objective graph model for solely learning veracity prediction and separate evidence extraction. In both instances, we employ a framework for per-cell linearization of tabular evidence, thus allowing us to treat evidence from tables as sequences. The templates we employ for linearizing tables capture the context as well as the content of table data. We furthermore provide a case study to show the interpretability of our approach. Our best performing system achieves a FEVEROUS score of 0.23 and 53% label accuracy on the blind test data.
Introduction
Fact checking has become an increasingly important tool to combat misinformation. Indeed, the study of automated fact checking in NLP (Vlachos and Riedel, 2014), in particular, has yielded a number of valuable insights in recent times. These include task formulations such as matching for discovering already fact-checked claims (Shaar et al., 2020), identifying neural fake news (Zellers et al., 2020), fact verification in scientific (Wadden et al., 2020) and public health (Kotonya and Toni, 2020b) domains, and end-to-end fact verification (Thorne et al., 2018), which is the subject of the FEVEROUS benchmark dataset (Aly et al., 2021).
A majority of automated fact checking studies only consider text as evidence for verifying claims.
Recently, there have been a number of works which look at fact-checking with structured and semi-structured data, mainly in the form of tables and knowledge bases, but fact-checking from both structured and unstructured data has been largely unexplored. Given the sophistication in the presentation of fake news, it is important to develop fact checking tools for assessing evidence from a wide array of evidence sources in order to reach a more accurate verdict regarding the veracity of claims.
In this work, we propose a graph-based representation that supports both textual and tabular evidence, thus addressing some of the key limitations of past architectures. This approach allows us to capture relations between evidence items as well as claim-evidence pairs, borrowing from the argumentation and argument mining literature (Cabrio and Villata, 2020;Vecchi et al., 2021), as well as argument modeling for fact verification (Alhindi et al., 2018).
We experiment with two formulations for graph learning. For the first, we employ a multi-task learning paradigm to jointly train a graph attention network (Velickovic et al., 2018) for both the task of evidence extraction -which we model as a node selection task -and a graph-level veracity prediction task. In the second, we explicitly separate the verification and extraction tasks, where standard semantic search is used for evidence extraction, and veracity prediction is treated as a graph-level classification problem.
For veracity prediction we predict a label for each claim, one of SUPPORTS, REFUTES, or NOT-ENOUGH-INFO (NEI), which is conditioned on all relevant evidence, hence the intuition to frame veracity prediction as a graph-level prediction task. In both formulations, we employ context-aware table linearization templates to produce per-cell sequence representations of tabular evidence and thus construct evidence reasoning graphs where nodes have heterogeneous evidence types (i.e., representing sentences and tables on the same evidence reasoning graph).
Contributions. The three main contributions of the paper are summarized below:
1. Provide insightful empirical analysis of the new FEVEROUS benchmark dataset.
2. Propose a novel framework for interpretable fact extraction using templates to derive context-aware per-cell linearizations.
3. Present a graph reasoning model for fact verification that supports both structured and unstructured evidence data.
Both the joint model and separately trained models exhibit a significant improvement over the FEVEROUS baseline, as well as significant improvements for label accuracy and evidence recall. Our separated approach to fact extraction and verification achieves a FEVEROUS score of 0.23 and label accuracy of 53% on the blind test data.
Related Work
Graph Reasoning for Fact Verification. Several works explore graph neural networks (GNN) for fact extraction and verification, both for finegrained evidence modelling (Liu et al., 2020;Zhong et al., 2020) and evidence aggregation for veracity prediction (Zhou et al., 2019). Furthermore, graph learning has also been leveraged to build fake news detection models which learn from evidence from different contexts; e.g., user-based and content-based data (Liu et al., 2020;Lu and Li, 2020). There are also non-neural approaches to fake news detection with graphs (Ahmadi et al., 2019;Kotonya and Toni, 2019). However, to the best of our knowledge, this work is the first to employ a graph structure to jointly reason over both text and tabular evidence data in both single task learning (STL) and multi-task learning (MTL) settings.
Data Analysis
Further to the FEVEROUS dataset statistics discussed by the task description paper (Aly et al., 2021), we perform our own data exploration. We present insights from our data analysis of the FEVEROUS dataset, which we use to inform system design choices.
Table types. Wikipedia tables can be categorized into one of two classes: infoboxes and general tables. Infoboxes are fixed format tables which typically appear in the top right-hand corner of a Wikipedia article. General tables can convey a wider breadth of information (e.g., election results, sports match scores, the chronology of an event) and typically have more complex structures (e.g., multiple headers). List items can also be considered as a special subclass of tables, where the number of items is analogous to the number of columns and the nests of the list signify table rows.
Evidence types. The first observation we make is that, similar to the FEVER dataset (Thorne et al., 2018), a sizeable portion of the training instances rely on evidence items which are extracted from the first few sentences of a Wikipedia article. The most common evidence items are the first and second sentences in a Wikipedia article, which appear in 36% and 18% of evidence sets, respectively. The four most frequent evidence cells all come from the first table, with 49% of first tables listed as evidence in the train and dev data being infoboxes. Further, the vast majority of cell evidence items are non-header cells; header cells account for only approximately 5.1% of tabular evidence in the train and dev datasets. A summary of these findings is provided in Table 1 for the most common evidence types in the training data.
Evidence item co-occurrences. We investigate the most common evidence pairs, both in individual evidence sets and also in the union of all evidence sets relating to a claim. The most common evidence pair in the training data is (SENTENCE_0, SENTENCE_1), which accounts for 3.2% of evidence co-occurrences; the most common cell-level pair is (CELL_0_2_0, CELL_0_2_1). All of the ten most common co-occurrences either contain one of the first four sentences in an article or evidence from one of the first two tables.
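Counts of this kind can be reproduced with a few lines of Python. The following is a minimal sketch, assuming the train split is a JSONL file and that each instance exposes its evidence sets as lists of evidence ids; this field layout is a simplifying assumption, not the exact FEVEROUS schema.

```python
from collections import Counter
from itertools import combinations
import json

pair_counts = Counter()
with open("train.jsonl") as f:                # hypothetical path to the train split
    for line in f:
        instance = json.loads(line)
        for evidence_set in instance.get("evidence", []):
            ids = sorted(set(evidence_set))   # dedupe items within one set
            # count unordered pairs of evidence items that co-occur in a set
            pair_counts.update(combinations(ids, 2))

for pair, count in pair_counts.most_common(10):
    print(pair, count)
```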
NEI label. Lastly, we choose to explore instances of the NEI class. We sample 100 instances of NEI claims from the training data and note their qualitative attributes. We pay particular attention to this label as it is the least represented in the data. Unlike the FEVER score, the FEVEROUS metric requires the correct evidence, as well as the label, to be supplied for an NEI instance for credit to be awarded. Our analysis is summarized in Table 2. We categorize mutations, using the FEVEROUS annotation scheme, as one of three types: entity substitution, including more facts than available in the provided evidence (i.e., including additional propositions), and paraphrasing or generalizing. We use Other to categorize claims with a mutation not captured by one of these three categories.
Mutation Type | % Sample
Entity Substitution | 21%
More facts than in evidence | 42%
Paraphrasing or generalizing | 36%
Other | 1%

We note that a number of NEI examples are mutations of SUPPORTS or REFUTES examples. For example, the claim in Table 3 is a mutation of a SUPPORTS instance where entity substitution (humans → reptiles) has been used to make the first clause unverifiable, hence changing the label to NEI.
Claim: Nucleoporin 153, a protein which in reptiles is encoded by the NUP153 gene, is an essential component of the basket of nuclear pore complexes (NPCs) in vertebrates, and required for the anchoring of NPCs.

Evidence: Nucleoporin 153 (Nup153) is a protein which in humans is encoded by the NUP153 gene. It is an essential component of the basket of nuclear pore complexes (NPCs) in vertebrates, and required for the anchoring of NPCs.
Methods
Our proposed method for fact verification is an end-to-end system comprising three modules:
(1) A robust document retrieval procedure (see Section 4.1).
(2) An evidence graph construction and intermediate evidence filtering process (see Section 4.2).
(3) A joint veracity label prediction and evidence selection layer that reasons over the evidence graph (see Section 4.3).
An illustration of the complete pipeline is provided in Figure 1, and details of each processing stage are provided in the following sections.
Document Retrieval
For document retrieval, we employ an entity linking and API search approach similar to that of Hanselowski et al. (2018). The WikiMedia API is used to query Wikipedia for articles related to the claim, using named entities and noun phrases from the claim as search terms. These retrieved Wikipedia page titles form our candidate document set. Named entities that are not retrieved by the API are then extracted from the claim, as a handful of these identify pages which are present in the Wikipedia dump (e.g., /wiki/Lars_Hjorth is present in the provided Wikipedia evidence dump, but is not returned by the WikiMedia API). In the same vein, we discard titles which are returned by the API but are not in the Wikipedia dump. TF-IDF and cosine similarity are employed to score and rerank the retrieved Wikipedia articles with respect to their similarity to the claim. As in the approach of Hanselowski et al. (2018), the seven highest ranked pages are chosen at test time. For completeness, we also experiment with approaches to document retrieval which select pages based on a threshold score (Nie et al., 2019). Ultimately, we find these methods yield lower precision.

Figure 1: Our fact verification pipeline. We employ two graph reasoning approaches: STL, where evidence extraction and veracity prediction are modelled separately, and MTL, where further evidence filtering is performed jointly with veracity prediction by the Graph Reasoner.
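A minimal sketch of the TF-IDF reranking step is shown below, assuming scikit-learn; the `claim` and `candidates` inputs (a dict from page title to page text) are illustrative stand-ins rather than the system's actual interface.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rerank_pages(claim, candidates, k=7):
    """Return the k candidate page titles most similar to the claim."""
    titles = list(candidates)
    vectorizer = TfidfVectorizer()
    # fit on the claim plus candidate texts so they share one vocabulary
    matrix = vectorizer.fit_transform([claim] + [candidates[t] for t in titles])
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    ranked = sorted(zip(titles, scores), key=lambda x: x[1], reverse=True)
    return [title for title, _ in ranked[:k]]
```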
Evidence Reasoning Graph
Similar to other fact verification systems (Augenstein et al., 2019; Hidey et al., 2020), we jointly train our model for both the evidence selection and veracity prediction tasks. In contrast to these approaches, however, we employ a graph reasoning module for the joint learning of the two tasks. We choose this approach to exploit the permutation invariance of evidence with respect to a claim, as there is no canonical ordering of evidence. Our graph formulation differs from previous graph-based fact verification systems in that we construct a heterogeneous graph to model both tabular and sequence evidence data.

In the following sections we will describe two specific approaches that are taken for the fact verification task: (1) where we condition the graph model to learn both node-level, fine-grained evidence selection and graph-level veracity label prediction simultaneously, and (2) where we only learn graph-level veracity prediction.
Linearizing Tabular Data. We linearize both table and list evidence data and generate from these linearizations a contextualized sequence representation which captures information about each cell as well as its surrounding page elements. This is accomplished using templates that distinguish explicitly between infoboxes and general tables. For the latter, we engineer the templates to handle two particular complexities that are present only in general tables: (1) nested headers, and (2) table cells which span multiple rows and multiple columns (see Figure 2). Furthermore, we also employ templates for producing context-rich representations of item lists (see Table 4 for more details).

Graph Structure. We construct a fully connected graph G = (V, E), where each node n_i ∈ V represents a claim-evidence pair, similar to previous evidence graphs for automated fact checking (Zhao et al., 2020; Zhou et al., 2019). Self-loops are also included in G for each node in order to improve evidence reasoning, so the set of edges for the graph is E = {(n_i, n_j) | n_i, n_j ∈ V}.
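To make the topology concrete, the following sketch materialises the fully connected edge set E, self-loops included; the [2, |E|] edge_index layout follows the PyTorch Geometric convention, which is an assumed tooling choice rather than one stated in the text.

```python
import torch

def fully_connected_edge_index(num_nodes):
    # every ordered pair (i, j), including i == j, is an edge
    idx = torch.arange(num_nodes)
    src = idx.repeat_interleave(num_nodes)
    dst = idx.repeat(num_nodes)
    return torch.stack([src, dst], dim=0)  # shape [2, num_nodes ** 2]

print(fully_connected_edge_index(3))
```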
At test time, we take the Wikipedia pages output by the document retrieval module, segment each Wikipedia page into its constituent page items (i.e., sentences, table cells, table captions and list items), and refer to these as evidence items. These evidence items are then filtered. Using an ensemble of pre-trained S-BERT sentence embeddings (Reimers and Gurevych, 2019), we perform semantic search with the claim as our query. Cosine similarity is then used to rank the evidence items. For the joint and single training approaches, we select a different number of evidence nodes; in particular, a larger graph is used with the former. For training, we select nodes to occupy the graph according to the following rule-set:
(1) If gold evidence, include as a node.
(2) For claims that require a single evidence item, include the top four candidates returned using our semantic search approach as nodes.
(3) For claims with more than one gold evidence item, retrieve the same number of candidates as gold items.
The union of these sets forms the collection of nodes, V, that occupy the evidence graph G. For each evidence item, we feed the claim-evidence sequence pair to a RoBERTa encoder (Liu et al., 2019), and each node n_i ∈ V in an evidence graph has the pooled output of the last hidden state of the [CLS] token, h_i^0, as its initial state:
n_i = h_i^0 = RoBERTa_CLS(c, e_i).    (1)
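A minimal sketch of Equation (1) using the HuggingFace transformers implementation of RoBERTa is given below; "roberta-large" stands in for the NLI-pretrained checkpoint described in the hyper-parameter settings, whose exact identifier is not given in the text.

```python
import torch
from transformers import RobertaModel, RobertaTokenizer

tokenizer = RobertaTokenizer.from_pretrained("roberta-large")
model = RobertaModel.from_pretrained("roberta-large").eval()

def node_embedding(claim, evidence_seq):
    """h_i^0: last-layer hidden state of the <s> ([CLS]) token."""
    inputs = tokenizer(claim, evidence_seq, truncation=True,
                       max_length=512, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0, :].squeeze(0)
```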
Evidence Selection and Veracity Prediction
Training graphs. We train two graph networks, one for joint veracity prediction and evidence extraction, and the second solely for the veracity prediction task.
Oversampling NEI Instances. As discussed in Section 3, the FEVEROUS dataset suffers from a significant class imbalance with respect to the NEI instances. Similar to the baseline approach, we employ techniques for generating new NEI instances in order to address this issue. Concretely, we use two data augmentation strategies in order to increase the number of NEI instances at train time: (1) evidence set reduction, and (2) claim mutation. For the first case, we randomly sample SUPPORTS and REFUTES instances and drop evidence. Given the distribution of entity-substituted and non-entity-substituted mutations, as discovered in our data analysis (see Section 3), we make the choice to include in the training data 15,000 constructed NEI examples made using the first approach and 5,946 NEI examples constructed using the second. This means that a total of 92,237 NEI examples were used for model training.
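The first augmentation strategy can be sketched as follows; the instance dictionary layout ("claim", "evidence", "label" keys) is an assumption made for illustration, not the dataset's actual schema.

```python
import random
from typing import Optional

def make_nei_by_dropping_evidence(instance: dict, rng: random.Random) -> Optional[dict]:
    """Turn a verifiable instance into NEI by removing one evidence item."""
    if instance["label"] not in {"SUPPORTS", "REFUTES"} or len(instance["evidence"]) < 2:
        return None  # need several items so the remainder under-specifies the claim
    kept = rng.sample(instance["evidence"], k=len(instance["evidence"]) - 1)
    return {"claim": instance["claim"], "evidence": kept, "label": "NOT ENOUGH INFO"}
```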
STL: Separate Verification and Extraction.
For the first model, we perform the tasks of evidence selection and veracity prediction separately. We make use of an ensemble semantic search method for extracting the top evidence items for claims. We employ S-BERT to encode the claim and the evidence items separately, and then compute cosine similarity for each claim-evidence pair. The 25 highest-ranking tabular evidence items were chosen, and the top-scoring 5 sentences (and captions) for each claim were selected as the nodes of our evidence reasoning graph at test time; this is the evidence limit stated by the FEVEROUS metric.
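A sketch of this semantic search step is shown below, assuming the sentence-transformers library; the two checkpoint names follow footnote 3, and summing the two models' cosine scores is our assumed ensembling strategy, since the text does not specify how the ensemble is combined.

```python
from sentence_transformers import SentenceTransformer, util

models = [SentenceTransformer("msmarco-distilbert-base-v4"),
          SentenceTransformer("paraphrase-mpnet-base-v2")]

def rank_evidence(claim, evidence_items, top_k):
    """Rank evidence items by (summed) cosine similarity to the claim."""
    scores = None
    for model in models:
        claim_emb = model.encode(claim, convert_to_tensor=True)
        ev_embs = model.encode(evidence_items, convert_to_tensor=True)
        sims = util.cos_sim(claim_emb, ev_embs)[0]
        scores = sims if scores is None else scores + sims
    ranked = scores.argsort(descending=True)[:top_k]
    return [evidence_items[int(i)] for i in ranked]
```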
When constructing the evidence graph at test time, we choose to exclude header cells and list items as evidence-node types, as they account for a very small portion of evidence items (see Section 3) and experimentation shows that the evidence extraction model has a bias to favour these evidence elements over sentences. We use two GAT layers in our graph reasoning model, with a hidden layer size of 128, an embedding size of 1024, and a global attention layer for node aggregation. The logits generated by the model are fed directly to a categorical cross entropy loss function, and the veracity label output probability distribution p_i, for each evidence graph G_i ∈ G, is computed using the relation
p_i = softmax(MLP(W o_i + b)),    (2)

where

o_i = Σ_{n ∈ V} softmax(h_gate(x_n)) · h_Θ(x_n).    (3)
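A minimal sketch of the single-task classifier implied by Equations (2)-(3) is given below, assuming PyTorch Geometric (older releases expose the pooling in (3) as GlobalAttention; newer ones rename it AttentionalAggregation). Layer sizes follow the text; activations and other details are illustrative choices.

```python
import torch
from torch import nn
from torch_geometric.nn import GATConv, GlobalAttention

class STLGraphClassifier(nn.Module):
    def __init__(self, in_dim=1024, hidden=128, num_classes=3):
        super().__init__()
        self.gat1 = GATConv(in_dim, hidden)
        self.gat2 = GATConv(hidden, hidden)
        # global attention pooling: softmax(h_gate(x_n)) weights h_theta(x_n)
        self.pool = GlobalAttention(gate_nn=nn.Linear(hidden, 1),
                                    nn=nn.Linear(hidden, hidden))
        self.mlp = nn.Linear(hidden, num_classes)

    def forward(self, x, edge_index, batch):
        x = torch.relu(self.gat1(x, edge_index))
        x = torch.relu(self.gat2(x, edge_index))
        o = self.pool(x, batch)   # one pooled vector o_i per evidence graph
        return self.mlp(o)        # logits, fed to categorical cross entropy
```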
MTL: Joint Verification and Extraction. We also experiment with a joint training or multi-task learning (MTL) approach in order to explore whether simultaneously learning the veracity label and evidence items can lead to improvements in the label accuracy metric and also in evidence prediction recall and precision. For this approach, we construct larger evidence graphs at test time, including the thirty-five highest ranked evidence items according to the S-BERT evidence extraction module. The intention is for the graph network to learn a binary classification for each claim-evidence pair in the network. For the multi-task learning model, we increase the dimensions of our graph network by feeding our initial input graphs to two separate GAT components (in order to increase the model's capacity for learning the more complex multi-task objective), the outputs of which, h_a and h_b, are concatenated to form a representation h over which we compute global attention, where the combined representation takes the form:

h = [h_a; h_b].    (4)
The binary cross entropy loss is then used for the node-level evidence selection task, and, as with the separated model, we use categorical cross entropy to compute the graph-level veracity prediction, as shown in (2) and (3). The resulting joint graph neural network is then trained with the linear-additive objective
L_joint = λ L_evidence + L_label,    (5)
taking the form of a Lagrangian with multiplier λ ≥ 0. Here L_evidence is the binary cross entropy computed over the node-level evidence scores

p_evidence = sigmoid(MLP(W_i h + b)).    (6)
As with the previous approach, we feed the model logits to our loss functions and use an Adam optimizer to train the network, and set λ = 0.5.
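The joint objective of Equations (5)-(6) can be sketched as below; `node_logits` (one score per node, for evidence selection) and `graph_logits` (one row per graph, for veracity) are illustrative names for the two heads' outputs, not the authors' actual code.

```python
import torch.nn.functional as F

def joint_loss(node_logits, node_labels, graph_logits, graph_labels, lam=0.5):
    # node-level evidence selection: binary cross entropy on sigmoid scores
    l_evidence = F.binary_cross_entropy_with_logits(node_logits, node_labels.float())
    # graph-level veracity prediction: categorical cross entropy
    l_label = F.cross_entropy(graph_logits, graph_labels)
    return lam * l_evidence + l_label
```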
Hyper-parameter Settings
For all models, we make use of a ROBERTA-LARGE model which is pre-trained on a number of NLI datasets, including NLI-FEVER (Nie et al., 2020). We use a maximum sequence length of 512 for encoding all concatenated claim-evidence pairs. We experiment with learning rates of 1e-5, 5e-5, and 1e-4, ultimately choosing the best-performing value. Training was performed using a batch size of 64. We train the single objective model for 20k steps, choosing the weights with the minimum veracity prediction label loss, and train the joint model for 20k steps, taking the model with the highest recall for evidence extraction. The Adam optimizer is used in training for both approaches.
Results
We report the results of the entire fact extraction and verification pipeline, as well as the evaluation of the pipeline's performance for intermediate stages of the fact verification system, e.g., document retrieval and evidence selection.
Document retrieval. Our method for DR shows significant improvement on the TF-IDF+DrQA approach used by the baseline. In particular we find that our document retrieval module sees gains from querying the Wikipedia dump for pages related to entities which are not retrieved by the WikiMedia API. However, we do note that our approach struggles to retrieve Wikipedia pages in cases relating to specific events which can only be inferred through reasoning over the claim.
For example, consider the following claim from the development dataset: "2014 Sky Blue FC season number 18 Lindsi Cutshall (born October 18, 1990) played the FW position.". In this case, the document selection process returns "Sky Blue FC", "Lindsi Cutshall", and "2015 Sky Blue FC season", but does not return the gold evidence page "2014 Sky Blue FC season" which is required for verification of the claim.
We report recall@k for k = {3, 5, 7} where k is the number of Wikipedia page documents retrieved by the module. Our approach shows significant improvements over the baseline (see Table 5).
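For reference, the metric can be computed as sketched below under one common convention (a claim counts as covered when all of its gold pages appear in the top k); the paper does not spell out its exact variant, so this convention is an assumption.

```python
def recall_at_k(gold_pages, retrieved, k):
    """gold_pages: list of sets of gold titles; retrieved: ranked title lists."""
    hits = sum(1 for gold, pages in zip(gold_pages, retrieved)
               if gold.issubset(set(pages[:k])))
    return hits / len(gold_pages)

print(recall_at_k([{"A"}, {"B", "C"}], [["A", "X"], ["B", "X", "C"]], k=2))  # 0.5
```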
Method | Rec@3 | Rec@5 | Rec@7
Baseline | 0.58 | 0.69 | –
Ours | 0.65 | 0.73 | 0.80

Evidence selection and veracity prediction.
For evidence selection and veracity prediction, we observe that the approach trained for the single objective of veracity prediction marginally outperforms the jointly trained module (see Table 6). We hypothesize that the difficulty of learning to select the correct evidence nodes along with predicting veracity might be the cause of this. It is possible that performance of the joint model could be improved with better evidence representation or through the use of a different graph structure, e.g., by incorporating edge attributes. Finally, we submitted our blind test results for STL, which is our best performing method, to the after-competition FEVEROUS leaderboard. Our system outperforms the baseline significantly on both the FEVEROUS metric and also label accuracy, as reported in Table 7. Furthermore, our results on the blind test data show almost no degradation from development to test set with respect to evidence recall, which remains at 37%. The reduction in our FEVEROUS score between the development and test data is therefore mainly due to a decrease in label accuracy, from 63% on the development data to 53% on the test data. We are confident that this could be improved with better label accuracy for the NEI class.
Case Study and System Interpretability
We present an example of a claim from the development dataset which requires both tabular and textual evidence to be verified, and show how it is labelled by our pipeline (see Table 8). For this example, our evidence selection module correctly identifies all three evidence items required to fact-check the claim. Furthermore, two of the three evidence items receive the highest relevance scores from our evidence selection module. Of the irrelevant evidence items retrieved for this claim, eleven out of twenty-two come from an unrelated Wikipedia page ("Scomadi Turismo Leggera"). The correct label of SUPPORTS is also predicted for this instance. In order to explore the interpretability of system predictions, for this same instance we analyse the node attention weights for the first GAT layer; they are shown in parentheses for each predicted evidence item in Table 8. We can see that the two evidence nodes with the highest values both correspond to items in the gold evidence set. However, the third gold evidence item, SCOMADI_SENTENCE_15, has a much lower weight than a number of items which are not in the gold evidence set.
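The attention inspection described above can be sketched as follows, using PyTorch Geometric's GATConv, which can return its edge attention coefficients; averaging the incoming attention per node as a relevance score is our assumption, not necessarily the paper's exact aggregation.

```python
import torch
from torch_geometric.nn import GATConv

conv = GATConv(16, 8)
x = torch.randn(5, 16)                                   # five toy evidence nodes
edge_index = torch.combinations(torch.arange(5), r=2).t()
out, (edges, alpha) = conv(x, edge_index, return_attention_weights=True)
# sum incoming attention per target node, then average over its in-degree
node_scores = torch.zeros(5).index_add_(0, edges[1], alpha.squeeze(-1))
in_degree = torch.bincount(edges[1], minlength=5).clamp(min=1)
print(node_scores / in_degree)
```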
Conclusion and Future Work
In this work, we have demonstrated two novel approaches for fact extraction and verification that support both structured and unstructured evidence. These architectures were motivated by literature in argumentation, and also by the empirical analysis presented in Section 3. Our results show significant improvement over the shared task baseline for both the joint and separated models.
Claim: "In 2019, Scomadi, a private limited company with limited liability, was bought by a British owner which changed Scomadi's management structure."
Gold Evidence: Scomadi_cell_0_0_1, Scomadi_sentence_14, Scomadi_sentence_15.
Predicted Evidence: (1) Scomadi_cell_0_0_1 (0.1794), (2) ...
Incorporating prior knowledge or constraints into the training procedure would also be an interesting direction. Finally, we believe that our graph-based approach lends itself well to the extraction of veracity prediction explanations (Kotonya and Toni, 2020a), obtained from evidence extracted from our underpinning graphs as justifications for claims. The ability to provide evidence for a claim, and to justify this, would better enable the integration of these techniques in practical systems.
Disclaimer This paper was prepared for informational purposes by the Artificial Intelligence Research group of JPMorgan Chase & Co and its affiliates ("J.P. Morgan"), and is not a product of the Research Department of J.P. Morgan. J.P. Morgan makes no representation and warranty whatsoever and disclaims all liability, for the completeness, accuracy or reliability of the information contained herein. This document is not intended as investment research or investment advice, or a recommendation, offer or solicitation for the purchase or sale of any security, financial instrument, financial product or service, or to be used in any way for evaluating the merits of participating in any transaction, and shall not constitute a solicitation under any jurisdiction or to any person, if such solicitation under such jurisdiction or to such person would be unlawful. © 2021 JPMorgan Chase & Co. All rights reserved.
Table Linearization. A number of approaches have been adopted in NLP for table linearization. For example, Gupta et al. (2020) study natural language inference in the context of table linearizations; in particular, they are interested to see if language models can infer entailment relations from table linearizations. The linearization approach employed by Schlichtkrull et al. (2021) is also used for automated fact verification. However, they linearize tables row- and column-wise, whereas we focus on cells, as evidence items in the FEVEROUS dataset are annotated at table-cell level.
Table 2: We sample 100 NEI instances and categorize them according to the type of lexical mutation which results in the claim being unverifiable.

Table 3: NEI example where the evidence is highlighted according to the part of the claim to which it refers. The text in bold is the substitution which resulted in the label changing from SUPPORTS to NEI.
[Figure 1 diagram: an example claim, "Wolfgang Niedecken is a German rock musician who founded the Kölsch speaking rock group BAP at the end of the 1970s", passes through the Document Retriever, Context-Aware Linearizer, Evidence Ranker and Graph Reasoner, which output the label "SUPPORTS" and the evidence set ["Wolfgang Niedecken_sentence_0", "Wolfgang Niedecken_cell_0_4_1", "Wolfgang Niedecken_sentence_1"].]
Evidence Type | Linearization | Example from FEVEROUS dataset
Infoboxes, headers | TABLE has CELL_I_J [in SUBHEADER] | Brewster Productions has Genres. [/wiki/Brewster_Productions]
Infoboxes, non-headers | CELL_I_0 of TABLE [in SUBHEADER] is CELL_I_J | Current ranking of Barbora Krejčíková in Singles is No. 65 (16 November 2020). [/wiki/Barbora_Krejcikova]
General tables, headers | TABLE has CELL_I_J [in SUBHEADER] | The 1964 United States Senate election in Maine has Party. [/wiki/1908_Clemson_Tigers_football_team]
General tables, non-headers | TABLE/PAGE has SUBHEADER_0 CELL_I_0 in SUBHEADER_J of CELL_I_J | 2014 Ladies European Tour has Rank 9 in Player of Florentyna Parker. [/wiki/2014_Ladies_European_Tour]
List items, without subheaders | TITLE includes ITEM_I_J | Site includes Location, a point or an area on the Earth's surface or elsewhere. [/wiki/Site]
List items, with subheaders | SUBHEADERS for TITLE includes ITEM_I_J | The Player Honours for Park Sang-in includes K-League BestXI: 1985. [/wiki/Park_Sang-in]

Table 4: Templates for encoding tabular evidence. CELL_I_0, SUBHEADER_0, SUBHEADER_J, SUBHEADERS, TABLE, TITLE and PAGE are all context elements; ITEM_I_J denotes list item content and CELL_I_J denotes table cell content.
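As an illustration, the two infobox templates from Table 4 can be rendered by a function like the following; the signature and context fields are assumptions made for the sketch, not the system's actual interface.

```python
from typing import Optional

def linearize_infobox_cell(table_title, row_header, cell_value,
                           is_header, subheader: Optional[str] = None):
    ctx = f" in {subheader}" if subheader else ""
    if is_header:
        # "TABLE has CELL_I_J [in SUBHEADER]"
        return f"{table_title} has {cell_value}{ctx}."
    # "CELL_I_0 of TABLE [in SUBHEADER] is CELL_I_J"
    return f"{row_header} of {table_title}{ctx} is {cell_value}."

print(linearize_infobox_cell("Barbora Krejcikova", "Current ranking",
                             "No. 65 (16 November 2020)", False, "Singles"))
```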
Node Representations. For the initial node representations, similar to Liu et al. (2020) and Zhao et al. (2020), we represent evidence nodes with the claim to which they refer as context. The claim is concatenated with a constructed context-rich evidence sequence e_i. When constructing the sequences e_i, we consider the unstructured evidence items (i.e., sentences and table captions) and the structured table and list items separately. For sentences and table captions, the evidence sequence is generated by concatenating the evidence item with the page title, which serves as context. For table cells and list items, we perform a per-cell linearization, and this linearization forms the evidence sequence (see Table 4 for the templates used).
Table 5: Document retrieval results measured by Recall@k, where k is the number of documents retrieved. Results reported for the dev set.
Table 6: System performance on the dev set for evidence recall and label accuracy.
Table 7: Results for label accuracy (LA) and FEVEROUS score (FS) for the full pipeline on both the development and blind test datasets.
Table 8: Example claim from the development dataset which requires extracting both tabular and textual evidence in order for it to be verified. For brevity we only show the top fourteen (out of twenty-five) extracted evidence items; correctly predicted evidence is highlighted.
* Work done while the author was an intern at J.P. Morgan AI Research.
1 This system was not submitted to the shared task competition, but instead to the after-competition leaderboard under the name CARE (Context Aware REasoner).
https://www.mediawiki.org/wiki/API
We use the 'msmarco-distilbert-base-v4' and 'paraphrase-mpnet-base-v2' pretrained models.
We denote the concatenation of vectors x and y by [x; y].
Naser Ahmadi, Joohyung Lee, Paolo Papotti, and Mohammed Saeed. 2019. Explainable fact checking with probabilistic answer set programming. In Proceedings of the 2019 Truth and Trust Online Conference (TTO 2019), London, UK, October 4-5, 2019.

Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving fact-checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 85-90, Brussels, Belgium. Association for Computational Linguistics.

Rami Aly, Zhijiang Guo, Michael Schlichtkrull, James Thorne, Andreas Vlachos, Christos Christodoulopoulos, Oana Cocarascu, and Arpit Mittal. 2021. FEVEROUS: Fact extraction and verification over unstructured and structured information.

Isabelle Augenstein, Christina Lioma, Dongsheng Wang, Lucas Chaves Lima, Casper Hansen, Christian Hansen, and Jakob Grue Simonsen. 2019. MultiFC: A real-world multi-domain dataset for evidence-based fact checking of claims. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4685-4697, Hong Kong, China. Association for Computational Linguistics.

Elena Cabrio and Serena Villata, editors. 2020. Proceedings of the 7th Workshop on Argument Mining. Association for Computational Linguistics, Online.

Wenhu Chen, Hongmin Wang, Jianshu Chen, Yunkai Zhang, Hong Wang, Shiyang Li, Xiyou Zhou, and William Yang Wang. 2020. TabFact: A large-scale dataset for table-based fact verification. In International Conference on Learning Representations (ICLR), Addis Ababa, Ethiopia.

Vivek Gupta, Maitrey Mehta, Pegah Nokhiz, and Vivek Srikumar. 2020. INFOTABS: Inference on tables as semi-structured data. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2309-2324, Online. Association for Computational Linguistics.

Andreas Hanselowski, Hao Zhang, Zile Li, Daniil Sorokin, Benjamin Schiller, Claudia Schulz, and Iryna Gurevych. 2018. UKP-Athene: Multi-sentence textual entailment for claim verification. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 103-108, Brussels, Belgium. Association for Computational Linguistics.

Christopher Hidey, Tuhin Chakrabarty, Tariq Alhindi, Siddharth Varia, Kriste Krstovski, Mona Diab, and Smaranda Muresan. 2020. DeSePtion: Dual sequence prediction and adversarial examples for improved fact-checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8593-8606, Online. Association for Computational Linguistics.

Neema Kotonya and Francesca Toni. 2019. Gradual argumentation evaluation for stance aggregation in automated fake news detection. In Proceedings of the 6th Workshop on Argument Mining, pages 156-166, Florence, Italy. Association for Computational Linguistics.

Neema Kotonya and Francesca Toni. 2020a. Explainable automated fact-checking: A survey. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5430-5443, Barcelona, Spain (Online). International Committee on Computational Linguistics.

Neema Kotonya and Francesca Toni. 2020b. Explainable automated fact-checking for public health claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7740-7754, Online. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach.

Zhenghao Liu, Chenyan Xiong, Maosong Sun, and Zhiyuan Liu. 2020. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351, Online. Association for Computational Linguistics.

Yi-Ju Lu and Cheng-Te Li. 2020. GCAN: Graph-aware co-attention networks for explainable fake news detection on social media. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 505-514, Online. Association for Computational Linguistics.

Yixin Nie, Songhe Wang, and Mohit Bansal. 2019. Revealing the importance of semantic retrieval for machine reading at scale.

Yixin Nie, Adina Williams, Emily Dinan, Mohit Bansal, Jason Weston, and Douwe Kiela. 2020. Adversarial NLI: A new benchmark for natural language understanding. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4885-4901, Online. Association for Computational Linguistics.

Nils Reimers and Iryna Gurevych. 2019. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China. Association for Computational Linguistics.

Michael Sejr Schlichtkrull, Vladimir Karpukhin, Barlas Oguz, Mike Lewis, Wen-tau Yih, and Sebastian Riedel. 2021. Joint verification and reranking for open fact checking over tables. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 6787-6799, Online. Association for Computational Linguistics.

Shaden Shaar, Nikolay Babulkov, Giovanni Da San Martino, and Preslav Nakov. 2020. That is a known lie: Detecting previously fact-checked claims. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3607-3618, Online. Association for Computational Linguistics.

James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819, New Orleans, Louisiana. Association for Computational Linguistics.

Eva Maria Vecchi, Neele Falk, Iman Jundi, and Gabriella Lapesa. 2021. Towards argument mining for social good: A survey. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1338-1352, Online. Association for Computational Linguistics.

Petar Velickovic, Guillem Cucurull, Arantxa Casanova, Adriana Romero, Pietro Liò, and Yoshua Bengio. 2018. Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Fact checking: Task definition and dataset construction. Andreas Vlachos, Sebastian Riedel, 10.3115/v1/W14-2508Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science. the ACL 2014 Workshop on Language Technologies and Computational Social ScienceBaltimore, MD, USAAssociation for Computational LinguisticsAndreas Vlachos and Sebastian Riedel. 2014. Fact checking: Task definition and dataset construction. In Proceedings of the ACL 2014 Workshop on Lan- guage Technologies and Computational Social Sci- ence, pages 18-22, Baltimore, MD, USA. Associa- tion for Computational Linguistics.
Fact or fiction: Verifying scientific claims. David Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine Van Zuylen, Arman Cohan, Hannaneh Hajishirzi, 10.18653/v1/2020.emnlp-main.609Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsOnlineDavid Wadden, Shanchuan Lin, Kyle Lo, Lucy Lu Wang, Madeleine van Zuylen, Arman Cohan, and Hannaneh Hajishirzi. 2020. Fact or fiction: Verify- ing scientific claims. In Proceedings of the 2020 Conference on Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 7534-7550, On- line. Association for Computational Linguistics.
Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news. Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner, and Yejin Choi. 2020. Defending against neural fake news.
Transformer-xh: Multi-evidence reasoning with extra hop attention. Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, Saurabh Tiwary, International Conference on Learning Representations. Chen Zhao, Chenyan Xiong, Corby Rosset, Xia Song, Paul Bennett, and Saurabh Tiwary. 2020. Transformer-xh: Multi-evidence reasoning with ex- tra hop attention. In International Conference on Learning Representations.
Reasoning over semantic-level graph for fact checking. Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, Jian Yin, 10.18653/v1/2020.acl-main.549Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsWanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Reasoning over semantic-level graph for fact checking. In Proceedings of the 58th Annual Meet- ing of the Association for Computational Linguistics, pages 6170-6180, Online. Association for Computa- tional Linguistics.
GEAR: Graph-based evidence aggregating and reasoning for fact verification. Jie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, Maosong Sun, 10.18653/v1/P19-1085Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsJie Zhou, Xu Han, Cheng Yang, Zhiyuan Liu, Lifeng Wang, Changcheng Li, and Maosong Sun. 2019. GEAR: Graph-based evidence aggregating and rea- soning for fact verification. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 892-901, Florence, Italy. Association for Computational Linguistics.
| [] |
[
"Visually Grounded, Situated Learning in Neural Models",
"Visually Grounded, Situated Learning in Neural Models"
] | [
"Alexander G Ororbia \nThe Pennsylvania State University University Park\n16802PAUSA\n",
"Ankur Mali \nThe Pennsylvania State University University Park\n16802PAUSA\n",
"Matthew A Kelly matthew.kelly@psu.edu \nThe Pennsylvania State University University Park\n16802PAUSA\n",
"David Reitter reitter@psu.edu \nThe Pennsylvania State University University Park\n16802PAUSA\n"
] | [
"The Pennsylvania State University University Park\n16802PAUSA",
"The Pennsylvania State University University Park\n16802PAUSA",
"The Pennsylvania State University University Park\n16802PAUSA",
"The Pennsylvania State University University Park\n16802PAUSA"
] | [] | The theory of situated cognition postulates that language is inseparable from its physical context-words, phrases, and sentences must be learned in the context of the objects or concepts to which they refer. Yet, statistical language models are trained on words alone. This makes it impossible for language models to connect to the real world-the world described in the sentences presented to the model. In this paper, we examine the generalization ability of neural language models trained with a visual context. A multimodal connectionist language architecture based on the Differential State Framework is proposed, which outperforms its equivalent trained on language alone, even when no visual context is available at test time. Superior performance for language models trained with a visual context is robust across different languages and models. | null | [
"https://arxiv.org/pdf/1805.11546v1.pdf"
] | 44,157,579 | 1805.11546 | 73b023c2d0a5871c9ea4579624c8243c1dbc59ce |
Visually Grounded, Situated Learning in Neural Models

Alexander G. Ororbia, Ankur Mali, Matthew A. Kelly (matthew.kelly@psu.edu), David Reitter (reitter@psu.edu)
The Pennsylvania State University, University Park, PA 16802, USA
The theory of situated cognition postulates that language is inseparable from its physical context-words, phrases, and sentences must be learned in the context of the objects or concepts to which they refer. Yet, statistical language models are trained on words alone. This makes it impossible for language models to connect to the real world-the world described in the sentences presented to the model. In this paper, we examine the generalization ability of neural language models trained with a visual context. A multimodal connectionist language architecture based on the Differential State Framework is proposed, which outperforms its equivalent trained on language alone, even when no visual context is available at test time. Superior performance for language models trained with a visual context is robust across different languages and models.
Introduction
The theory of situated cognition postulates that a person's knowledge is inseparable from the physical or social context in which it is learned and used (Greeno and Moore, 1993). Knowledge of language cannot be separated from its physical context, which allows words and sentences to be learned by grounding them in reference to objects or natural concepts on hand (see Roy and Reiter, 2005, for a review). Nor can knowledge of language be separated from its social context, where language is learned interactively through communicating with others to facilitate problem-solving. Simply put, language does not occur in a vacuum.
Yet, statistical language models, typically connectionist systems, are often trained in such a vacuum. Sequences of symbols, such as sentences or phrases composed of words in any language, such as English or German, are often fed into the model independently of any real-world context they might describe. In the classical language modeling framework, a neural model is tasked with a series of next-step prediction tasks, learning to predict a word based on a history of words it has seen so far. While these models learn a great deal of linguistic structure from these symbol sequences alone, acquiring the essence of basic syntax, it is highly unlikely that this approach can create models that acquire much in terms of semantics or pragmatics, which are integral to the human experience of language. How might one build neural language models that "understand" the semantic content held within the symbol sequences, of any language, presented to it?
In this paper, we take a small step towards a model that understands language by training a neural architecture jointly on corresponding linguistic and visual data. From an image-captioning dataset, we create a multi-lingual corpus where sentences are mapped to the real-world images they describe. We ask how adding such real-world context at training can improve the performance of language models. We extend the ∆-RNN (Ororbia II et al., 2017), the Long Short Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) and the Gated Recurrent Unit (GRU; Cho et al., 2014) to incorporate visual context information, creating a unified multi-modal connectionist architecture. We find that the models acquire more knowledge of language than if they were trained without corresponding, real-world visual context.
Related Work
The Perceptual Symbol Systems theory holds that all of cognition, language, reasoning, and memory, is grounded in perceptual features (Barsalou, 1999). Both behavioral and neuroimaging studies have found considerable evidence for the contribution of perceptual information to linguistic tasks (Barsalou, 2008). Cognitive theory has long held that language is acquired jointly with perception through interaction with the environment (e.g., Frank et al., 2008). Cognitive models can account for bootstrapped learning of word meaning and syntax when language is paired with ambiguous and limited perceptual experience (Abend et al., 2017), and for the ability of children to rapidly acquire new words by inferring the referent from their physical environment (Alishahi et al., 2008).
A number of models of distributional semantics integrate word co-occurrence data extracted from a corpus with perceptual data, either to achieve a better model of language as it exists in the minds of humans (Kievit-Kylar and Jones, 2011;Johns and Jones, 2012) or to improve performance on machine learning tasks such as object recognition (Frome et al., 2013), image captioning (Kiros et al., 2014), or image search (Socher et al., 2014).
Integrating language and perception can facilitate language acquisition by allowing models to infer how a new word is used from the perceptual features of its referent (Johns and Jones, 2012). Likewise, this integration allows models to infer the perceptual features of an unobserved referent from how a word is used in language (Johns and Jones, 2012). As a result, language data can be used to improve object recognition by providing information about unobserved or infrequently observed objects (Frome et al., 2013).
By representing the referents of concrete nouns as arrangements of elementary visual features (Biederman, 1987), Kievit-Kylar and Jones (2011) find that the visual features of nouns capture semantic typicality effects, and that a combined representation, consisting of both visual features and word co-occurrence data, more strongly correlates with human judgments of semantic similarity than representations extracted from a corpus alone. While modeling similarity judgments is distinct from the problem of predictive language modeling, we take this finding as evidence that visual perception informs semantics, which suggests there are gains to be had integrating perception with predictive language models.
While knowledge of concrete nouns benefits most directly from integrating perceptual data with language, verbs also benefit, as the perceptual features of verbs can be inferred from the features of the nouns they act upon (Johns and Jones, 2012), such that a model with access to perceptual features gains the ability to discriminate between actions afforded by a verb and actions that are not afforded by the verb (e.g., hanging a coat on a vacuum versus a cup).
Image captioning systems (Kiros et al., 2014; Vinyals et al., 2015; Xu et al., 2015) have shown promising results in generating captions by mapping between vision and language. However, such models are restricted to a single language and can introduce irreversible corruption to the vision signal if trained jointly, since randomly initialized language parameters generate Gaussian noise that can harm the contextual interaction information. If a jointly trained vision and language model is trained on multiple languages, then each language introduces language-specific noise that would corrupt the visual information.
In contrast to prior work in machine learning, our goal in integrating visual and linguistic data is not to accomplish a task such as image search or image captioning that inherently requires a mapping between these two modalities. Rather, our goal is to demonstrate that perceptual information is intrinsic to how humans process language, and as such, a language model that is trained on both visual and linguistic data will be a better model, consistently across languages, than a model trained on linguistic data alone.
Prior work in cognitive modeling has focused on models of distributional semantics that capture the similarity relations between words (e.g. Johns and Jones, 2012; Kievit-Kylar and Jones, 2011), whereas the model we propose here is a predictive language model. Due to the ability of language models to probabilistically constrain input on the basis of preceding context and to classify linguistic material, these models play a central role in natural-language and speech processing applications. However, the psycholinguistic questions surrounding how people acquire and use linguistic knowledge are fundamentally different from the aims of machine learning. Using NLP-style language models to address psycholinguistic questions is a new approach that integrates well with the theory of predictive coding in cognitive psychology (Clark, 2013;Rao and Ballard, 1999). For language processing this means that when reading text or comprehending speech, humans constantly anticipate what will be said next. This is a fast, implicit cognitive process that does not require symbol manipulation, but that can make use of the kind of sequence learning that recurrent neural models excel at. We do not propose such models as direct accounts of human language processing. Instead, our intent is to examine what can and cannot be learned with the addition of a non-linguistic modality (vision) at training time.
The Multimodal Neural Architecture
In designing our neural model, we start from the Differential State Framework (DSF; Ororbia II et al., 2017), which unifies gated recurrent architectures under the general view that state memory is a simple parametrized mixture of "fast" and "slow" states. Our aim is to model sequences of symbols, such as the words that compose sentences, where at each time step t we process $x_t$, the one-hot encoding of a token.
One of the simplest models that can be derived from the DSF is the ∆-RNN, which has been shown to outperform most complex neural models in next-step symbol prediction tasks (Ororbia II et al., 2017). The model, with parameters $\Theta = \{W, U, V, b, c, b_r, \beta_1, \beta_2, \alpha\}$, is defined as:

$d^{rec}_t = V h_{t-1}, \quad d^{dat}_t = W e_{w,t}$ (1)
$d^1_t = \alpha \otimes d^{rec}_t \otimes d^{dat}_t$ (2)
$d^2_t = \beta_1 \otimes d^{rec}_t + \beta_2 \otimes d^{dat}_t$ (3)
$z_t = \phi_{hid}(d^1_t + d^2_t + b)$ (4)
$h_t = \Phi((1 - r) \otimes z_t + r \otimes h_{t-1})$, and (5)
$r = 1/(1 + \exp(-[d^{dat}_t + b_r]))$. (6)
where $e_{w,t}$ is the 1-of-k encoding of the word w at time t. Note that $\{\alpha, \beta_1, \beta_2\}$ are learnable bias vectors that modulate the internal multiplicative interactions, and the rate gate r reuses the computed pre-activation term $d^{dat}_t$. In contrast to the model originally trained in Ororbia II et al. (2017), the outer activation is the linear rectifier, $\Phi(v) = \max(0, v)$, instead of the identity or hyperbolic tangent, because we found that it worked much better. We set the inner activation function $\phi_{hid}(v)$ to be $\tanh(v) = (e^{2v} - 1)/(e^{2v} + 1)$. To integrate visual context information into the ∆-RNN, we fuse the model with a neural vision system, motivated by promising recent work done in automated image captioning (Xu et al., 2015). We adopt a transfer learning approach and incorporate a state-of-the-art convolutional neural network into the ∆-RNN model, namely the Inception-v3 network (Szegedy et al., 2016). (In preliminary experiments, we also examined VGGNet and a few other variations, but found that Inception worked best when it came to acquiring somewhat more general distributed representations of natural images.) The parameters of the vision network are fixed. As our focus is on language modeling and how the addition of visual context can improve neural network performance on the task, fixing the vision system prevents any noise from the language model from potentially corrupting the vision model and damaging its distributed representations. We leave learning the vision system jointly with the language model as future work.
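To make the state computation concrete, the following is a minimal NumPy sketch of a single ∆-RNN step following Equations 1-6. The dictionary-based parameter passing and the function name are illustrative conveniences, not the authors' implementation.

```python
import numpy as np

def delta_rnn_step(x_onehot, h_prev, params):
    """One Delta-RNN step (Eqs. 1-6); a minimal NumPy sketch."""
    W, V = params["W"], params["V"]               # input / recurrent weights
    b, b_r = params["b"], params["b_r"]           # inner-state and gate biases
    alpha, beta1, beta2 = params["alpha"], params["beta1"], params["beta2"]

    d_rec = V @ h_prev                            # Eq. 1
    d_dat = W @ x_onehot                          # Eq. 1 (1-of-k word encoding)
    d1 = alpha * d_rec * d_dat                    # Eq. 2 (elementwise products)
    d2 = beta1 * d_rec + beta2 * d_dat            # Eq. 3
    z = np.tanh(d1 + d2 + b)                      # Eq. 4, inner activation
    r = 1.0 / (1.0 + np.exp(-(d_dat + b_r)))      # Eq. 6, rate gate
    return np.maximum(0.0, (1.0 - r) * z + r * h_prev)  # Eq. 5, outer ReLU
```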
To obtain a distributed representation of an image from the Inception-v3 network, we extract the vector produced from the final max-pooling layer, c, after running an image through the model (note that this operation occurs right before the final, fully-connected processing layers which are usually task-specific parameters, such as in object classification). The ∆-RNN can make use of the information in this visual context vector if we modify its state computation in one of two ways. The first way would be to modify its inner state function to be a linear combination of the data-dependent pre-activation, the filtration, and a learned linear mapping of c as follows:
$z_t = \phi_{hid}(d^1_t + d^2_t + Mc + b)$ (7)
where M is a matrix of learnable synaptic connections that connects the visual context representation with the inner state. The second way to modify the ∆-RNN would be to change its outer mixing function instead:
$h_t = \Phi([(1 - r) \otimes z_t + r \otimes h_{t-1}] \otimes (Mc))$ (8)
Here, the linearly mapped visual context embedding interacts with the currently computed state through a multiplicative operation, allowing the visual context to persist and act in a longer-term capacity. In either case, using a parameter matrix M frees us from having to set the dimensionality of the hidden state to be the same as that of the context vector produced by the Inception-v3 network. We do not use regularization techniques with this model. The application of regularization techniques is, in principle, possible (and typically improves the performance of the ∆-RNN), but it is inappropriate and indeed damaging to performance in this particular case, where an already compressed and regularized representation of the images from Inception-v3 serves as input to the multimodal language modeling network.
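The two fusion options, Equations 7 and 8, differ only in where the mapped context Mc enters the computation. Below is a hedged NumPy sketch; the function names are ours, not from the paper.

```python
import numpy as np

def inner_fusion(d1, d2, b, M, c):
    # Eq. 7: the visual context enters the inner state function additively.
    return np.tanh(d1 + d2 + M @ c + b)

def outer_fusion(z, h_prev, r, M, c):
    # Eq. 8: the mapped context multiplicatively gates the mixed state.
    mixed = (1.0 - r) * z + r * h_prev
    return np.maximum(0.0, mixed * (M @ c))
```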
Let $w_1, \dots, w_T$ be a variable-length sequence of T words corresponding to an image I. In general, the distribution over the variables follows the graphical model:

$P_\Theta(w_1, \dots, w_T \mid I) = \prod_{t=1}^{T} P_\Theta(w_t \mid w_{<t}, I)$ (9)
For all model variants, the state $h_t$ calculated at any time step is fed into a maximum-entropy classifier (bias term omitted for clarity) defined as:

$P_\Theta(w \mid h_t) = \frac{\exp(w^{\top} U h_t)}{\sum_{w'} \exp(w'^{\top} U h_t)}$ (10)

The model parameters Θ are optimized with respect to the sequence negative log likelihood:
$\mathcal{L} = -\sum_{i=1}^{N} \sum_{t=1}^{T} \log P_\Theta(w_t \mid h_t)$ (11)
We employ back-propagation of errors, that is, we differentiate the negative log likelihood objective function above with respect to the parameters, to calculate the gradients needed for the updates.
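For reference, the objective of Equations 10-11 can be computed as below; this is a plain NumPy sketch with a numerically stabilised log-softmax, not the training code used in the paper.

```python
import numpy as np

def sequence_nll(logit_sequences, target_sequences):
    """Summed negative log likelihood over a batch of sequences (Eq. 11).

    logit_sequences: list of (T_i, |V|) arrays of unnormalised scores U h_t.
    target_sequences: list of length-T_i integer arrays of target word ids.
    """
    total = 0.0
    for logits, targets in zip(logit_sequences, target_sequences):
        # log-softmax per time step (Eq. 10), stabilised by the row maximum
        shifted = logits - logits.max(axis=1, keepdims=True)
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        total -= log_probs[np.arange(len(targets)), targets].sum()
    return total
```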
Experiments
The experiments in this paper were conducted using the MS-COCO image-captioning dataset (https://competitions.codalab.org/competitions/3221). Images in the dataset contain a significant amount of contextual information, and each image comes with five human-annotated captions. We extracted all five sentences from the dataset and created five different ground-truth splits. We translated the ground-truth splits into German and Spanish using the state-of-the-art Google Translation API. To our knowledge, this represents the first multi-lingual MS-COCO dataset on situated learning. We process the corpus at the word level and obtain a 16.6K vocabulary for English, 33.2K for German, and 18.2K for Spanish.
Table 1: Generalization performance of language models trained and evaluated on linguistic data only (L); full: trained and evaluated on multimodal linguistic and visual data (LV); blind: trained on multimodal data (LV) but evaluated on language only (L).

Our primary concern is with the next-step prediction of words/tokens, which means the negative log likelihood and perplexity of the learned generative model are of high importance. This is different from the goals of machine translation or image captioning, which, in most cases, are concerned with a ranking of possible captions, where one measures how similar the model's generated sequences are to ground-truth target phrases. Baseline results were obtained with neural language models of text alone. For the ∆-RNN, this meant implementing a model using only Equations 1-6. To verify that the experiment generalizes beyond the specific architecture chosen, a Gated Recurrent Unit (GRU; Cho et al., 2014) and a Long Short Term Memory (LSTM; Hochreiter and Schmidhuber, 1997) were also trained. We compare these symbol-only baselines to the two variations of our proposed multimodal ∆-RNN, as described in the previous section. The multimodal variant of the GRU, where the context information is directly integrated into its inner function, is defined as follows:
$d_c = Mc$ (12)
$z_t = \sigma(W_z x_t + V_z h_{t-1})$ (13)
$r_t = \sigma(W_r x_t + V_r h_{t-1})$ (14)
$\tilde{h}_t = \tanh(W_h x_t + V_h (r_t \otimes h_{t-1}))$ (15)
$h_t = [z_t \otimes h_{t-1} + (1 - z_t) \otimes \tilde{h}_t] \otimes d_c$ (16)
where we note that the parameter matrix M, which maps the visual context c into the GRU state, effectively gates the outer function. (In preliminary experiments, we tried both ways of integrating the visual context information, as proposed before in Equations 7 and 8; we ultimately found the second formulation, Equation 8, to give better performance.) The multimodal variant
of the LSTM (with peephole connections) is defined as follows:

$d_c = Mc$ (17)
$h_t = [r_t \otimes \Phi(c_t)] \otimes d_c$, where (18)
$r_t = \sigma(W_r x_t + V_r h_{t-1} + U_r c_t + b_r)$ (19)
$c_t = f_t \otimes c_{t-1} + i_t \otimes z_t$, where (20)
$z_t = \Phi(W_z x_t + V_z h_{t-1} + b_z)$ (21)
$i_t = \sigma(W_i x_t + V_i h_{t-1} + U_i c_{t-1} + b_i)$ (22)
$f_t = \sigma(W_f x_t + V_f h_{t-1} + U_f c_{t-1} + b_f)$ (23)
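As a concrete reference for the shared gating pattern, here is a NumPy sketch of one multimodal GRU step (Equations 12-16); the parameter-dictionary interface is an assumption made for brevity.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def multimodal_gru_step(x, h_prev, c, p):
    """Multimodal GRU step (Eqs. 12-16); NumPy sketch."""
    d_c = p["M"] @ c                                         # Eq. 12
    z = sigmoid(p["Wz"] @ x + p["Vz"] @ h_prev)              # Eq. 13, update gate
    r = sigmoid(p["Wr"] @ x + p["Vr"] @ h_prev)              # Eq. 14, reset gate
    h_cand = np.tanh(p["Wh"] @ x + p["Vh"] @ (r * h_prev))   # Eq. 15, candidate
    return (z * h_prev + (1.0 - z) * h_cand) * d_c           # Eq. 16, gated mix
```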
All models were trained to minimize the sequence loss of the sentences in the training split. The weight matrices of all models were initialized from the uniform distribution U(−0.1, 0.1), biases were initialized to zero, and the ∆-RNN-specific biases $\{\alpha, \beta_1, \beta_2\}$ were all initialized to one. Parameter updates calculated through back-propagation through time required unrolling the model over 49 steps in time. All symbol sequences were zero-padded and appropriately masked to ensure efficient mini-batching. Gradients were hard-clipped at a magnitude bound of l = 2.0. Over mini-batches of 32 samples, model parameters were optimized using simple stochastic gradient descent with a learning rate that starts at λ = 1.0 and is halved if the perplexity, measured at the end of each epoch, goes up three or more times.
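A sketch of this training schedule is given below. The model interface (loss_and_grads, apply_grads, perplexity) is hypothetical, and we read the clipping bound as elementwise clipping to [-2, 2]; both are assumptions rather than details confirmed by the paper.

```python
def train(model, train_batches, valid_batches, epochs=50):
    """Training-loop sketch: SGD with hard gradient clipping and lr halving."""
    lr, best_ppl, increases = 1.0, float("inf"), 0
    for _ in range(epochs):
        for batch in train_batches:                       # mini-batches of 32
            loss, grads = model.loss_and_grads(batch)     # BPTT over 49 steps
            grads = [g.clip(-2.0, 2.0) for g in grads]    # hard clip at l = 2.0
            model.apply_grads(grads, lr)                  # plain SGD update
        ppl = model.perplexity(valid_batches)             # end-of-epoch check
        if ppl > best_ppl:
            increases += 1
            if increases >= 3:                            # halve after 3 increases
                lr, increases = lr / 2.0, 0
        else:
            best_ppl = ppl
    return model
```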
To determine if our multimodal language model actually captures knowledge that is different from a text-only language model, we evaluate each model twice. First, we compute the model perplexity on the test set using the sentences' visual context vectors. Next, we compute the model perplexity on the test sentences by feeding in a null vector to the multimodal model as the visual context. If the model did truly pick up some semantic knowledge that is not exclusively dependent on the conditioned context vector, its perplexity in the second setting, while naturally worse than in the first setting, should still outperform the text-only baselines.
In Table 1, we report the model negative log likelihood (NLL) and per-word perplexity (PPL). PPL is a function of the NLL, and is simply calculated using the measure:

$PPL = \exp\left(-\frac{1}{N} \sum_{i=1}^{N} \sum_{t=1}^{T} \log P_\Theta(w_t \mid h_t)\right)$ (24)
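Given the summed NLL, Equation 24 reduces to a one-liner; `num_words` plays the role of the normaliser N here, under the assumption that N counts the predicted tokens (a sketch, not the evaluation script).

```python
import numpy as np

def perplexity(total_nll, num_words):
    # Eq. 24: per-word perplexity from the summed negative log likelihood.
    return float(np.exp(total_nll / num_words))
```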
We observe that in all cases the multimodal models outperform their respective text-only baselines. More importantly, the multimodal models, when evaluated without the Inception-v3 representations on held-out samples, still perform better than the text-only baselines. This improvement in generalization can be attributed to the visual context information given to the model in the training data, enriching its distributed representations over word sequences with knowledge of actual objects as provided by the Inception-v3 vision system. Figure 2 shows the validation perplexity of the various ∆-RNNs on each language as a function of the first 15 epochs of learning. We observe that throughout the learning process, the improvement in generalization afforded by the visual context c is persistent. Validation performance was also tracked for the various GRU and LSTM models, where the same trend was observed. We provide the plots for those models in the appendix.
Model Analysis
To further probe the differences between the text-only and multimodal models, we analyze the decoders of each. Specifically, we examine the parameter matrix U, which is directly involved in calculating the logits of the underlying generative model. U can essentially be thought of as "transposed embeddings", an idea that has also been exploited to introduce further regularization into the neural language model learning process (Press and Wolf, 2016; Inan et al., 2016). If we treat each row of this matrix (since we assume column-major orientation in implementation) as the learned embedding for a particular word, we can calculate its similarity to the embeddings of other words using cosine similarity.
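The nearest-neighbour queries of Table 2 can be reproduced with a few lines of NumPy; this sketch assumes `vocab` is a list mapping row indices of U to word strings.

```python
import numpy as np

def nearest_neighbors(U, vocab, query, k=10):
    """Rank words by cosine similarity between rows of the decoder matrix U."""
    idx = vocab.index(query)
    normed = U / np.linalg.norm(U, axis=1, keepdims=True)  # unit-length rows
    sims = normed @ normed[idx]                            # cosine similarities
    ranked = np.argsort(-sims)
    return [vocab[i] for i in ranked if i != idx][:k]
```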
In Table 2, we examine the top ten highest-ranked words given several query terms, using the decoder parameter matrix. By observing the different sets of nearest neighbours produced by the ∆-RNN and the MM-∆-RNN, we can see that the MM-∆-RNN appears to have learned to combine the visual context information with the token sequence information in its distributed representations. For example, in the case of "ocean", we see that while the ∆-RNN does associate some relevant terms, such as "surfing" and "beach", nearly all of the terms the MM-∆-RNN associates are relevant to the query. The same situation is observed for "kite" and "subway". In the case of "racket", while the text-only baseline does mostly seem to associate sports terms, especially sports equipment like "bat", the MM-∆-RNN is able to relate the query to the correct sport, "tennis".
Conditional Sampling
Another interesting way to see how the visual context information influences the neural language architecture is to sample from the learned conditional generative model. While image captioning generally focuses on ranking appropriate caption candidates, we intend to use the model to generate sentences using only the image for guidance. Sampling the learned generative model allows us to gauge whether the system can "explain", in some fashion, what it sees. Table 3 lists examples generated by the trained English model. Another sampling approach we implemented is beam search, where, iteratively, the m best sentences are picked at time t from a set of generated sentences of length t+1. We experimented with a beam of size 13; Table 3 shows captions generated using this specific beam search.
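A sketch of the beam-search procedure follows; `step_log_probs` is an assumed callback returning next-word log-probabilities for a prefix (e.g., from the model conditioned on an image), and is not part of the paper's interface.

```python
import numpy as np

def beam_search(step_log_probs, bos_id, eos_id, beam=13, max_len=20):
    """Keep the m best sentences at each time step (m = beam); NumPy sketch."""
    beams = [([bos_id], 0.0)]
    for _ in range(max_len):
        candidates = []
        for prefix, score in beams:
            if prefix[-1] == eos_id:          # finished hypotheses carry over
                candidates.append((prefix, score))
                continue
            log_p = step_log_probs(prefix)    # next-word log-probabilities
            for w in np.argsort(-log_p)[:beam]:
                candidates.append((prefix + [int(w)], score + float(log_p[w])))
        beams = sorted(candidates, key=lambda t: -t[1])[:beam]
    return beams[0][0]                        # highest-scoring sentence
```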
Discussion and Conclusions
We find that multi-modal neural models trained with a perceptual context are better at modeling language than models trained on language alone. Specifically, we find that augmenting a predictive language model with images that illustrate the sentences being learned enhances the ability of the model to make next-word predictions. This performance improvement persists even in situations devoid of visual representations, when the model is being used as a pure language model. This research is a step towards taking neural language models more seriously as cognitive and psycholinguistic models of the non-symbolic, implicit aspects of language representation. There is a great deal of evidence that something like a predictive language model exists in the human mind. Surprisal is a concept in psycholinguistics that refers to the degree of mismatch between what a human listener expected to be said next and what is actually said, such as when a garden path sentence forces the listener to abandon a partial, incremental parse (Hale, 2001). More generally, the idea of predictive coding holds that the mind forms expectations before perception occurs (see Clark, 2013, for a review). How these predictions are formed is unclear. Predictive language models trained with a generic neural architecture, without specific linguistic universals, are reasonable candidates for a model of predictive coding in language. This does not imply neuropsychological realism of the low-level representations or learning algorithms, and we cannot advocate for a specific neural architecture as being most plausible. However, we can show that an architecture that predicts linguistic input well learns better when its input mimics that of a human language learner.
In our (cognitive) view of language processing, we distinguish between symbolic language knowledge and processes that implement compositionality to produce semantics on the one hand, and implicit processes that leverage sequences and associations to produce expectations on the other. With respect to acquiring the latter model, we note that children are exposed to a rich sensory environment, and a more detailed one than the visual environment provided to our language model here. If even static visual input alone improves language acquisition, then what could a sensorily rich environment achieve? When a multimodal learner is considered, then, perhaps, the language acquisition stimulus that has famously been labeled as rather poor (Chomsky, 1959; Berwick et al., 2013) isn't so poor after all.
One direction for future work is to learn the visual architecture jointly with the language model. Error signals from the language model's backpropagation pathway can prove useful in tuning the multimodal model's ability to fuse information from the linguistic context and the image context. While our current architecture allows us to explore the visual grounding of human language, an architecture trained jointly on vision and language would allow us to also examine the theoretical influence of language on human visual perception.
Figure 1: The multimodal ∆-RNN, unrolled over time. The gray dashed connections represent the identity connections that carry over the slow-moving state, while the dash-dotted black lines represent the next-step predictions made by the model. Solid black lines correspond to synaptic weight matrices (labeled accordingly).

Figure 2: Comparison of learning curves for the ∆-RNNs in each language (English, German, Spanish).
Table 2: Decoder analysis: word query similarity test. Top ten nearest neighbours per query term, for the text-only ∆-RNN and the multimodal MM-∆-RNN.

Ocean — ∆-RNN: surfing, sandy, filled, beach, market, crowded, topped, plays, cross, snowy. MM-∆-RNN: boats, beach, pier, wetsuit, cloth, surfing, windsurfing, boardwalk, flying, biplane.
Kite — ∆-RNN: plane, kites, airplane, surfboard, planes, airplanes, boats, jet, aircraft, jets. MM-∆-RNN: kites, airplane, plane, airplanes, planes, airliner, helicopter, jets, biplane, jet.
Subway — ∆-RNN: train, passenger, railroad, trains, gas, commuter, trolley, locomotive, steam, it's. MM-∆-RNN: railroad, train, locomotive, trains, steam, gas, commuter, passenger, crowded, trolley.
Racket — ∆-RNN: bat, batter, catcher, skateboard, umpire, soccer, women, pedestrians, players, uniform. MM-∆-RNN: bat, players, batter, swing, catcher, hitter, ball, umpire, tennis, tatoos.
Table 3: Some captions generated by the multimodal ∆-RNN in English.

Image 1: a skateboarder and person in front of skyscrapers. / a person with skateboarder on air. / a person doing a trick with skateboarder. / a person with camera with blue background.
Image 2: a food bowl on the table. / a bowl full of food on the table. / a green and red bowl on the table. / a salad bowl with chicken.
Image 3: a dog on blue bed with blanket. / a dog sleeps near wooden table. / a dog sleeps on a bed. / a dog on some blue blankets.
In addition to Figure 2 in the main paper, we also show the learning curves for all models experimented with in this paper beyond the ∆-RNN. Validation learning curves are provided for the GRU and LSTM language models, in both multimodal and unimodal variations.
Omri Abend, Tom Kwiatkowski, Nathaniel J. Smith, Sharon Goldwater, and Mark Steedman. 2017. Bootstrapping language acquisition. Cognition 164:116-143. https://doi.org/10.1016/j.cognition.2017.02.009
Afra Alishahi, Afsaneh Fazly, and Suzanne Stevenson. 2008. Fast mapping in word learning: What probabilities tell us. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, pages 57-64. Association for Computational Linguistics.
Lawrence W. Barsalou. 1999. Perceptions of perceptual symbols. Behavioral and Brain Sciences 22(4):637-660.
Lawrence W. Barsalou. 2008. Grounded cognition. Annual Review of Psychology 59:617-645.
Robert C. Berwick, Noam Chomsky, and Massimo Piattelli-Palmarini. 2013. Poverty of the stimulus stands: Why recent challenges fail. In Rich Languages from Poor Inputs, pages 19-42.
Irving Biederman. 1987. Recognition-by-components: A theory of human image understanding. Psychological Review 94(2):115.
Kyunghyun Cho, Bart van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Noam Chomsky. 1959. A review of B. F. Skinner's Verbal Behavior. Language 35(1):26-58.
Andy Clark. 2013. Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences 36(3):181-204.
Michael C. Frank, Noah D. Goodman, and Joshua B. Tenenbaum. 2008. A Bayesian framework for cross-situational word-learning. In Advances in Neural Information Processing Systems, pages 457-464.
Andrea Frome, Greg S. Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. 2013. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121-2129.
James G. Greeno and Joyce L. Moore. 1993. Situativity and symbols: Response to Vera and Simon. Cognitive Science 17(1):49-59.
John Hale. 2001. A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, Pittsburgh, PA, pages 1-8.
Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9(8):1735-1780. https://doi.org/10.1162/neco.1997.9.8.1735
Hakan Inan, Khashayar Khosravi, and Richard Socher. 2016. Tying word vectors and word classifiers: A loss framework for language modeling. arXiv preprint arXiv:1611.01462.
Brendan T. Johns and Michael N. Jones. 2012. Perceptual inference through global lexical similarity. Topics in Cognitive Science 4(1):103-120.
Brent Kievit-Kylar and Michael Jones. 2011. The semantic pictionary project. In Proceedings of the Annual Meeting of the Cognitive Science Society, volume 33.
Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. arXiv preprint arXiv:1411.2539.
Alexander G. Ororbia II, Tomas Mikolov, and David Reitter. 2017. Learning simpler language models with the differential state framework. Neural Computation.
Ofir Press and Lior Wolf. 2016. Using the output embedding to improve language models. arXiv preprint arXiv:1608.05859.
Rajesh P. N. Rao and Dana H. Ballard. 1999. Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience 2(1):79.
Deb Roy and Ehud Reiter. 2005. Connecting language to the world. Artificial Intelligence 167(1-2):1-12.
Richard Socher, Andrej Karpathy, Quoc V. Le, Christopher D. Manning, and Andrew Y. Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics 2(1):207-218.
Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the Inception architecture for computer vision. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2818-2826.
Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Computer Vision and Pattern Recognition (CVPR), 2015 IEEE Conference on, pages 3156-3164. IEEE.
Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhudinov, Rich Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention. In International Conference on Machine Learning, pages 2048-2057.
| [] |
[
"Multidirectional Associative Optimization of Function-Specific Word Representations",
"Multidirectional Associative Optimization of Function-Specific Word Representations"
] | [
"Daniela Gerz ",
"♠ ",
"Ivan Vulić ",
"Marek Rei marek.rei@imperial.ac.uk \nDepartment of Computing\nFaculty of Industrial Engineering and Management\nImperial College London\nTechnionIIT\n",
"Roi Reichart ",
"Anna Korhonen ",
"\nLanguage Technology Lab\nUniversity of Cambridge ♦ PolyAI Limited\nLondon\n"
] | [
"Department of Computing\nFaculty of Industrial Engineering and Management\nImperial College London\nTechnionIIT",
"Language Technology Lab\nUniversity of Cambridge ♦ PolyAI Limited\nLondon"
] | [
"Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics"
] | We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures. Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together. The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure. We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference and event similarity. The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%. | 10.18653/v1/2020.acl-main.257 | [
"https://www.aclweb.org/anthology/2020.acl-main.257.pdf"
] | 218,581,798 | 2005.05264 | c5b39af6f4a463c36c0cc3b0a82527ed252b83a6 |
Multidirectional Associative Optimization of Function-Specific Word Representations

Daniela Gerz, Ivan Vulić, Marek Rei (marek.rei@imperial.ac.uk), Roi Reichart, Anna Korhonen
♠ Language Technology Lab, University of Cambridge; ♦ PolyAI Limited, London; Department of Computing, Imperial College London; Faculty of Industrial Engineering and Management, Technion, IIT

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, July 5-10, 2020, page 2872. © 2020 Association for Computational Linguistics
We present a neural framework for learning associations between interrelated groups of words such as the ones found in Subject-Verb-Object (SVO) structures. Our model induces a joint function-specific word vector space, where vectors of e.g. plausible SVO compositions lie close together. The model retains information about word group membership even in the joint space, and can thereby effectively be applied to a number of tasks reasoning over the SVO structure. We show the robustness and versatility of the proposed framework by reporting state-of-the-art results on the tasks of estimating selectional preference and event similarity. The results indicate that the combinations of representations learned with our task-independent model outperform task-specific architectures from prior work, while reducing the number of parameters by up to 95%.
Introduction
Word representations are in ubiquitous usage across all areas of natural language processing (NLP) (Collobert et al., 2011; Chen and Manning, 2014; Melamud et al., 2016). Standard approaches rely on the distributional hypothesis (Harris, 1954; Schütze, 1993) and learn a single word vector space based on word co-occurrences in large text corpora (Mikolov et al., 2013b; Pennington et al., 2014; Bojanowski et al., 2017). This purely context-based training produces general word representations that capture the broad notion of semantic relatedness and conflate a variety of possible semantic relations into a single space (Hill et al., 2015; Schwartz et al., 2015). However, this mono-faceted view of meaning is a well-known deficiency in NLP applications (Faruqui, 2016; Mrkšić et al., 2017) as it fails to distinguish between fine-grained word associations.

Figure 1: The space is optimised such that vectors for plausible SVO compositions will be close. Note that one word can have several vectors; for example, chicken can occur both as S and O.

In this work we propose to learn a joint function-specific word vector space that accounts for the
In this work we propose to learn a joint functionspecific word vector space that accounts for the ). The space is optimised such that vectors for plausible SVO compositions will be close. Note that one word can have several vectors, for example chicken can occur both as S and O. different roles and functions a word can take in text. The space can be trained for a specific structure, such as SVO, and each word in a particular role will have a separate representation. Vectors for plausible SVO compositions will then be optimized to lie close together, as illustrated by Figure 1. For example, the verb vector study will be close to plausible subject vectors researcher or scientist and object vectors subject or art. For words that can occur as either subject or object, such as chicken, we obtain separate vectors for each role: one for chicken as subject and another for chicken as object. The resulting representations capture more detailed associations in addition to basic distributional similarity and can be used to construct representations for the whole SVO structure.
To validate the effectiveness of our representation framework in language applications, we focus on modeling a prominent linguistic phenomenon: a general model of who does what to whom (Gell-Mann and Ruhlen, 2011). In language, this event understanding information is typically captured by the SVO structures and, according to the cognitive science literature, is well aligned with how humans process sentences (McRae et al., 1997, 1998; Grefenstette and Sadrzadeh, 2011a); it reflects the likely distinct storage and processing of objects (typically nouns) and actions (typically verbs) in the brain (Caramazza and Hillis, 1991; Damasio and Tranel, 1993).
The quantitative results are reported on two established test sets for compositional event similarity (Grefenstette and Sadrzadeh, 2011a). This task requires reasoning over SVO structures and quantifies the plausibility of SVO combinations by scoring them against human judgments. We report consistent gains over established word representation methods, as well as over two recent tensor-based architectures (Tilk et al., 2016; Weber et al., 2018) which are designed specifically for solving the event similarity task.
Furthermore, we investigate the generality of our approach by also applying it to other types of structures. We conduct additional experiments in a 4-role setting, where indirect objects are also modeled, along with a selectional preference evaluation of 2-role SV and VO relationships (Chambers and Jurafsky, 2010;Van de Cruys, 2014), yielding the highest scores on several established benchmarks.
Background and Motivation
Representation Learning. Standard word representation models such as skip-gram negative sampling (SGNS) (Mikolov et al., 2013b,a), GloVe (Pennington et al., 2014), or FastText (Bojanowski et al., 2017) induce a single word embedding space capturing broad semantic relatedness (Hill et al., 2015). For instance, SGNS makes use of two vector spaces for this purpose, which are referred to as $A_w$ and $A_c$. SGNS has been shown to approximately correspond to factorising a matrix $M = A_w A_c^{\top}$, where elements in M represent the co-occurrence strengths between words and their context words (Levy and Goldberg, 2014b). Both matrices represent the same vocabulary: therefore, only one of them is needed in practice to represent each word. Typically only $A_w$ is used while $A_c$ is discarded, or the two vector spaces are averaged to produce the final space.
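The implicit factorisation can be illustrated in two lines; indices w and c pick rows of the word and context spaces, and the sigmoid link is the standard SGNS reading (a toy sketch, not the original implementation):

```python
import numpy as np

def sgns_pair_probability(A_w, A_c, w, c):
    """P that (w, c) is a true co-occurrence: sigmoid of entry (w, c) of M = A_w A_c^T."""
    score = A_w[w] @ A_c[c]
    return 1.0 / (1.0 + np.exp(-score))
```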
Levy and Goldberg (2014a) used dependencybased contexts, resulting in two separate vector spaces; however, the relation types were embedded into the vocabulary and the model was trained only in one direction. Camacho-Collados et al. (2019) proposed to learn separate sets of relation vectors in addition to standard word vectors and showed that such relation vectors encode knowledge that is often complementary to what is coded in word vectors. Rei et al. (2018) and Vulić and Mrkšić (2018) described related task-dependent neural nets for mapping word embeddings into relation-specific spaces for scoring lexical entailment. In this work, we propose a task-independent approach and extend it to work with a variable number of relations.
Neuroscience. Theories from cognitive linguistics and neuroscience reveal that single-space representation models fail to adequately reflect the organisation of semantic concepts in the human brain (i.e., semantic memory): there seems to be no single semantic system indifferent to modalities or categories in the brain (Riddoch et al., 1988). Recent fMRI studies strongly support this proposition and suggest that semantic memory is in fact a widely distributed neural network (Davies et al., 2009; Huth et al., 2012; Pascual et al., 2015; Rice et al., 2015; de Heer et al., 2017), where sub-networks might activate selectively or more strongly for a particular function such as modality-specific or category-specific semantics (such as objects/actions, abstract/concrete, animate/inanimate, animals, fruits/vegetables, colours, body parts, countries, flowers, etc.) (Warrington, 1975; Warrington and McCarthy, 1987; McCarthy and Warrington, 1988). This indicates a function-specific division of lower-level semantic processing. Single-space distributional word models have been found to partially correlate to these distributed brain activity patterns (Huth et al., 2012, 2016; Anderson et al., 2017), but fail to explain the full spectrum of fine-grained word associations humans are able to make. Our work has been partly inspired by this literature.
Compositional Distributional Semantics. Partially motivated by similar observations, prior work frequently employs tensor-based methods for composing separate tensor spaces (Coecke et al., 2010): there, syntactic categories are often represented by tensors of different orders based on assumptions on their relations. One fundamental difference is made between atomic types (e.g., nouns) versus compositional types (e.g., verbs). Atomic types are seen as standalone: their meaning is independent from other types. On the other hand, verbs are compositional as they rely on their subjects and objects for their exact meaning. Due to this added complexity, the compositional types are often represented with more parameters than the atomic types, e.g., with a matrix instead of a vector. The goal is then to compose constituents into a semantic representation which is independent of the underlying grammatical structure. Therefore, a large body of prior work is concerned with finding appropriate composition functions (Grefenstette and Sadrzadeh, 2011a,b;Kartsaklis et al., 2012;Milajevs et al., 2014) to be applied on top of word representations. Since this approach represents different syntactic structures with tensors of varying dimensions, comparing syntactic constructs is not straightforward. This compositional approach thus struggles with transferring the learned knowledge to downstream tasks.
State-of-the-art compositional models (Tilk et al., 2016;Weber et al., 2018) combine similar tensor-based approaches with neural training, leading to task-specific compositional solutions. While effective for a task at hand, the resulting models rely on a large number of parameters and are not robust: we observe deteriorated performance on other related compositional tasks, as shown in Section 6.
Multivariable (SVO) Structures in NLP.
Modeling SVO-s is important for tasks such as compositional event similarity using all three variables, and thematic fit modeling based on SV and VO associations separately. Traditional solutions are typically based on clustering of word co-occurrence counts from a large corpus (Baroni and Lenci, 2010; Greenberg et al., 2015a,b; Emerson and Copestake, 2016). More recent solutions combine neural networks with tensor-based methods. Van de Cruys (2014) present a feedforward neural net trained to score compositions of both two and three groups with a max-margin loss. Grefenstette and Sadrzadeh (2011a,b); Kartsaklis and Sadrzadeh (2014); Milajevs et al. (2014); Edelstein and Reichart (2016) employ tensor compositions on standard single-space word vectors. Hashimoto and Tsuruoka (2016) discern compositional and non-compositional phrase embeddings starting from HPSG-parsed data.

Objectives. We propose to induce function-specific vector spaces which enable a better model of associations between concepts and consequently improved event representations, by encoding the relevant information directly into the parameters of each word during training. Word vectors offer several advantages over tensors: a large reduction in parameters and fixed dimensionality across concepts. This facilitates their reuse and transfer across different tasks. For this reason, we find our multidirectional training to deliver good performance: the same function-specific vector space achieves state-of-the-art scores across multiple related tasks, previously held by task-specific models.
Function-specific Representation Space
Our goal is to model the mutual associations (co-occurrences) between N groups of words, where each group represents a particular role, such as subject or object in an SVO structure. We induce an embedding matrix $R^{|V_i| \times d}$ for every group $i = 1, \ldots, N$, where $|V_i|$ corresponds to the vocabulary size of the i-th group; the group vocabularies can partially overlap. For consistency, the vector dimensionality $d$ is kept equal across all variables.
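This setup can be made concrete with a short sketch. The following is a minimal illustration, not the authors' code: one d-dimensional embedding matrix per group, with hypothetical vocabulary sizes for the S, V, and O groups.

```python
# Minimal sketch of function-specific embedding matrices:
# one matrix in R^{|V_i| x d} per group, with a shared dimensionality d.
import numpy as np

d = 25  # vector dimensionality, kept equal across all groups
vocab_sizes = {"S": 30000, "V": 5000, "O": 30000}  # assumed, illustrative sizes

rng = np.random.default_rng(0)
embeddings = {
    group: rng.normal(scale=0.1, size=(size, d))
    for group, size in vocab_sizes.items()
}

# Looking up the vector for word index 42 in the subject space:
s_vec = embeddings["S"][42]
```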
Multiple Groups. Without loss of generality, we present a model which creates a function-specific vector space for N = 3 groups, referring to those groups as A, B, and C. Note that the model is not limited to this setup, as we show later in Section 6. A, B, and C might be interrelated phenomena, and we aim for a model which can reliably score the plausibility of combining three vectors (a, b, c) taken from this space. In addition to the full joint prediction, we aim for any two vector combinations (A-B, B-C, C-A) to have plausible scores of their own. Observing relations between words inside single-group subspaces (A, B, or C) is another desirable feature.

Directionality. To design a solution with the necessary properties, we first need to consider the influence of prediction directionality in representation learning. A representation model such as SGNS (Mikolov et al., 2013a,b) learns two vectors for each word in one large vocabulary: one vector on the input side (word vector), another on the output side (context vector), with only the input word vectors being commonly used (Levy and Goldberg, 2014b). Here, we require several distinct vocabularies (i.e., three, one each for groups A, B, and C).
Instead of context vectors, we train the model to predict words from another group, hence directionality is an important consideration. We find that prediction directionality has a strong impact on the quality of the induced representations, and illustrate this effect on an example that is skewed extremely to one side: an n:1 assignment case. Assume data of two groups, where each word of group $A_1$ is assigned to exactly one of three clusters in group $B_3$. We expect a function-specific word vector space customised for this purpose to show three clearly separated clusters. Figure 2 visualises the obtained representations.¹ Figure 2a plots the vector space when we use words on the input side of the model and predict the cluster: $A_1 \to B_3$; this can be seen as an n:1 assignment. In the opposite direction ($B_3 \to A_1$, a 1:n assignment) we do not observe the same trends (Figure 2b).
Representations for other and more complex phenomena suffer from the same issue. For example, the verb eat can take many arguments corresponding to various food items such as pizza, beans, or kimchi. A more specific verb such as embark might take only a few arguments such as journey, whereas journey might be fairly general and can co-occur with many other verbs. We thus effectively deal with an n:m assignment case, which might lean towards 1:n or n:1 depending entirely on the words in question. Therefore, it is unclear whether one should construct a model predicting verb → object or object → verb. We resolve this fundamental design question by training representations in a multidirectional way with a joint loss function. Figure 2c shows how this method learns accurately clustered representations without having to make directionality assumptions.
Multidirectional Synchronous Representation Learning
The multidirectional neural representation learning model takes a list of N groups of words $(G_1, G_2, \ldots, G_N)$, factorises it into all possible "group-to-group" sub-models, and trains them jointly by combining objectives based on skip-gram negative sampling (Mikolov et al., 2013a,b). We learn a joint function-specific word vector space by using sub-networks that each consume one group $G_i$ on the input side and predict words from a second group $G_j$ on the output side, $i, j = 1, \ldots, N;\ i \neq j$. All sub-network losses are tied into a single joint loss, and all groups $G_1, \ldots, G_N$ are shared between the sub-networks.
Sub-Network Architecture. We first factorise groups into sub-networks representing all possible directions of prediction. Two groups lead to two sub-networks, $A \to B$ and $B \to A$; three groups lead to six sub-networks. Similar to Mikolov et al. (2013a,b), we calculate the dot product between two word vectors to quantify their association. For instance, the sub-network $A \to B$ computes its prediction:
$$P_{A \to B} = \sigma(\vec{a} \cdot B_e^{\top} + b_{ab}) \quad (1)$$

where $\vec{a}$ is a word vector from the input group A, $B_e$ is the word embedding matrix for the target group B, $b_{ab}$ is a bias vector, and $\sigma$ is the sigmoid function. The loss of each sub-network is computed using cross-entropy between this prediction and the correct labels:

$$L_{A \to B} = \mathrm{cross\_entropy}(P_{A \to B}, \mathbf{L}_{A \to B}) \quad (2)$$

where $\mathbf{L}_{A \to B}$ are one-hot vectors corresponding to the correct predictions. We leave experiments with more sophisticated sub-networks for future work.
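As a rough illustration of Eqs. (1)-(2), the following PyTorch sketch scores one direction (A → B) with shared embedding matrices and a bias vector. All names, sizes, and initialisations are our own assumptions, not the paper's implementation.

```python
# Sketch of one sub-network (A -> B): sigmoid-scored dot products against
# all target-group vectors (Eq. 1), then cross-entropy against one-hot labels (Eq. 2).
import torch
import torch.nn.functional as F

d, vocab_a, vocab_b = 25, 30000, 5000  # assumed sizes
emb_a = torch.nn.Parameter(torch.randn(vocab_a, d) * 0.1)  # shared matrix for group A
emb_b = torch.nn.Parameter(torch.randn(vocab_b, d) * 0.1)  # shared matrix for group B
bias_ab = torch.nn.Parameter(torch.zeros(vocab_b))

def loss_a_to_b(a_ids, b_ids):
    """a_ids, b_ids: LongTensors of word indices in groups A and B."""
    a = emb_a[a_ids]                       # (batch, d) input word vectors
    logits = a @ emb_b.t() + bias_ab       # dot products with all of group B
    p = torch.sigmoid(logits)              # Eq. (1)
    labels = F.one_hot(b_ids, vocab_b).float()
    return F.binary_cross_entropy(p, labels)  # Eq. (2)
```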
Synchronous Joint Training. We integrate all sub-networks into one joint model via the two following mechanisms:
(1) Shared Parameters. The three embedding matrices referring to groups A, B, and C are shared across all sub-networks. That is, we train one matrix per group, regardless of whether it is employed on the input or the output side of any sub-network. This leads to a substantial reduction in model size: for example, with a vocabulary of 50,000 words and 25-dimensional vectors we work with only 1.35M parameters. Comparable models for the same tasks are trained with much larger sets of parameters: 26M, or even up to 179M when not factorised (Tilk et al., 2016). Our modeling approach can thus achieve a more than 95% reduction in the number of parameters.
(2) Joint Loss. We also train all sub-networks with a single joint loss and a single backward pass. We refer to this manner of joining the losses as synchronous: it synchronises the backward pass of all sub-networks. This can also be seen as a form of multi-task learning, where each sub-network optimises the shared parameters for a different task (Ruder, 2017). In practice, we perform a forward pass in each direction separately, then join all sub-network cross-entropy losses and backpropagate this joint loss through all sub-networks in order to update the parameters. The different losses are combined using addition:
$$L = \sum_{\mu} L_{\mu} \quad (3)$$
where $\mu$ iterates over all possible sub-networks, $L_{\mu}$ is the corresponding loss from one sub-network, and $L$ is the overall joint loss. When focusing on SVO structures, the model learns one joint space for the three groups of embeddings (one each for S, V, and O). The six sub-networks all share parameters, and optimisation is performed using the joint loss:

$$L = L_{S \to V} + L_{V \to S} + L_{V \to O} + L_{O \to V} + L_{S \to O} + L_{O \to S} \quad (4)$$
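A sketch of the synchronous joint step, assuming each direction exposes a loss function like the one shown earlier: the six directional losses are summed into a single scalar and backpropagated once, so all shared matrices receive updates together.

```python
# Synchronous joint training (Eqs. 3-4): one summed loss, one backward pass.
# `loss_fn(src, tgt, batch)` stands in for a direction-specific loss
# such as loss_a_to_b above; names here are illustrative.
directions = [("S", "V"), ("V", "S"), ("V", "O"),
              ("O", "V"), ("S", "O"), ("O", "S")]

def joint_step(batch, loss_fn, optimizer):
    optimizer.zero_grad()
    total = sum(loss_fn(src, tgt, batch) for src, tgt in directions)  # Eq. (4)
    total.backward()   # single synchronised backward pass through all sub-networks
    optimizer.step()
    return total.item()
```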
The vectors from the induced function-specific space can then be composed by standard composition functions (Milajevs et al., 2014) to yield event representations (Weber et al., 2018), that is, representations for the full SVO structure.
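For concreteness, the simplest of these composition functions can be written as follows; this is an illustrative sketch rather than the exact implementation, and Table 4 lists the full set of compositions used.

```python
# Simple composition functions for building an event vector
# from the (s, v, o) function-specific word vectors.
import numpy as np

def compose_addition(s, v, o):
    return s + v + o

def compose_multiplication(s, v, o):
    return s * v * o            # element-wise multiplication

def compose_concat(s, v, o):
    return np.concatenate([s, v, o])
```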
Evaluation
Preliminary Task: Pseudo-Disambiguation. In the first evaluation, we adopt a standard pseudo-disambiguation task from the selectional preference literature (Rooth et al., 1999; Bergsma et al., 2008; Erk et al., 2010; Chambers and Jurafsky, 2010; Van de Cruys, 2014). For the three-group (S-V-O) case, the task is to score a true triplet (i.e., the (S-V-O) structure attested in the corpus) above all corrupted triplets (S-V'-O), (S'-V-O), (S-V-O'), where S', V', and O' denote subjects, verbs, and objects randomly drawn from their respective vocabularies. Similarly, for the two-group setting, the task is to express a higher preference towards the attested pairs (V-O) or (S-V) over corrupted pairs (V-O') or (S'-V). We report accuracy scores, i.e., we count all items where score(true) > score(corrupted). This simple pseudo-disambiguation task serves as a preliminary sanity check: it can be easily applied to a variety of training conditions with different variables. However, as pointed out by Chambers and Jurafsky (2010), the performance on this task is strongly influenced by a number of factors such as vocabulary size and the procedure for constructing corrupted examples. Therefore, we additionally evaluate our models on a number of other established datasets.

Event Similarity (3 Variables: SVO). A standard task to measure the plausibility of SVO structures (i.e., events) is event similarity (Grefenstette and Sadrzadeh, 2011a; Weber et al., 2018): the goal is to score similarity between SVO triplet pairs and correlate the similarity scores with human-elicited similarity judgements. Robust and flexible event representations are important to many core areas in language understanding such as script learning, narrative generation, and discourse understanding (Chambers and Jurafsky, 2009; Pichotta and Mooney, 2016; Modi, 2016; Weber et al., 2018). We evaluate event similarity on two benchmarking datasets: GS199 (Grefenstette and Sadrzadeh, 2011a) and KS108 (Kartsaklis and Sadrzadeh, 2014). GS199 contains 199 pairs of SVO triplets/events. In the GS199 dataset only the V is varied, while S and O are fixed in the pair: this evaluation prevents the model from relying only on simple lexical overlap for similarity computation.² KS108 contains 108 event pairs for the same task, but is specifically constructed without any lexical overlap between the events in each pair.
For this task, the function-specific representations are composed into a single event representation (vector). Following prior work, we compare the cosine similarity of event vectors to averaged human scores and report Spearman's ρ correlation. We compose the function-specific word vectors into event vectors using simple addition and multiplication, as well as more sophisticated compositions from prior work (Milajevs et al., 2014, inter alia). A summary of the composition functions is provided in Table 4.
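The evaluation protocol just described amounts to the following sketch, where `pairs` and `human_scores` stand in for a dataset such as GS199 or KS108 (placeholder names, not a real data loader).

```python
# Event-similarity evaluation: compose each SVO triplet into an event vector,
# take the cosine similarity of the two event vectors in a pair, and correlate
# the model scores with human judgements (Spearman's rho).
import numpy as np
from scipy.stats import spearmanr

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def event_similarity_eval(pairs, human_scores, compose):
    """pairs: list of ((s1, v1, o1), (s2, v2, o2)) vector triplets."""
    model_scores = [cosine(compose(*e1), compose(*e2)) for e1, e2 in pairs]
    rho, _ = spearmanr(model_scores, human_scores)
    return rho
```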
Thematic-Fit Evaluation (2 Variables: SV and VO). Similarly to the 3-group setup, we also evaluate the plausibility of SV and VO pairs separately in the 2-group setup. The selectional preference evaluation, also referred to as thematic fit, quantifies the extent to which a noun fulfils the selectional preference of a verb given a role (i.e., agent: S, or patient: O) (McRae et al., 1997). We evaluate our 2-group function-specific
spaces on two standard benchmarks: 1) MST1444 (McRae et al., 1998), which contains 1,444 word pairs where humans provided thematic-fit ratings on a scale from 1 to 7 for each noun, scoring the plausibility of the noun taking the agent role and also taking the patient role;³ 2) PADO414 (Padó, 2007), which is similar to MST1444 and contains 414 pairs with human thematic-fit ratings, where role-filling nouns were selected to reflect a wide distribution of scores for each verb. We compute plausibility by simply taking the cosine similarity between the verb vector (from the V space) and the noun vector from the appropriate function-specific space (S space for agents; O space for patients). We again report Spearman's ρ correlation scores.
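The thematic-fit score itself reduces to a single cosine similarity between two function-specific vectors; a minimal sketch with illustrative variable names follows.

```python
# Thematic fit: cosine similarity between the verb vector and the noun vector
# taken from the role-appropriate space (S for agents, O for patients).
import numpy as np

def thematic_fit(verb_vec, noun_vec):
    return verb_vec @ noun_vec / (
        np.linalg.norm(verb_vec) * np.linalg.norm(noun_vec))

# e.g., plausibility of "cat" as the agent of "eat" (hypothetical lookups):
# score = thematic_fit(V_space["eat"], S_space["cat"])
```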
Training Data. We parse the ukWaC corpus (Baroni et al., 2009) and the British National Corpus (BNC) (Leech, 1992) using the Stanford Parser with Universal Dependencies v1.4 (Chen and Manning, 2014; Nivre et al., 2016) and extract co-occurring subjects, verbs, and objects. All words are lowercased and lemmatised, and tuples containing non-alphanumeric characters are excluded. We also remove tuples with (highly frequent) pronouns as subjects, and filter out training examples containing words with frequency lower than 50. After preprocessing, the final training corpus comprises 22M SVO triplets in total. Table 2 additionally shows training data statistics for the 2-group setup (SV and VO) and the 4-group setup (adding indirect objects: SVO+iO). We report the number of examples in training and test sets, as well as vocabulary sizes and the most frequent words across different categories.
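A simplified version of this filtering step might look as follows; the pronoun list and the exact checks are assumptions, since the full pipeline (parsing and lemmatisation) is not reproduced here.

```python
# Illustrative filtering of extracted (subject, verb, object) triplets:
# drop non-alphanumeric tokens, pronoun subjects, and low-frequency words.
from collections import Counter

PRONOUNS = {"i", "you", "he", "she", "it", "we", "they"}  # assumed list

def filter_triplets(triplets, min_freq=50):
    freq = Counter(w for t in triplets for w in t)
    return [
        (s, v, o) for s, v, o in triplets
        if all(w.isalnum() for w in (s, v, o))      # exclude non-alphanumeric
        and s not in PRONOUNS                        # drop pronoun subjects
        and all(freq[w] >= min_freq for w in (s, v, o))
    ]
```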
Hyperparameters. We train with batch size 128 and use Adam for optimisation (Kingma and Ba, 2015) with a learning rate of 0.001. All gradients are clipped to a maximum norm of 5.0. All models were trained with the same fixed random seed. We train 25-dimensional vectors for all setups (2/3/4 groups), and additionally train 100-dimensional vectors for the 3-group (SVO) setup.
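In, e.g., PyTorch, this training configuration corresponds roughly to the following sketch; the seed value and the stand-in parameter list are assumptions.

```python
# Optimiser configuration matching the stated hyperparameters:
# Adam with lr 0.001 and gradient clipping at max norm 5.0.
import torch

torch.manual_seed(0)  # fixed random seed; the exact value is not reported
params = [torch.nn.Parameter(torch.randn(100, 25))]  # stand-in for shared matrices
optimizer = torch.optim.Adam(params, lr=0.001)

def training_step(loss):
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(params, max_norm=5.0)  # clip before the update
    optimizer.step()
```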
Results and Analysis
Pseudo-Disambiguation. Accuracy scores on the pseudo-disambiguation task in the 2/3/4-group setups are summarised in Table 3.⁴ We find consistently high pseudo-disambiguation scores (>0.94) across all setups. In a more detailed analysis, we find the prediction accuracy for verbs to be especially high: we report an accuracy of 96.9% for the 3-group SVO model. The vocabulary size for verbs is typically the lowest (see Table 2), which presumably makes predictions in this direction easier. In summary, as mentioned in Section 5, this initial evaluation already suggests that our model is able to capture associations between interrelated groups, which is instrumental to modeling SVO structures and composing event representations.
Event Similarity. We now test correlations of SVO-based event representations composed from a function-specific vector space (see Table 4) to human scores in the event similarity task. A summary of the main results is provided in Table 5. We also report the best baseline scores from prior work. The main finding is that our model based on function-specific word vectors outperforms previous state-of-the-art scores on both datasets. It is crucial to note that different modeling approaches and configurations from prior work held the previous peak scores on the two evaluation sets.⁵ Interestingly, by relying only on the representations from the V subspace (i.e., by completely discarding the knowledge stored in the S and O vectors), we can already obtain reasonable correlation scores. This indicates that the verb vectors indeed store selectional preference information as designed, i.e., the information is successfully encoded into the verb vectors themselves.
Thematic-Fit Evaluation. Correlation scores on the two thematic-fit evaluation datasets are summarised in Table 6. We also report results for representative baseline models: 1) a TypeDM-based model (Baroni and Lenci, 2010), further improved by Greenberg et al. (2015a,b) (G15), and 2) the current state-of-the-art tensor-based neural model of Tilk et al. (2016) (TK16). We find that vectors taken from the model trained in the joint 3-group SVO setup perform on a par with state-of-the-art models also in the 2-group evaluation on the SV and VO subsets. Vectors trained explicitly in the 2-group setup, using three times more data, lead to substantial improvements on PADO414. As a general finding, our function-specific approach yields peak performance on both datasets. The results are similar with 25-dimensional SVO vectors.
Our model is also more lightweight than the baselines: we do not require a full (tensor-based) neural model, but simply function-specific word vectors to reason over thematic fit. To further verify the importance of joint multidirectional training, we also compared our function-specific vectors against standard single-space word vectors (Mikolov et al., 2013b). The results indicate the superiority of function-specific spaces: the respective correlation scores on MST1444 and PADO414 are 0.28 and 0.41 (vs. 0.34 and 0.58 with our model). It is interesting to note that we obtain state-of-the-art scores by calculating the cosine similarity of vectors taken from two groups in the joint space. This finding verifies that the model does indeed learn a joint space where co-occurring words from different groups lie close to each other.
Qualitative Analysis. We retrieve nearest neighbours from the function-specific (S, V, O) space, shown in Figure 1. We find that the nearest neighbours indeed reflect the relations required to model the SVO structure. For instance, the closest subjects/agents to the verb eat are cat and dog. The closest objects to need are three plausible nouns: help, support, and assistance. As the model has information about group membership, we can also filter and compare nearest neighbours in single-group subspaces. For example, we find that subjects similar to the subject memory are dream and feeling, and objects similar to beer are ale and pint.
Model Variants. We also conduct an ablation study that compares different model variants. The variants are constructed by varying 1) the training regime: asynchronous (async) vs. synchronous (sync), and 2) the type of parameter sharing: training separate parameters for each sub-network (sep)⁶ or training on shared variables (shared). In the asynchronous setup we update the shared parameters per sub-network, directly based on each sub-network's own loss, instead of relying on the joint synchronous loss as in Section 3. Table 7 shows the results for the model variants, demonstrating that both aspects (i.e., shared parameters and synchronous training) are important for reaching improved overall performance. We reach peak scores on all evaluation sets using the sync+shared variant. We suspect that asynchronous training deteriorates performance because each sub-network overwrites the updates of the other sub-networks, as their training is not tied through a joint loss function. The synchronous training regime, on the other hand, guides the model towards making updates that can benefit all sub-networks.
Conclusion and Future Work
We presented a novel multidirectional neural framework for learning function-specific word representations, which can be easily composed into multi-word representations to reason over event similarity and thematic fit. We induced a joint vector space in which several groups of words (e.g., the S, V, and O words forming SVO structures) are represented while taking into account the mutual associations between the groups. We found that the resulting function-specific vectors yield state-of-the-art results on established benchmarks for the tasks of estimating event similarity and evaluating thematic fit, previously held by task-specific methods.
In future work we will investigate more sophisticated neural (sub-)networks within the proposed framework. We will also apply the idea of function-specific training to other interrelated linguistic phenomena and other languages, probe the usefulness of function-specific vectors in other language tasks, and explore how to integrate the methodology with sequential models. The pre-trained word vectors used in this work are available online at: https://github.com/cambridgeltl/fs-wrep.
Figure 1: Illustration of three neighbourhoods in a function-specific space trained for the SVO structure (subspaces marked (S), (V), (O)).

Figure 2: The directionality of prediction in neural models is important. Representations can be of varying quality depending on whether they are induced at the input or output side of the model. Our multidirectional approach resolves this problem by training on shared representations in all directions.

Table 1: Nearest neighbours in a function-specific space trained for the SVO structure. In the joint SVO space (bottom) we show nearest neighbours for verbs (V) from the two other subspaces (O and S).

Table 2: Training data statistics.

Table 3: Accuracy scores on the pseudo-disambiguation task. † indicates our reimplementation.

Model                   Accuracy
4 Variables
SVO+iO                  0.950
3 Variables: SVO
Van de Cruys (2009)     0.874
Van de Cruys (2014)     0.889
Tilk et al. (2016)      0.937
Ours                    0.943
2 Variables
Rooth et al. (1999)     0.720
Erk et al. (2010)       0.887
Van de Cruys (2014)     0.880
Ours: SV                0.960
Ours: VO                0.972

Table 4: Composition functions used to obtain event vectors from function-specific vector spaces. +: addition, ⊙: element-wise multiplication, ×: dot product, [·, ·]: concatenation.

Table 5: Results on the event similarity task (Spearman's ρ). Best baseline score is underlined, and the best overall result is provided in bold.

Model              Reference               GS199   KS108
Copy Object W2V    Milajevs et al. (2014)  0.46    0.66
Addition KS14      Milajevs et al. (2014)  0.28    0.73
                   Tilk et al. (2016)      0.34    -
                   Weber et al. (2018)     -       0.71
Ours: SVO d100
  Verb only        Ours                    0.34    0.63
  Addition         Ours                    0.27    0.76
  Concat           Ours                    0.26    0.75
  Concat Addition  Ours                    0.32    0.77
  Copy Object      Ours                    0.40    0.52
  Network          Ours                    0.53    -
Table 6: Results on the 2-variable thematic-fit evaluation (Spearman's ρ correlation).

Table 7: Evaluation of different model variants, by training regime and parameter sharing.

                     async            sync
                     sep     shared   sep     shared
3 Variables
KS108 Verb only      0.56    0.48     0.58    0.60
KS108 Addition       0.51    0.66     0.73    0.78
GS199 Verb only      0.24    0.26     0.26    0.34
GS199 Network        0.10    0.40     0.28    0.52
2 Variables
MST1444              0.17    0.10     0.30    0.39
PADO414              0.41    0.21     0.44    0.44
¹ We train on 10K randomly selected German nouns (A₁) and their corresponding noun genders (B₃) from a German-English dictionary obtained from dict.cc, and train a 25-dimensional model for 24 epochs. Points in the figures show 1K words randomly selected from the 10K training vocabulary. The embedding spaces have been mapped to 2D with t-SNE (van der Maaten and Hinton, 2012).

² For instance, the phrases 'people run company' and 'people operate company' have a high similarity score of 6.53, whereas 'river meet sea' and 'river satisfy sea' have been given a low score of 1.84.

³ Using an example from MST1444, the human participants were asked "how common is it for a {snake, monster, baby, cat} to frighten someone/something" (agent role) as opposed to "how common is it for a {snake, monster, baby, cat} to be frightened by someone/something" (patient role).

⁴ We also provide baseline scores taken from prior work, but the reader should be aware that the scores may not be directly comparable due to the dependence of this evaluation on factors such as vocabulary size and the sampling of corrupted examples (Chambers and Jurafsky, 2010).

⁵ Note that the two tasks are inherently different. KS108 requires similarity between plausible triplets. Using the network score directly (which is a scalar, see Table 4) is not suitable for KS108, as all KS108 triplets are plausible and scored highly. This is reflected in the results in Table 5.

⁶ With separate parameters, we merge vectors from "duplicate" vector spaces by non-weighted averaging.
Acknowledgments

This work is supported by the ERC Consolidator Grant LEXICAL: Lexical Acquisition Across Languages (no. 648909) awarded to Anna Korhonen.
References

Andrew Anderson, Douwe Kiela, Stephen Clark, and Massimo Poesio. 2017. Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns. Transactions of the ACL, 5:17-30.

Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.

Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673-721.

Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proceedings of EMNLP, pages 59-68.

Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the ACL, 5:135-146.

José Camacho-Collados, Luis Espinosa Anke, and Steven Schockaert. 2019. Relational word embeddings. In Proceedings of ACL, pages 3286-3296.

Alfonso Caramazza and Argye E. Hillis. 1991. Lexical organization of nouns and verbs in the brain. Nature, 349(6312):788-790.

Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised learning of narrative schemas and their participants. In Proceedings of ACL, pages 602-610.

Nathanael Chambers and Dan Jurafsky. 2010. Improving the use of pseudo-words for evaluating selectional preferences. In Proceedings of ACL, pages 445-453.

Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP, pages 740-750.

Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis, 36(1-4):345-384.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.

Tim Van de Cruys. 2009. A non-negative tensor factorization model for selectional preference induction. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, pages 83-90.

Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of EMNLP, pages 26-35.

Antonio R. Damasio and Daniel Tranel. 1993. Nouns and verbs are retrieved with differently distributed neural systems. Proceedings of the National Academy of Sciences of the United States of America, 90(11):4957-4960.

R. Rhys Davies, Glenda M. Halliday, John H. Xuereb, Jillian J. Kril, and John R. Hodges. 2009. The neural basis of semantic memory: Evidence from semantic dementia. Neurobiology of Aging, 30(12):2043-2052.

Lilach Edelstein and Roi Reichart. 2016. A factorized model for transitive verbs in compositional distributional semantics. CoRR, abs/1609.07756.

Guy Emerson and Ann A. Copestake. 2016. Functional distributional semantics. In Proceedings of the 1st Workshop on Representation Learning for NLP, pages 40-52.

Katrin Erk, Sebastian Padó, and Ulrike Padó. 2010. A flexible, corpus-driven model of regular and inverse selectional preferences. Computational Linguistics, 36(4):723-763.

Manaal Faruqui. 2016. Diverse Context for Learning Word Representations. Ph.D. thesis, Carnegie Mellon University.

Murray Gell-Mann and Merritt Ruhlen. 2011. The origin and evolution of word order. Proceedings of the National Academy of Sciences, 108(42):17290-17295.

Clayton Greenberg, Vera Demberg, and Asad Sayeed. 2015a. Verb polysemy and frequency effects in thematic fit modeling. In Proceedings of the 6th Workshop on Cognitive Modeling and Computational Linguistics, pages 48-57.

Clayton Greenberg, Asad Sayeed, and Vera Demberg. 2015b. Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering. In Proceedings of NAACL-HLT, pages 21-31.

Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011a. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of EMNLP, pages 1394-1404.

Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011b. Experimenting with transitive verbs in a DisCoCat. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 62-66.

Zellig S. Harris. 1954. Distributional structure. Word, 10(2-3):146-162.

Kazuma Hashimoto and Yoshimasa Tsuruoka. 2016. Adaptive joint learning of compositional and non-compositional phrase embeddings. In Proceedings of ACL, pages 205-215.

Wendy A. de Heer, Alexander G. Huth, Thomas L. Griffiths, Jack L. Gallant, and Frédéric E. Theunissen. 2017. The hierarchical cortical organization of human speech processing. Journal of Neuroscience, 37(27):6539-6557.

Felix Hill, Roi Reichart, and Anna Korhonen. 2015. SimLex-999: Evaluating semantic models with (genuine) similarity estimation. Computational Linguistics, 41(4):665-695.

Alexander G. Huth, Wendy A. de Heer, Thomas L. Griffiths, Frédéric E. Theunissen, and Jack L. Gallant. 2016. Natural speech reveals the semantic maps that tile human cerebral cortex. Nature, 532(7600):453-458.

Alexander G. Huth, Shinji Nishimoto, An T. Vu, and Jack L. Gallant. 2012. A continuous semantic space describes the representation of thousands of object and action categories across the human brain. Neuron, 76(6):1210-1224.

Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2014. A study of entanglement in a categorical framework of natural language. In Proceedings of QPL, pages 249-261.

Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Stephen Pulman. 2012. A unified sentence space for categorical distributional-compositional semantics: Theory and experiments. In Proceedings of COLING, pages 549-558.

Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of ICLR (Conference Track).

Geoffrey Neil Leech. 1992. 100 million words of English: The British National Corpus (BNC).

Omer Levy and Yoav Goldberg. 2014a. Dependency-based word embeddings. In Proceedings of ACL, pages 302-308.

Omer Levy and Yoav Goldberg. 2014b. Neural word embedding as implicit matrix factorization. In Proceedings of NIPS, pages 2177-2185.

Laurens van der Maaten and Geoffrey E. Hinton. 2012. Visualizing non-metric similarities in multiple maps. Machine Learning, 87(1):33-55.

Rosaleen A. McCarthy and E. K. Warrington. 1988. Evidence for modality-specific meaning systems in the brain. Nature, 334(6181):428-430.

Ken McRae, Todd Ferretti, and Liane Amyote. 1997. Thematic roles as verb-specific concepts. Language and Cognitive Processes, 12(2):137-176.

Ken McRae, Michael J. Spivey-Knowlton, and Michael K. Tanenhaus. 1998. Modeling the influence of thematic fit (and other constraints) in on-line sentence comprehension. Journal of Memory and Language, 38(3):283-312.

Oren Melamud, David McClosky, Siddharth Patwardhan, and Mohit Bansal. 2016. The role of context types and dimensionality in learning word embeddings. In Proceedings of NAACL-HLT, pages 1030-1040.

Tomas Mikolov, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of ICLR (Workshop Papers).

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111-3119.

Dmitrijs Milajevs, Dimitri Kartsaklis, Mehrnoosh Sadrzadeh, and Matthew Purver. 2014. Evaluating neural word representations in tensor-based compositional settings. In Proceedings of EMNLP, pages 708-719.

Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL, pages 236-244.

Tom M. Mitchell, Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave, Robert A. Mason, and Marcel Adam Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191-1195.

Ashutosh Modi. 2016. Event embeddings for semantic script modeling. In Proceedings of CoNLL, pages 75-83.

Nikola Mrkšić, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. 2017. Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the ACL, 5:309-324.

Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan T. McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, et al. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of LREC, pages 1659-1666.

Ulrike Padó. 2007. The integration of syntax and semantic plausibility in a wide-coverage model of human sentence processing.

Belen Pascual, Joseph C. Masdeu, Mark Hollenbeck, Nikos Makris, Ricardo Insausti, Song-Lin Ding, and Bradford C. Dickerson. 2015. Large-scale brain networks of the human left temporal pole: A functional connectivity MRI study. Cerebral Cortex, 25(3):680-702.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532-1543.

Karl Pichotta and Raymond J. Mooney. 2016. Learning statistical scripts with LSTM recurrent neural networks. In Proceedings of AAAI, pages 2800-2806.

Marek Rei, Daniela Gerz, and Ivan Vulić. 2018. Scoring lexical entailment with a supervised directional similarity network. In Proceedings of ACL, pages 638-643.

Grace E. Rice, Paul Hoffman, and Matthew A. Lambon Ralph. 2015. Graded specialization within and between the anterior temporal lobes. Annals of the New York Academy of Sciences, 1359(1):84-97.

M. Jane Riddoch, Glyn W. Humphreys, Max Coltheart, and Elaine Funnell. 1988. Semantic systems or system? Neuropsychological evidence re-examined. Cognitive Neuropsychology, 5(1):3-25.

Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proceedings of ACL, pages 104-111.

Sebastian Ruder. 2017. An overview of multi-task learning in deep neural networks. CoRR, abs/1706.05098.

Asad Sayeed, Clayton Greenberg, and Vera Demberg. 2016. Thematic fit evaluation: An aspect of selectional preferences. In Proceedings of the 1st Workshop on Evaluating Vector Space Representations for NLP, pages 99-105.

Hinrich Schütze. 1993. Word space. In Proceedings of NIPS, pages 895-902.

Roy Schwartz, Roi Reichart, and Ari Rappoport. 2015. Symmetric pattern based word embeddings for improved word similarity prediction. In Proceedings of CoNLL, pages 258-267.

Ottokar Tilk, Vera Demberg, Asad Sayeed, Dietrich Klakow, and Stefan Thater. 2016. Event participant modelling with neural networks. In Proceedings of EMNLP, pages 171-182.

Ivan Vulić and Nikola Mrkšić. 2018. Specialising word vectors for lexical entailment. In Proceedings of NAACL-HLT.

Elizabeth K. Warrington. 1975. The selective impairment of semantic memory. Quarterly Journal of Experimental Psychology, 27(4):635-657.

Elizabeth K. Warrington and Rosaleen A. McCarthy. 1987. Categories of knowledge. Brain, 110(5):1273-1296.

Noah Weber, Niranjan Balasubramanian, and Nathanael Chambers. 2018. Event representations with tensor-based compositions. In Proceedings of AAAI, pages 4946-4953.
| [
"https://github.com/cambridgeltl/fs-wrep."
] |
[
"Similarity of Semantic Relations",
"Similarity of Semantic Relations"
] | [
"Peter D Turney \nNational Research Council\nCanada\n"
] | [
"National Research Council\nCanada"
] | [] | There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM. Computational Linguistics Volume 1, Number 1 similarity.Cognitive scientists distinguish words that are semantically associated (bee-honey) from words that are semantically similar (deer-pony), although they recognize that some words are both associated and similar (doctor-nurse) (Chiarello et al., 1990). Both of these are types of attributional similarity, since they are based on correspondence between attributes (e.g., bees and honey are both found in hives; deer and ponies are both mammals).Budanitsky and Hirst(2001)describe semantic relatedness as follows:Recent research on the topic in computational linguistics has emphasized the perspective of semantic relatedness of two lexemes in a lexical resource, or its inverse, semantic distance. It's important to note that semantic relatedness is a more general concept than similarity; similar entities are usually assumed to be related by virtue of their likeness (bank-trust company), but dissimilar entities may also be semantically related by lexical relationships such as meronymy (car-wheel) and antonymy (hot-cold), or just by any kind of functional relationship or frequent association (pencil-paper, penguin-Antarctica). | 10.1162/coli.2006.32.3.379 | [
"https://arxiv.org/pdf/cs/0608100v1.pdf"
] | 2,468,783 | cs/0608100 | 97725a361c173eba897fb969c24fb18233bf9ac3 |
Similarity of Semantic Relations
25 Aug 2006
Peter D Turney
National Research Council
Canada
There are at least two kinds of similarity. Relational similarity is correspondence between relations, in contrast with attributional similarity, which is correspondence between attributes. When two words have a high degree of attributional similarity, we call them synonyms. When two pairs of words have a high degree of relational similarity, we say that their relations are analogous. For example, the word pair mason:stone is analogous to the pair carpenter:wood. This paper introduces Latent Relational Analysis (LRA), a method for measuring relational similarity. LRA has potential applications in many areas, including information extraction, word sense disambiguation, and information retrieval. Recently the Vector Space Model (VSM) of information retrieval has been adapted to measuring relational similarity, achieving a score of 47% on a collection of 374 college-level multiple-choice word analogy questions. In the VSM approach, the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus. LRA extends the VSM approach in three ways: (1) the patterns are derived automatically from the corpus, (2) the Singular Value Decomposition (SVD) is used to smooth the frequency data, and (3) automatically generated synonyms are used to explore variations of the word pairs. LRA achieves 56% on the 374 analogy questions, statistically equivalent to the average human score of 57%. On the related problem of classifying semantic relations, LRA achieves similar gains over the VSM.
Introduction
There are at least two kinds of similarity. Attributional similarity is correspondence between attributes and relational similarity is correspondence between relations (Medin, Goldstone, and Gentner, 1990). When two words have a high degree of attributional similarity, we call them synonyms. When two word pairs have a high degree of relational similarity, we say they are analogous.
Verbal analogies are often written in the form A:B::C:D, meaning A is to B as C is to D; for example, traffic:street::water:riverbed. Traffic flows over a street; water flows over a riverbed. A street carries traffic; a riverbed carries water. There is a high degree of relational similarity between the word pair traffic:street and the word pair water:riverbed. In fact, this analogy is the basis of several mathematical theories of traffic flow (Daganzo, 1994).
In Section 2, we look more closely at the connections between attributional and relational similarity. In analogies such as mason:stone::carpenter:wood, it seems that relational similarity can be reduced to attributional similarity, since mason and carpenter are attributionally similar, as are stone and wood. In general, this reduction fails. Consider the analogy traffic:street::water:riverbed. Traffic and water are not attributionally similar. Street and riverbed are only moderately attributionally similar.
Many algorithms have been proposed for measuring the attributional similarity between two words (Lesk, 1969; Resnik, 1995; Landauer and Dumais, 1997; Jiang and Conrath, 1997; Lin, 1998b; Turney, 2001; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003). Measures of attributional similarity have been studied extensively, due to their applications in problems such as recognizing synonyms (Landauer and Dumais, 1997), information retrieval (Deerwester et al., 1990), determining semantic orientation (Turney, 2002), grading student essays (Rehder et al., 1998), measuring textual cohesion (Morris and Hirst, 1991), and word sense disambiguation (Lesk, 1986).
On the other hand, since measures of relational similarity are not as well developed as measures of attributional similarity, the potential applications of relational similarity are not as well known. Many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity. We discuss related problems in natural language processing, information retrieval, and information extraction in more detail in Section 3.

This paper builds on the Vector Space Model (VSM) of information retrieval. Given a query, a search engine produces a ranked list of documents. The documents are ranked in order of decreasing attributional similarity between the query and each document. Almost all modern search engines measure attributional similarity using the VSM (Baeza-Yates and Ribeiro-Neto, 1999). Turney and Littman (2005) adapted the VSM approach to measuring relational similarity. They used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words. Section 4 presents the VSM approach to measuring similarity.
In Section 5, we present an algorithm for measuring relational similarity, which we call Latent Relational Analysis (LRA). The algorithm learns from a large corpus of unlabeled, unstructured text, without supervision. LRA extends the VSM approach of Turney and Littman (2005) in three ways: (1) the connecting patterns are derived automatically from the corpus, instead of using a fixed set of patterns; (2) Singular Value Decomposition (SVD) is used to smooth the frequency data; and (3) given a word pair such as traffic:street, LRA considers transformations of the word pair, generated by replacing one of the words with synonyms, such as traffic:road or traffic:highway.
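Step (2) can be illustrated with a small numpy sketch: a pair-by-pattern frequency matrix is factored with SVD and reconstructed at reduced rank, which smooths the sparse pattern counts. The matrix contents and the rank are placeholders, not values from the paper.

```python
# Illustration of SVD smoothing of a (word pairs x patterns) frequency matrix:
# keep only the top-k singular values and reconstruct a low-rank approximation.
import numpy as np

X = np.random.default_rng(0).poisson(1.0, size=(500, 4000)).astype(float)  # fake counts
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 300                                            # illustrative number of factors kept
X_smoothed = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k smoothed matrix
```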
Section 6 presents our experimental evaluation of LRA with a collection of 374 multiple-choice word analogy questions from the SAT college entrance exam.¹ An example of a typical SAT question appears in Table 1. In the educational testing literature, the first pair (mason:stone) is called the stem of the analogy. The correct choice is called the solution and the incorrect choices are distractors. We evaluate LRA by testing its ability to select the solution and avoid the distractors. The average performance of college-bound senior high school students on verbal SAT questions corresponds to an accuracy of about 57%. LRA achieves an accuracy of about 56%. On these same questions, the VSM attained 47%.
One application for relational similarity is classifying semantic relations in noun-modifier pairs (Turney and Littman, 2005). In Section 7, we evaluate the performance of LRA with a set of 600 noun-modifier pairs from Nastase and Szpakowicz (2003). The problem is to classify a noun-modifier pair, such as "laser printer", according to the semantic relation between the head noun (printer) and the modifier (laser). The 600 pairs have been manually labeled with 30 classes of semantic relations, which belong to 5 general groups of relations. For example, "laser printer" is classified as instrument; the printer uses the laser as an instrument for printing.

We approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem. The 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbour in the training set. LRA is used to measure distance (i.e., similarity, nearness). LRA achieves an accuracy of 39.8% on the 30-class problem and 58.0% on the 5-class problem. On the same 600 noun-modifier pairs, the VSM had accuracies of 27.8% (30-class) and 45.7% (5-class) (Turney and Littman, 2005).
We discuss the experimental results, limitations of LRA, and future work in Section 8 and we conclude in Section 9.
Attributional and Relational Similarity
In this section, we explore connections between attributional and relational similarity.
Types of Similarity
Medin, Goldstone, and Gentner (1990) distinguish attributes and relations as follows:
Attributes are predicates taking one argument (e.g., X is red, X is large), whereas relations are predicates taking two or more arguments (e.g., X collides with Y , X is larger than Y ). Attributes are used to state properties of objects; relations express relations between objects or propositions. Gentner (1983) notes that what counts as an attribute or a relation can depend on the context. For example, large can be viewed as an attribute of X, LARGE(X ), or a relation between X and some standard Y , LARGER THAN(X , Y ).
The amount of attributional similarity between two words, A and B, depends on the degree of correspondence between the properties of A and B. A measure of attributional similarity is a function that maps two words, A and B, to a real number, $\mathrm{sim}_a(A, B) \in \Re$. The more correspondence there is between the properties of A and B, the greater their attributional similarity. For example, dog and wolf have a relatively high degree of attributional similarity.

The amount of relational similarity between two pairs of words, A:B and C:D, depends on the degree of correspondence between the relations between A and B and the relations between C and D. A measure of relational similarity is a function that maps two pairs, A:B and C:D, to a real number, $\mathrm{sim}_r(A{:}B, C{:}D) \in \Re$. The more correspondence there is between the relations of A:B and C:D, the greater their relational similarity. For example, dog:bark and cat:meow have a relatively high degree of relational similarity.
Semantic relatedness is the same as attributional similarity (e.g., hot and cold are both kinds of temperature, pencil and paper are both used for writing). Here we prefer to use the term attributional similarity, because it emphasizes the contrast with relational similarity. The term semantic relatedness may lead to confusion when the term relational similarity is also under discussion.
Resnik (1995) describes semantic similarity as follows:
Semantic similarity represents a special case of semantic relatedness: for example, cars and gasoline would seem to be more closely related than, say, cars and bicycles, but the latter pair are certainly more similar. Rada et al. (1989) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic (IS-A) links, to the exclusion of other link types; that view will also be taken here, although admittedly it excludes some potentially useful information.
Thus semantic similarity is a specific type of attributional similarity. The term semantic similarity is misleading, because it refers to a type of attributional similarity, yet relational similarity is not any less semantic than attributional similarity. To avoid confusion, we will use the terms attributional similarity and relational similarity, following Medin, Goldstone, and Gentner (1990). Instead of semantic similarity (Resnik, 1995) or semantically similar (Chiarello et al., 1990), we prefer the term taxonomical similarity, which we take to be a specific type of attributional similarity. We interpret synonymy as a high degree of attributional similarity. Analogy is a high degree of relational similarity.
Measuring Attributional Similarity
Algorithms for measuring attributional similarity can be lexicon-based (Lesk, 1986; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003), corpus-based (Lesk, 1969; Landauer and Dumais, 1997; Lin, 1998a; Turney, 2001), or a hybrid of the two (Resnik, 1995; Jiang and Conrath, 1997; Turney et al., 2003). Intuitively, we might expect that lexicon-based algorithms would be better at capturing synonymy than corpus-based algorithms, since lexicons, such as WordNet, explicitly provide synonymy information that is only implicit in a corpus. However, experiments do not support this intuition.
Several algorithms have been evaluated using 80 multiple-choice synonym questions from the Test of English as a Foreign Language (TOEFL); an example appears in Table 2. Table 3 shows the best performance on the TOEFL questions for each type of attributional similarity algorithm. The results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy.
Using Attributional Similarity to Solve Analogies
We may distinguish near analogies (mason:stone::carpenter:wood) from far analogies (traffic:street::water:riverbed) (Gentner, 1983; Medin, Goldstone, and Gentner, 1990). In an analogy A:B::C:D, where there is a high degree of relational similarity between A:B and C:D, if there is also a high degree of attributional similarity between A and C, and between B and D, then A:B::C:D is a near analogy; otherwise, it is a far analogy. It seems possible that SAT analogy questions might consist largely of near analogies, in which case they can be solved using attributional similarity measures. We could score each candidate analogy by the average of the attributional similarity, $\mathrm{sim}_a$, between A and C and between B and D:

\[
\mathrm{score}(A{:}B{::}C{:}D) = \frac{1}{2}\left(\mathrm{sim}_a(A, C) + \mathrm{sim}_a(B, D)\right) \tag{1}
\]
This kind of approach was used in two of the thirteen modules in Turney et al. (2003) (see Section 3.1).
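To make this scoring strategy concrete, the following minimal Python sketch implements equation (1); the toy similarity table is invented purely for illustration, standing in for any of the attributional similarity measures discussed in Section 2.2.

```python
# Minimal sketch of equation (1): score a candidate analogy A:B::C:D by
# averaging the attributional similarities sim_a(A, C) and sim_a(B, D).
# The similarity values below are invented for illustration; in practice
# sim_a would be one of the lexicon-based or corpus-based measures cited
# in the text.

TOY_SIM = {
    ("mason", "carpenter"): 0.9,
    ("stone", "wood"): 0.8,
    ("traffic", "water"): 0.1,
    ("street", "riverbed"): 0.3,
}

def sim_a(x, y):
    """Toy attributional similarity: symmetric lookup with a default of 0."""
    return TOY_SIM.get((x, y), TOY_SIM.get((y, x), 0.0))

def score(a, b, c, d):
    """Equation (1): average attributional similarity of A,C and B,D."""
    return 0.5 * (sim_a(a, c) + sim_a(b, d))

print(score("mason", "stone", "carpenter", "wood"))     # near analogy: 0.85
print(score("traffic", "street", "water", "riverbed"))  # far analogy: 0.2
```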
To evaluate this approach, we applied several measures of attributional similarity to our collection of 374 SAT questions. The performance of the algorithms was measured by precision, recall, and F, defined as follows:

\[
\text{precision} = \frac{\text{number of correct guesses}}{\text{total number of guesses made}} \tag{2}
\]

\[
\text{recall} = \frac{\text{number of correct guesses}}{\text{maximum possible number of correct guesses}} \tag{3}
\]

\[
F = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}} \tag{4}
\]
Note that recall is the same as percent correct (for multiple-choice questions, with only zero or one guesses allowed per question, but not in general). Table 4 shows the experimental results for our set of 374 analogy questions. For example, using the algorithm of Hirst and St-Onge (1998), 120 questions were answered correctly, 224 incorrectly, and 30 questions were skipped. When the algorithm assigned the same similarity to all of the choices for a given question, that question was skipped. The precision was 120/(120 + 224) and the recall was 120/(120 + 224 + 30).
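As a sketch, the arithmetic for the Hirst and St-Onge (1998) row just described can be reproduced in a few lines of Python:

```python
# Precision, recall, and F (equations 2-4) for a multiple-choice setting
# where at most one guess is allowed per question. Counts are taken from
# the Hirst and St-Onge (1998) example described in the text.

def prf(correct, incorrect, skipped):
    guesses = correct + incorrect
    total = correct + incorrect + skipped
    precision = correct / guesses
    recall = correct / total  # same as percent correct in this setting
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

p, r, f = prf(correct=120, incorrect=224, skipped=30)
print(f"precision={p:.3f} recall={r:.3f} F={f:.3f}")
# precision=0.349 recall=0.321 F=0.334
```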
The first five algorithms in Table 4 are implemented in Pedersen's WordNet-Similarity package. 2 The sixth algorithm (Turney, 2001) used the Waterloo MultiText System, as described in Terra and Clarke (2003).
The difference between the lowest performance (Jiang and Conrath, 1997) and random guessing is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, the difference between the highest performance (Turney, 2001) and the VSM approach (Turney and Littman, 2005) is also statistically significant with 95% confidence. We conclude that there are enough near analogies in the 374 SAT questions for attributional similarity to perform better than random guessing, but not enough near analogies for attributional similarity to perform as well as relational similarity.
Related Work
This section is a brief survey of the many problems that involve semantic relations and could potentially make use of an algorithm for measuring relational similarity.
Recognizing Word Analogies
The problem of recognizing word analogies is, given a stem word pair and a finite list of choice word pairs, select the choice that is most analogous to the stem. This problem was first attempted by a system called Argus (Reitman, 1965), using a small hand-built semantic network. Argus could only solve the limited set of analogy questions that its programmer had anticipated. Argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity.

Turney et al. (2003) combined 13 independent modules to answer SAT questions. The final output of the system was based on a weighted combination of the outputs of each individual module. The best of the 13 modules was the VSM, which is described in detail in Turney and Littman (2005). The VSM was evaluated on a set of 374 SAT questions, achieving a score of 47%.
In contrast with the corpus-based approach of Turney and Littman (2005), Veale (2004) applied a lexicon-based approach to the same 374 SAT questions, attaining a score of 43%. Veale evaluated the quality of a candidate analogy A:B::C:D by looking for paths in WordNet, joining A to B and C to D. The quality measure was based on the similarity between the A:B paths and the C:D paths.

Turney (2005) introduced Latent Relational Analysis (LRA), an enhanced version of the VSM approach, which reached 56% on the 374 SAT questions. Here we go beyond Turney (2005) by describing LRA in more detail, performing more extensive experiments, and analyzing the algorithm and related work in more depth.

Structure Mapping Theory

French (2002) cites Structure Mapping Theory (SMT) (Gentner, 1983) and its implementation in the Structure Mapping Engine (SME) (Falkenhainer, Forbus, and Gentner, 1989) as the most influential work on the modeling of analogy-making. The goal of computational modeling of analogy-making is to understand how people form complex, structured analogies. SME takes representations of a source domain and a target domain, and produces an analogical mapping between the source and target. The domains are given structured propositional representations, using predicate logic. These descriptions include attributes, relations, and higher-order relations (expressing relations between relations). The analogical mapping connects source domain relations to target domain relations.
For example, there is an analogy between the solar system and Rutherford's model of the atom (Falkenhainer, Forbus, and Gentner, 1989). The solar system is the source domain and Rutherford's model of the atom is the target domain. The basic objects in the source model are the planets and the sun. The basic objects in the target model are the electrons and the nucleus. The planets and the sun have various attributes, such as mass(sun) and mass(planet), and various relations, such as revolve(planet, sun) and attracts(sun, planet). Likewise, the nucleus and the electrons have attributes, such as charge(electron) and charge(nucleus), and relations, such as revolve(electron, nucleus) and attracts(nucleus, electron). SME maps revolve(planet, sun) to revolve(electron, nucleus) and attracts(sun, planet) to attracts(nucleus, electron).
Each individual connection (e.g., from revolve(planet, sun) to revolve(electron, nucleus)) in an analogical mapping implies that the connected relations are similar; thus, SMT requires a measure of relational similarity, in order to form maps. Early versions of SME only mapped identical relations, but later versions of SME allowed similar, non-identical relations to match (Falkenhainer, 1990). However, the focus of research in analogy-making has been on the mapping process as a whole, rather than measuring the similarity between any two particular relations, hence the similarity measures used in SME at the level of individual connections are somewhat rudimentary.
We believe that a more sophisticated measure of relational similarity, such as LRA, may enhance the performance of SME. Likewise, the focus of our work here is on the similarity between particular relations, and we ignore systematic mapping between sets of relations, so LRA may also be enhanced by integration with SME.
Metaphor
Metaphorical language is very common in our daily life; so common that we are usually unaware of it (Lakoff and Johnson, 1980). It has been argued that novel metaphors are understood using analogy, whereas conventional metaphors are simply recalled from memory. A conventional metaphor is a metaphor that has become entrenched in our language (Lakoff and Johnson, 1980). Dolan (1995) describes an algorithm that can recognize conventional metaphors, but is not suited to novel metaphors. This suggests that it may be fruitful to combine Dolan's (1995) algorithm for handling conventional metaphorical language with LRA and SME for handling novel metaphors.

Lakoff and Johnson (1980) give many examples of sentences in support of their claim that metaphorical language is ubiquitous. The metaphors in their sample sentences can be expressed using SAT-style verbal analogies of the form A:B::C:D. The first column in Table 5 is a list of sentences from Lakoff and Johnson (1980) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy.
Classifying Semantic Relations
The task of classifying semantic relations is to identify the relation between a pair of words. Often the pairs are restricted to noun-modifier pairs, but there are many interesting relations, such as antonymy, that do not occur in noun-modifier pairs. However, noun-modifier pairs are interesting due to their high frequency in English. For instance, WordNet 2.0 contains more than 26,000 noun-modifier pairs, although many common noun-modifiers are not in WordNet, especially technical terms.

Rosario and Hearst (2001) and Rosario, Hearst, and Fillmore (2002) classify noun-modifier relations in the medical domain, using MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) as lexical resources for representing each noun-modifier pair with a feature vector. They trained a neural network to distinguish 13 classes of semantic relations. Nastase and Szpakowicz (2003) explore a similar approach to classifying general noun-modifier pairs (i.e., not restricted to a particular domain, such as medicine), using WordNet and Roget's Thesaurus as lexical resources. Vanderwende (1994) used hand-built rules, together with a lexical knowledge base, to classify noun-modifier pairs.
None of these approaches explicitly involved measuring relational similarity, but any classification of semantic relations necessarily employs some implicit notion of relational similarity, since members of the same class must be relationally similar to some extent. Barker and Szpakowicz (1998) tried a corpus-based approach that explicitly used a measure of relational similarity, but their measure was based on literal matching, which limited its ability to generalize. Moldovan et al. (2004) also used a measure of relational similarity, based on mapping each noun and modifier into semantic classes in WordNet. The noun-modifier pairs were taken from a corpus and the surrounding context in the corpus was used in a word sense disambiguation algorithm, to improve the mapping of the noun and modifier into WordNet. Turney and Littman (2005) used the VSM (as a component in a single nearest neighbour learning algorithm) to measure relational similarity. We take the same approach here, substituting LRA for the VSM, in Section 7.
Lauer (1995) used a corpus-based approach (using the British National Corpus, BNC) to paraphrase noun-modifier pairs, by inserting the prepositions of, for, in, at, on, from, with, and about. For example, reptile haven was paraphrased as haven for reptiles. Lapata and Keller (2004) achieved improved results on this task, by using the database of AltaVista's search engine as a corpus.
Word Sense Disambiguation
We believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text. If we can identify the semantic relations between the given word and its context, then we can disambiguate the given word. Yarowsky's (1993) observation that collocations are almost always monosemous is evidence for this view. Federici, Montemagni, and Pirrelli (1997) present an analogy-based approach to word sense disambiguation.
For example, consider the word plant. Out of context, plant could refer to an industrial plant or a living organism. Suppose plant appears in some text near food. A typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism (Lesk, 1986;Banerjee and Pedersen, 2003). In this case, the decision may not be clear, since industrial plants often produce food and living organisms often serve as food. It would be very helpful to know the relation between food and plant in this example. In the phrase "food for the plant", the relation between food and plant strongly suggests that the plant is a living organism, since industrial plants do not need food. In the text "food at the plant", the relation strongly suggests that the plant is an industrial plant, since living organisms are not usually considered as locations. Thus an algorithm for classifying semantic relations (as in Section 7) should be helpful for word sense disambiguation.
Information Extraction
The problem of relation extraction is, given an input document and a specific relation R, extract all pairs of entities (if any) that have the relation R in the document. The problem was introduced as part of the Message Understanding Conferences (MUC) in 1998. Zelenko, Aone, and Richardella (2003) present a kernel method for extracting the relations person-affiliation and organization-location. For example, in the sentence "John Smith is the chief scientist of the Hardcom Corporation," there is a person-affiliation relation between "John Smith" and "Hardcom Corporation" (Zelenko, Aone, and Richardella, 2003). This is similar to the problem of classifying semantic relations (Section 3.4), except that information extraction focuses on the relation between a specific pair of entities in a specific document, rather than a general pair of words in general text. Therefore an algorithm for classifying semantic relations should be useful for information extraction.
In the VSM approach to classifying semantic relations (Turney and Littman, 2005), we would have a training set of labeled examples of the relation person-affiliation, for instance. Each example would be represented by a vector of pattern frequencies. Given a specific document discussing "John Smith" and "Hardcom Corporation", we could construct a vector representing the relation between these two entities, and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors. It would seem that there is a problem here, because the training vectors would be relatively dense, since they would presumably be derived from a large corpus, but the new unlabeled vector for "John Smith" and "Hardcom Corporation" would be very sparse, since these entities might be mentioned only once in the given document. However, this is not a new problem for the Vector Space Model; it is the standard situation when the VSM is used for information retrieval. A query to a search engine is represented by a very sparse vector whereas a document is represented by a relatively dense vector. There are well-known techniques in information retrieval for coping with this disparity, such as weighting schemes for query vectors that are different from the weighting schemes for document vectors (Salton and Buckley, 1988).
Question Answering
In their paper on classifying semantic relations, Moldovan et al. (2004) suggest that an important application of their work is Question Answering. As defined in the Text REtrieval Conference (TREC) Question Answering (QA) track, the task is to answer simple questions, such as "Where have nuclear incidents occurred?", by retrieving a relevant document from a large corpus and then extracting a short string from the document, such as "The Three Mile Island nuclear incident caused a DOE policy crisis." Moldovan et al. (2004) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text. They argue that the desired semantic relation can easily be inferred from the surface form of the question. A question of the form "Where ...?" is likely to be seeking entities with a location relation and a question of the form "What did ... make?" is likely to be looking for entities with a product relation. In Section 7, we show how LRA can recognize relations such as location and product (see Table 19).

Automatic Thesaurus Generation

Hearst (1992) presents an algorithm for learning hyponym (type of) relations from a corpus and Berland and Charniak (1999) describe how to learn meronym (part of) relations from a corpus. These algorithms could be used to automatically generate a thesaurus or dictionary, but we would like to handle more relations than hyponymy and meronymy. WordNet distinguishes more than a dozen semantic relations between words (Fellbaum, 1998) and Nastase and Szpakowicz (2003) list 30 semantic relations for noun-modifier pairs. Hearst (1992) and Berland and Charniak (1999) use manually generated rules to mine text for semantic relations. Turney and Littman (2005) also use a manually generated set of 64 patterns.
LRA does not use a predefined set of patterns; it learns patterns from a large corpus. Instead of manually generating new rules or patterns for each new semantic relation, it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations. A nearest neighbour algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations, given the appropriate labeled training data.
Girju, Badulescu, and Moldovan (2003) present an algorithm for learning meronym relations from a corpus. Like Hearst (1992) and Berland and Charniak (1999), they use manually generated rules to mine text for their desired relation. However, they supplement their manual rules with automatically learned constraints, to increase the precision of the rules.

Information Retrieval

Veale (2003) has developed an algorithm for recognizing certain types of word analogies, based on information in WordNet. He proposes to use the algorithm for analogical information retrieval. For example, the query "Muslim church" should return "mosque" and the query "Hindu bible" should return "the Vedas". The algorithm was designed with a focus on analogies of the form adjective:noun::adjective:noun, such as Christian:church::Muslim:mosque.
A measure of relational similarity is applicable to this task. Given a pair of words, A and B, the task is to return another pair of words, X and Y , such that there is high relational similarity between the pair A:X and the pair Y :B. For example, given A = "Muslim" and B = "church", return X = "mosque" and Y = "Christian". (The pair Muslim:mosque has a high relational similarity to the pair Christian:church.)
Marx et al. (2002) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora. Each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus. For example, one experiment used a corpus of Buddhist documents and a corpus of Christian documents. A cluster of words such as {Hindu, Mahayana, Zen, ...} from the Buddhist corpus was coupled with a cluster of words such as {Catholic, Protestant, ...} from the Christian corpus. Thus the algorithm appears to have discovered an analogical mapping between Buddhist schools and traditions and Christian schools and traditions. This is interesting work, but it is not directly applicable to SAT analogies, because it discovers analogies between clusters of words, rather than individual words.
Identifying Semantic Roles
A semantic frame for an event such as judgement contains semantic roles such as judge, evaluee, and reason, whereas an event such as statement contains roles such as speaker, addressee, and message (Gildea and Jurafsky, 2002). The task of identifying semantic roles is to label the parts of a sentence according to their semantic roles. We believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations; thus a measure of relational similarity should help us to identify semantic roles. Moldovan et al. (2004) argue that semantic roles are merely a special case of semantic relations (Section 3.4), since semantic roles always involve verbs or predicates, but semantic relations can involve words of any part of speech.
The Vector Space Model
This section examines past work on measuring attributional and relational similarity using the Vector Space Model (VSM).
Measuring Attributional Similarity with the Vector Space Model
The VSM was first developed for information retrieval (Salton and McGill, 1983;Salton and Buckley, 1988;Salton, 1989) and it is at the core of most modern search engines (Baeza-Yates and Ribeiro-Neto, 1999).
In the VSM approach to information retrieval, queries and documents are represented by vectors. Elements in these vectors are based on the frequencies of words in the corresponding queries and documents. The frequencies are usually transformed by various formulas and weights, tailored to improve the effectiveness of the search engine (Salton, 1989). The attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors. For a given query, the search engine sorts the matching documents in order of decreasing cosine.
The VSM approach has also been used to measure the attributional similarity of words (Lesk, 1969;Ruge, 1992;Pantel and Lin, 2002). Pantel and Lin (2002) clustered words according to their attributional similarity, as measured by a VSM. Their algorithm is able to discover the different senses of polysemous words, using unsupervised learning.
Latent Semantic Analysis enhances the VSM approach to information retrieval by using the Singular Value Decomposition (SVD) to smooth the vectors, which helps to handle noise and sparseness in the data (Deerwester et al., 1990;Dumais, 1993; Landauer and Dumais, 1997). SVD improves both document-query attributional similarity measures (Deerwester et al., 1990;Dumais, 1993) and word-word attributional similarity measures (Landauer and Dumais, 1997). LRA also uses SVD to smooth vectors, as we discuss in Section 5.
Measuring Relational Similarity with the Vector Space Model
Let $R_1$ be the semantic relation (or set of relations) between a pair of words, A and B, and let $R_2$ be the semantic relation (or set of relations) between another pair, C and D. We wish to measure the relational similarity between $R_1$ and $R_2$. The relations $R_1$ and $R_2$ are not given to us; our task is to infer these hidden (latent) relations and then compare them.

In the VSM approach to relational similarity (Turney and Littman, 2005), we create vectors, $\mathbf{r}_1$ and $\mathbf{r}_2$, that represent features of $R_1$ and $R_2$, and then measure the similarity of $R_1$ and $R_2$ by the cosine of the angle $\theta$ between $\mathbf{r}_1$ and $\mathbf{r}_2$:

\[
\mathbf{r}_1 = \langle r_{1,1}, \ldots, r_{1,n} \rangle \tag{5}
\]

\[
\mathbf{r}_2 = \langle r_{2,1}, \ldots, r_{2,n} \rangle \tag{6}
\]

\[
\cos(\theta) = \frac{\sum_{i=1}^{n} r_{1,i} \cdot r_{2,i}}{\sqrt{\sum_{i=1}^{n} (r_{1,i})^2} \cdot \sqrt{\sum_{i=1}^{n} (r_{2,i})^2}} = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{\sqrt{\mathbf{r}_1 \cdot \mathbf{r}_1} \cdot \sqrt{\mathbf{r}_2 \cdot \mathbf{r}_2}} = \frac{\mathbf{r}_1 \cdot \mathbf{r}_2}{\|\mathbf{r}_1\| \, \|\mathbf{r}_2\|} \tag{7}
\]
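Equation (7) translates directly into code. The following Python sketch computes the cosine between two toy vectors; a real implementation would operate on the 128-element pattern-frequency vectors described below.

```python
import math

def cosine(r1, r2):
    """Equation (7): cosine of the angle between two feature vectors."""
    dot = sum(a * b for a, b in zip(r1, r2))
    norm1 = math.sqrt(sum(a * a for a in r1))
    norm2 = math.sqrt(sum(b * b for b in r2))
    return dot / (norm1 * norm2)

# Toy vectors: parallel vectors give cosine 1.0, orthogonal vectors 0.0.
print(cosine([1.0, 2.0, 0.0], [2.0, 4.0, 0.0]))  # 1.0
print(cosine([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # 0.0
```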
We create a vector, r, to characterize the relationship between two words, X and Y , by counting the frequencies of various short phrases containing X and Y . Turney and Littman (2005) use a list of 64 joining terms, such as "of", "for", and "to", to form 128 phrases that contain X and Y , such as "X of Y ", "Y of X", "X for Y ", "Y for X", "X to Y ", and "Y to X". These phrases are then used as queries for a search engine and the number of hits (matching documents) is recorded for each query. This process yields a vector of 128 numbers. If the number of hits for a query is x, then the corresponding element in the vector r is log(x + 1). Several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures (Salton and Buckley, 1988;Ruge, 1992;Lin, 1998b).
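A minimal sketch of this vector construction follows. The hit_count function is a hypothetical stand-in for a search-engine query (the actual experiments used AltaVista), and only three of the 64 joining terms are shown.

```python
import math

# A handful of the 64 joining terms of Turney and Littman (2005);
# the full list is not reproduced here.
JOINING_TERMS = ["of", "for", "to"]

def hit_count(phrase):
    """Hypothetical stand-in for a search-engine query returning the
    number of documents matching the quoted phrase."""
    return 0  # placeholder

def relation_vector(x, y):
    """Build log-transformed phrase frequencies for the pair X:Y.
    Each joining term contributes two elements, "X term Y" and "Y term X",
    so 64 terms would yield the 128-element vector described in the text."""
    vec = []
    for term in JOINING_TERMS:
        for phrase in (f"{x} {term} {y}", f"{y} {term} {x}"):
            vec.append(math.log(hit_count(phrase) + 1))
    return vec

print(relation_vector("mason", "stone"))  # six elements for three terms
```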
Turney and Littman (2005) evaluated the VSM approach by its performance on 374 SAT analogy questions, achieving a score of 47%. Since there are five choices for each question, the expected score for random guessing is 20%. To answer a multiple-choice analogy question, vectors are created for the stem pair and each choice pair, and then cosines are calculated for the angles between the stem pair and each choice pair. The best guess is the choice pair with the highest cosine. We use the same set of analogy questions to evaluate LRA in Section 6.
The VSM was also evaluated by its performance as a distance (nearness) measure in a supervised nearest neighbour classifier for noun-modifier semantic relations (Turney and Littman, 2005). The evaluation used 600 hand-labeled noun-modifier pairs from Nastase and Szpakowicz (2003). A testing pair is classified by searching for its single nearest neighbour in the labeled training data. The best guess is the label for the training pair with the highest cosine. LRA is evaluated with the same set of noun-modifier pairs in Section 7.

Turney and Littman (2005) used the AltaVista search engine to obtain the frequency information required to build vectors for the VSM. Thus their corpus was the set of all web pages indexed by AltaVista. At the time, the English subset of this corpus consisted of about 5 × 10^11 words. Around April 2004, AltaVista made substantial changes to their search engine, removing their advanced search operators. Their search engine no longer supports the asterisk operator, which was used by Turney and Littman (2005) for stemming and wild-card searching. AltaVista also changed their policy towards automated searching, which is now forbidden. 3

Turney and Littman (2005) used AltaVista's hit count, which is the number of documents (web pages) matching a given query, but LRA uses the number of passages (strings) matching a query. In our experiments with LRA (Sections 6 and 7), we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003), running on a 16 CPU Beowulf Cluster, with a corpus of about 5 × 10^10 English words. The Waterloo MultiText System (WMTS) is a distributed (multiprocessor) search engine, designed primarily for passage retrieval (although document retrieval is possible, as a special case of passage retrieval). The text and index require approximately one terabyte of disk space. Although AltaVista only gives a rough estimate of the number of matching documents, the Waterloo MultiText System gives exact counts of the number of matching passages.

Turney et al. (2003) combine 13 independent modules to answer SAT questions. The performance of LRA significantly surpasses this combined system, but there is no real contest between these approaches, because we can simply add LRA to the combination, as a fourteenth module. Since the VSM module had the best performance of the thirteen modules (Turney et al., 2003), the following experiments focus on comparing VSM and LRA.
Latent Relational Analysis
LRA takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs. LRA relies on three resources, a search engine with a very large corpus of text, a broad-coverage thesaurus of synonyms, and an efficient implementation of SVD.
We first present a short description of the core algorithm. Later, in the following subsections, we will give a detailed description of the algorithm, as it is applied in the experiments in Sections 6 and 7.
• Given a set of word pairs as input, look in a thesaurus for synonyms for each word in each word pair. For each input pair, make alternate pairs by replacing the original words with their synonyms. The alternate pairs are intended to form near analogies with the corresponding original pairs (see Section 2.3).
• Filter out alternate pairs that do not form near analogies, by dropping alternate pairs that co-occur rarely in the corpus. In the preceding step, if a synonym replaced an ambiguous original word, but the synonym captures the wrong sense of the original word, it is likely that there is no significant relation between the words in the alternate pair, so they will rarely co-occur.
• For each original and alternate pair, search in the corpus for short phrases that begin with one member of the pair and end with the other. These phrases characterize the relation between the words in each pair.
• For each phrase from the previous step, create several patterns, by replacing words in the phrase with wild cards.
• Build a pair-pattern frequency matrix, in which each cell represents the number of times that the corresponding pair (row) appears in the corpus with the corresponding pattern (column). The number will usually be zero, resulting in a sparse matrix.
• Apply the Singular Value Decomposition to the matrix. This reduces noise in the matrix and helps with sparse data.
• Suppose that we wish to calculate the relational similarity between any two of the original pairs. Start by looking for the two row vectors in the pair-pattern frequency matrix that correspond to the two original pairs. Calculate the cosine of the angle between these two row vectors. Then merge the cosine of the two original pairs with the cosines of their corresponding alternate pairs, as follows. If an analogy formed with alternate pairs has a higher cosine than the original pairs, we assume that we have found a better way to express the analogy, but we have not significantly changed its meaning. If the cosine is lower, we assume that we may have changed the meaning, by inappropriately replacing words with synonyms. Filter out inappropriate alternates by dropping all analogies formed of alternates, such that the cosines are less than the cosine for the original pairs. The relational similarity between the two original pairs is then calculated as the average of all of the remaining cosines.
The motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus. The hope is that we can find near analogies for the original pairs, such that the near analogies co-occur more frequently in the corpus. The danger is that the alternates may have different relations from the originals. The filtering steps above aim to reduce this risk.
Input and Output
In our experiments, the input set contains from 600 to 2,244 word pairs. The output similarity measure is based on cosines, so the degree of similarity can range from −1 (dissimilar; θ = 180°) to +1 (similar; θ = 0°). Before applying SVD, the vectors are completely nonnegative, which implies that the cosine can only range from 0 to +1, but SVD introduces negative values, so it is possible for the cosine to be negative, although we have never observed this in our experiments.
Search Engine and Corpus
In the following experiments, we use a local copy of the Waterloo MultiText System (Clarke, Cormack, and Palmer, 1998; Terra and Clarke, 2003). 4 The corpus consists of about 5 × 10^10 English words, gathered by a web crawler, mainly from US academic web sites. The web pages cover a very wide range of topics, styles, genres, quality, and writing skill. The WMTS is well suited to LRA, because the WMTS scales well to large corpora (one terabyte, in our case), it gives exact frequency counts (unlike most web search engines), it is designed for passage retrieval (rather than document retrieval), and it has a powerful query syntax.
Thesaurus
As a source of synonyms, we use Lin's (1998a) automatically generated thesaurus. This thesaurus is available through an online interactive demonstration or it can be downloaded. 5 We used the online demonstration, since the downloadable version seems to contain fewer words. For each word in the input set of word pairs, we automatically query the online demonstration and fetch the resulting list of synonyms. As a courtesy to other users of Lin's online system, we insert a 20 second delay between each query.
Lin's thesaurus was generated by parsing a corpus of about 5 × 10^7 English words, consisting of text from the Wall Street Journal, San Jose Mercury, and AP Newswire (Lin, 1998a). The parser was used to extract pairs of words and their grammatical relations. Words were then clustered into synonym sets, based on the similarity of their grammatical relations. Two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words. Given a word and its part of speech, Lin's thesaurus provides a list of words, sorted in order of decreasing attributional similarity. This sorting is convenient for LRA, since it makes it possible to focus on words with higher attributional similarity and ignore the rest. WordNet, in contrast, given a word and its part of speech, provides a list of words grouped by the possible senses of the given word, with groups sorted by the frequencies of the senses. WordNet's sorting does not directly correspond to sorting by degree of attributional similarity, although various algorithms have been proposed for deriving attributional similarity from WordNet (Resnik, 1995; Jiang and Conrath, 1997; Budanitsky and Hirst, 2001; Banerjee and Pedersen, 2003).
Singular Value Decomposition
We use Rohde's SVDLIBC implementation of the Singular Value Decomposition, which is based on SVDPACKC (Berry, 1992). 6 In LRA, SVD is used to reduce noise and compensate for sparseness.
The Algorithm
We will go through each step of LRA, using an example to illustrate the steps. Assume that the input to LRA is the 374 multiple-choice SAT word analogy questions of Turney and Littman (2005). Since there are six word pairs per question (the stem and five choices), the input consists of 2,244 word pairs. Let's suppose that we wish to calculate the relational similarity between the pair quart:volume and the pair mile:distance, taken from the SAT question in Table 6. The LRA algorithm consists of the following twelve steps:
1. Find alternates: For each word pair A:B in the input set, look in Lin's (1998a) thesaurus for the top num_sim words (in the following experiments, num_sim is 10) that are most similar to A. For each A′ that is similar to A, make a new word pair A′:B. Likewise, look for the top num_sim words that are most similar to B, and for each B′, make a new word pair A:B′. A:B is called the original pair and each A′:B or A:B′ is an alternate pair. The intent is that alternates should have almost the same semantic relations as the original. For each input pair, there will now be 2 × num_sim alternate pairs. When looking for similar words in Lin's (1998a) thesaurus, avoid words that seem unusual (e.g., hyphenated words, words with three characters or less, words with non-alphabetical characters, multi-word phrases, and capitalized words). The first column in Table 7 shows the alternate pairs that are generated for the original pair quart:volume.

Table 6
This SAT question, from Claman (2000), is used to illustrate the steps in the LRA algorithm.

Stem:      quart:volume
Choices:   (a) day:night
           (b) mile:distance
           (c) decade:century
           (d) friction:heat
           (e) part:whole
Solution:  (b) mile:distance
2. Filter alternates: For each original pair A:B, filter the 2 × num_sim alternates as follows. For each alternate pair, send a query to the WMTS, to find the frequency of phrases that begin with one member of the pair and end with the other. The phrases cannot have more than max_phrase words (we use max_phrase = 5). Sort the alternate pairs by the frequency of their phrases. Select the top num_filter most frequent alternates and discard the remainder (we use num_filter = 3, so 17 alternates are dropped). This step tends to eliminate alternates that have no clear semantic relation. The third column in Table 7 shows the frequency with which each pair co-occurs in a window of max_phrase words. The last column in Table 7 shows the pairs that are selected.
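A minimal sketch of this filtering, where the co-occurrence counts are invented for illustration and phrase_frequency is a hypothetical stand-in for the WMTS query:

```python
# Toy co-occurrence frequencies, invented for illustration; in LRA these
# counts come from WMTS queries over phrases of at most max_phrase words.
TOY_FREQ = {
    ("pint", "volume"): 166,
    ("gallon", "volume"): 110,
    ("liter", "volume"): 59,
    ("quart", "bulk"): 0,
}

def phrase_frequency(pair):
    """Hypothetical stand-in for the WMTS phrase-frequency query."""
    return TOY_FREQ.get(pair, 0)

def filter_alternates(alternates, num_filter=3):
    """Step 2 (sketch): keep the num_filter alternates whose members
    co-occur most frequently in short phrases; discard the rest."""
    ranked = sorted(alternates, key=phrase_frequency, reverse=True)
    return ranked[:num_filter]

print(filter_alternates(list(TOY_FREQ)))
# [('pint', 'volume'), ('gallon', 'volume'), ('liter', 'volume')]
```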
3. Find phrases: For each pair (originals and alternates), make a list of phrases in the corpus that contain the pair. Query the WMTS for all phrases that begin with one member of the pair and end with the other (in either order). We ignore suffixes when searching for phrases that match a given pair. The phrases cannot have more than max_phrase words and there must be at least one word between the two members of the word pair. These phrases give us information about the semantic relations between the words in each pair. A phrase with no words between the two members of the word pair would give us very little information about the semantic relations (other than that the words occur together with a certain frequency in a certain order). Table 8 gives some examples of phrases in the corpus that match the pair quart:volume.
4. Find patterns: For each phrase found in the previous step, build patterns from the intervening words. A pattern is constructed by replacing any or all or none of the intervening words with wild cards (one wild card can only replace one word). If a phrase is n words long, there are n − 2 intervening words between the members of the given word pair (e.g., between quart and volume). Thus a phrase with n words generates 2^(n−2) patterns. (We use max_phrase = 5, so a phrase generates at most eight patterns.) For each pattern, count the number of pairs (originals and alternates) with phrases that match the pattern (a wild card must match exactly one word). Keep the top num_patterns most frequent patterns and discard the rest (we use num_patterns = 4,000). Typically there will be millions of patterns, so it is not feasible to keep them all.
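The wildcard expansion in this step can be sketched as follows; for the four-word phrase "quart of spray volume" (one of the phrases in Table 8) it generates the 2^2 = 4 possible patterns.

```python
from itertools import product

def patterns(phrase_words):
    """Step 4 (sketch): generate all 2^(n-2) wildcard patterns for a phrase
    whose first and last words are the members of the word pair. Each
    intervening word is independently kept or replaced by '*'."""
    first, *inner, last = phrase_words
    result = []
    for choice in product([False, True], repeat=len(inner)):
        middle = ["*" if wild else word for word, wild in zip(inner, choice)]
        result.append(" ".join([first] + middle + [last]))
    return result

for p in patterns(["quart", "of", "spray", "volume"]):
    print(p)
# quart of spray volume
# quart of * volume
# quart * spray volume
# quart * * volume
```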
5. Map pairs to rows: In preparation for building the matrix X, create a mapping of word pairs to row numbers. For each pair A:B, create a row for A:B and another row for B:A. This will make the matrix more symmetrical, reflecting our knowledge that the relational similarity between A:B and C:D should be the same as the relational similarity between B:A and D:C. This duplication of rows is examined in Section 6.6.

Table 7
Alternate forms of the original pair quart:volume. The first column shows the original pair and the alternate pairs. The second column shows Lin's similarity score for the alternate word compared to the original word. For example, the similarity between quart and pint is 0.210. The third column shows the frequency of the pair in the WMTS corpus. The fourth column shows the pairs that pass the filtering step (i.e., step 2).

Table 8
Some examples of phrases in the corpus that match the pair quart:volume.

"quarts liquid volume"        "volume being about two quarts"
"volume in quarts"            "quart total volume"
"quarts of volume"            "volume of milk in quarts"
"volume capacity quarts"      "quart of spray volume"
"quarts in volume"            "volume include measures like quart"

Table 9
Frequencies of various patterns for quart:volume. The asterisk "*" represents the wildcard. Suffixes are ignored, so "quart" matches "quarts". For example, "quarts in volume" is one of the four phrases that match "quart P volume" when P is "in".

                         P = "in"   P = "* of"   P = "of *"   P = "* *"
freq("quart P volume")       4          1            5           19
freq("volume P quart")      10          0            2           16
6. Map patterns to columns: Create a mapping of the top num_patterns patterns to column numbers. For each pattern P, create a column for "word_1 P word_2" and another column for "word_2 P word_1". Thus there will be 2 × num_patterns columns in X. This duplication of columns is examined in Section 6.6.
7. Generate a sparse matrix: Generate a matrix X in sparse matrix format, suitable for input to SVDLIBC. The value for the cell in row i and column j is the frequency of the j-th pattern (see step 6) in phrases that contain the i-th word pair (see step 5). Table 9 gives some examples of pattern frequencies for quart:volume.
8. Calculate entropy: Apply log and entropy transformations to the sparse matrix (Landauer and Dumais, 1997). These transformations have been found to be very helpful for information retrieval (Harman, 1986). Let $x_{i,j}$ be the cell in row $i$ and column $j$ of the matrix $X$ from step 7. Let $m$ be the number of rows in $X$ and let $n$ be the number of columns. We wish to weight the cell $x_{i,j}$ by the entropy of the $j$-th column. To calculate the entropy of the column, we need to convert the column into a vector of probabilities. Let $p_{i,j}$ be the probability of $x_{i,j}$, calculated by normalizing the column vector so that the sum of the elements is one, $p_{i,j} = x_{i,j} / \sum_{k=1}^{m} x_{k,j}$. The entropy of the $j$-th column is then $H_j = -\sum_{k=1}^{m} p_{k,j} \log(p_{k,j})$. Entropy is at its maximum when $p_{i,j}$ is a uniform distribution, $p_{i,j} = 1/m$, in which case $H_j = \log(m)$. Entropy is at its minimum when $p_{i,j}$ is 1 for some value of $i$ and 0 for all other values of $i$, in which case $H_j = 0$. We want to give more weight to columns (patterns) with frequencies that vary substantially from one row (word pair) to the next, and less weight to columns that are uniform. Therefore we weight the cell $x_{i,j}$ by $w_j = 1 - H_j / \log(m)$, which varies from 0 when $p_{i,j}$ is uniform to 1 when entropy is minimal. We also apply the log transformation to frequencies, $\log(x_{i,j} + 1)$. (Entropy is calculated with the original frequency values, before the log transformation is applied.) For all $i$ and all $j$, replace the original value $x_{i,j}$ in $X$ by the new value $w_j \log(x_{i,j} + 1)$. This is an instance of the TF-IDF (Term Frequency-Inverse Document Frequency) family of transformations, which is familiar in information retrieval (Salton and Buckley, 1988; Baeza-Yates and Ribeiro-Neto, 1999): $\log(x_{i,j} + 1)$ is the TF term and $w_j$ is the IDF term.

9. Apply SVD: After the log and entropy transformations have been applied to the matrix $X$, run SVDLIBC. SVD decomposes a matrix $X$ into a product of three matrices $U \Sigma V^{T}$, where $U$ and $V$ are in column orthonormal form (i.e., the columns are orthogonal and have unit length: $U^{T}U = V^{T}V = I$) and $\Sigma$ is a diagonal matrix of singular values (hence SVD) (Golub and Van Loan, 1996). If $X$ is of rank $r$, then $\Sigma$ is also of rank $r$. Let $\Sigma_k$, where $k < r$, be the diagonal matrix formed from the top $k$ singular values, and let $U_k$ and $V_k$ be the matrices produced by selecting the corresponding columns from $U$ and $V$. The matrix $U_k \Sigma_k V_k^{T}$ is the matrix of rank $k$ that best approximates the original matrix $X$, in the sense that it minimizes the approximation errors. That is, $\hat{X} = U_k \Sigma_k V_k^{T}$ minimizes $\|\hat{X} - X\|_F$ over all matrices $\hat{X}$ of rank $k$, where $\|\cdot\|_F$ denotes the Frobenius norm (Golub and Van Loan, 1996). We may think of this matrix $U_k \Sigma_k V_k^{T}$ as a "smoothed" or "compressed" version of the original matrix. In the subsequent steps, we will be calculating cosines for row vectors. For this purpose, we can simplify calculations by dropping $V$. The cosine of two vectors is their dot product, after they have been normalized to unit length. The matrix $XX^{T}$ contains the dot products of all of the row vectors. We can find the dot product of the $i$-th and $j$-th row vectors by looking at the cell in row $i$, column $j$ of the matrix $XX^{T}$. Since $V^{T}V = I$, we have $XX^{T} = U \Sigma V^{T} (U \Sigma V^{T})^{T} = U \Sigma V^{T} V \Sigma^{T} U^{T} = U \Sigma (U \Sigma)^{T}$, which means that we can calculate cosines with the smaller matrix $U \Sigma$, instead of using $X = U \Sigma V^{T}$ (Deerwester et al., 1990).
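The log-entropy weighting of step 8 and the SVD smoothing of step 9 (together with the rank-k projection used in step 10, below) can be sketched in NumPy. The toy matrix reuses the two rows of pattern frequencies from Table 9 plus an invented third row, purely so the example runs.

```python
import numpy as np

def log_entropy(X):
    """Step 8 (sketch): replace each cell x_ij by w_j * log(x_ij + 1),
    where w_j = 1 - H_j / log(m) and H_j is the entropy of column j,
    computed on the raw frequencies."""
    m = X.shape[0]
    col_sums = X.sum(axis=0)
    col_sums[col_sums == 0] = 1.0         # guard against empty columns
    P = X / col_sums                       # column-wise probabilities p_ij
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    H = -(P * logP).sum(axis=0)            # column entropies H_j
    w = 1.0 - H / np.log(m)                # column weights w_j
    return w * np.log(X + 1.0)

def project(X, k):
    """Steps 9-10 (sketch): SVD followed by projection onto U_k Sigma_k.
    Cosines between rows of the result equal cosines between rows of the
    rank-k approximation of X, since X_k X_k^T = (U_k S_k)(U_k S_k)^T."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U[:, :k] * s[:k]

# Rows 1-2 are the Table 9 frequencies; row 3 is invented for illustration.
X = np.array([[ 4., 1., 5., 19.],
              [10., 0., 2., 16.],
              [ 0., 3., 0.,  1.]])
Z = project(log_entropy(X), k=2)
print(Z.shape)  # (3, 2)
```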
10. Projection: Calculate $U_k \Sigma_k$ (we use $k = 300$). This matrix has the same number of rows as $X$, but only $k$ columns (instead of 2 × num_patterns columns; in our experiments, that is 300 columns instead of 8,000). We can compare two word pairs by calculating the cosine of the corresponding row vectors in $U_k \Sigma_k$. The row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space. The value $k = 300$ is recommended by Landauer and Dumais (1997) for measuring the attributional similarity between words. We investigate other values in Section 6.4.

11. Evaluate alternates: Let A:B and C:D be any two word pairs in the input set. From step 2, we have (num_filter + 1) versions of A:B (the original pair and num_filter alternates) and, likewise, (num_filter + 1) versions of C:D. There are therefore (num_filter + 1)^2 ways to pair a version of A:B with a version of C:D; for each of these combinations, calculate the cosine of the corresponding row vectors in $U_k \Sigma_k$ (in our experiments, num_filter = 3, so there are sixteen combinations). Table 10 gives the cosines for the sixteen combinations.
12. Calculate relational similarity: The relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D.

The requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies, which may be introduced in step 1 and may have slipped through the filtering in step 2. Averaging the cosines, as opposed to taking their maximum, is intended to provide some resistance to noise. For quart:volume and mile:distance, the third column in Table 10 shows which alternates are used to calculate the average. For these two pairs, the average of the selected cosines is 0.677. In Table 7, we see that pumping:volume has slipped through the filtering in step 2, although it is not a good alternate for quart:volume. However, Table 10 shows that all four analogies that involve pumping:volume are dropped here, in step 12.

Table 10
The sixteen combinations and their cosines. A:B::C:D expresses the analogy "A is to B as C is to D". The third column indicates those combinations for which the cosine is greater than or equal to the cosine of the original analogy, quart:volume::mile:distance.
Steps 11 and 12 can be repeated for any two input pairs that are to be compared. This completes the description of LRA. Table 11 gives the cosines for the sample SAT question. The choice pair with the highest average cosine (the choice with the largest value in column #1), choice (b), is the solution for this question; LRA answers the question correctly. For comparison, column #2 gives the cosines for the original pairs and column #3 gives the highest cosine. For this particular SAT question, there is one choice that has the highest cosine for all three columns, choice (b), although this is not true in general. Note that the gap between the first choice (b) and the second choice (d) is largest for the average cosines (column #1). This suggests that the average of the cosines (column #1) is better at discriminating the correct choice than either the original cosine (column #2) or the highest cosine (column #3).
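To make step 12 concrete, here is a minimal sketch of the final filtering and averaging; the cosine values other than the original 0.525 are invented for illustration.

```python
def relational_similarity(original_cosine, all_cosines):
    """Step 12 (sketch): average the cosines that are greater than or
    equal to the cosine of the original pairs; weaker analogies formed
    from alternates are filtered out. all_cosines includes the original
    cosine, so the list of kept cosines is never empty."""
    kept = [c for c in all_cosines if c >= original_cosine]
    return sum(kept) / len(kept)

# 0.525 is the original cosine for quart:volume::mile:distance reported
# in the text; the other values are invented for illustration.
cosines = [0.525, 0.781, 0.640, 0.612, 0.498, 0.330, 0.701]
print(round(relational_similarity(0.525, cosines), 3))  # 0.652
```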
Experiments with Word Analogy Questions
This section presents various experiments with 374 multiple-choice SAT word analogy questions.

Baseline LRA System

Table 12 shows the performance of the baseline LRA system on the 374 SAT questions, using the parameter settings and configuration described in Section 5. LRA correctly answered 210 of the 374 questions. 160 questions were answered incorrectly and 4 questions were skipped, because the stem pair and its alternates were represented by zero vectors. The performance of LRA is significantly better than the lexicon-based approach of Veale (2004) (see Section 3.1) and the best performance using attributional similarity (see Section 2.3), with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).

As another point of reference, consider the simple strategy of always guessing the choice with the highest co-occurrence frequency. The idea here is that the words in the solution pair may occur together frequently, because there is presumably a clear and meaningful relation between the solution words, whereas the distractors may only occur together rarely, because they have no meaningful relation. This strategy is significantly worse than random guessing. The opposite strategy, always guessing the choice pair with the lowest co-occurrence frequency, is also worse than random guessing (but not significantly). It appears that the designers of the SAT questions deliberately chose distractors that would thwart these two strategies.

Table 11
Cosines for the sample SAT question given in Table 6. Column #1 gives the averages of the cosines that are greater than or equal to the original cosines (e.g., the average of the cosines that are marked "yes" in Table 10 is 0.677; see choice (b) in column #1). Column #2 gives the cosine for the original pairs (e.g., the cosine for the first pair in Table 10 is 0.525; see choice (b) in column #2). Column #3 gives the maximum cosine for the sixteen possible analogies.
With 374 questions and 6 word pairs per question (one stem and five choices), there are 2,244 pairs in the input set. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 8,976 pairs. In step 5, for each pair A:B, we add B:A, yielding 17,952 pairs. However, some pairs are dropped because they correspond to zero vectors (they do not appear together in a window of five words in the WMTS corpus). Also, a few words do not appear in Lin's thesaurus, and some word pairs appear twice in the SAT questions (e.g., lion:cat). The sparse matrix (step 7) has 17,232 rows (word pairs) and 8,000 columns (patterns), with a density of 5.8% (percentage of nonzero values).

Table 13 gives the time required for each step of LRA, a total of almost nine days. All of the steps used a single CPU on a desktop computer, except step 3, finding the phrases for each word pair, which used a 16 CPU Beowulf cluster. Most of the other steps are parallelizable; with a bit of programming effort, they could also be executed on the Beowulf cluster. All CPUs (both desktop and cluster) were 2.4 GHz Intel Xeons. The desktop computer had 2 GB of RAM and the cluster had a total of 16 GB of RAM.

LRA versus VSM

Table 14 compares LRA to the Vector Space Model with the 374 analogy questions. VSM-AV refers to the VSM using AltaVista's database as a corpus. The VSM-AV results are taken from Turney and Littman (2005). As mentioned in Section 4.2, we estimate this corpus contained about 5 × 10^11 English words at the time the VSM-AV experiments took place. VSM-WMTS refers to the VSM using the WMTS, which contains about 5 × 10^10 English words. We generated the VSM-WMTS results by adapting the VSM to the WMTS. The algorithm is slightly different from Turney and Littman (2005), because we used passage frequencies instead of document frequencies.
All three pairwise differences in recall in Table 14 are statistically significant with 95% confidence, using the Fisher Exact Test (Agresti, 1990). The pairwise differences in precision between LRA and the two VSM variations are also significant, but the difference in precision between the two VSM variations (42.4% versus 47.7%) is not significant. Although VSM-AV has a corpus ten times larger than LRA's, LRA still performs better than VSM-AV.

Table 15
Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).
Comparing VSM-AV to VSM-WMTS, the smaller corpus has reduced the score of the VSM, but much of the drop is due to the larger number of questions that were skipped (34 for VSM-WMTS versus 5 for VSM-AV). With the smaller corpus, many more of the input word pairs simply do not appear together in short phrases in the corpus. LRA is able to answer as many questions as VSM-AV, although it uses the same corpus as VSM-WMTS, because Lin's thesaurus allows LRA to substitute synonyms for words that are not in the corpus.
VSM-AV required 17 days to process the 374 analogy questions (Turney and Littman, 2005), compared to 9 days for LRA. As a courtesy to AltaVista, Turney and Littman (2005) inserted a five second delay between each query. Since the WMTS is running locally, there is no need for delays. VSM-WMTS processed the questions in only one day.
Human Performance
The average performance of college-bound senior high school students on verbal SAT questions corresponds to a recall (percent correct) of about 57% (Turney and Littman, 2005). The SAT I test consists of 78 verbal questions and 60 math questions (there is also an SAT II test, covering specific subjects, such as chemistry). Analogy questions are only a subset of the 78 verbal SAT questions. If we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal SAT I questions, then we can estimate that the average college-bound senior would correctly answer about 57% of the 374 analogy questions.
Of our 374 SAT questions, 190 are from a collection of ten official SAT tests (Claman, 2000). On this subset of the questions, LRA has a recall of 61.1%, compared to a recall of 51.1% on the other 184 questions. The 184 questions that are not from Claman (2000) seem to be more difficult. This indicates that we may be underestimating how well LRA performs, relative to college-bound senior high school students. Claman (2000) suggests that the analogy questions may be somewhat harder than other verbal SAT questions, so we may be slightly overestimating the mean human score on the analogy questions. Table 15 gives the 95% confidence intervals for LRA, VSM-AV, and VSM-WMTS, calculated by the Binomial Exact Test (Agresti, 1990). There is no significant difference between LRA and human performance, but VSM-AV and VSM-WMTS are significantly below human-level performance.
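One standard reading of the "Binomial Exact Test" is the Clopper-Pearson exact interval; that this is the variant used above is our assumption. A minimal sketch via the beta distribution:

# Sketch: exact (Clopper-Pearson) 95% confidence interval for recall,
# e.g. LRA's 210 correct answers out of 374 questions (cf. Table 15).
from scipy.stats import beta

def exact_ci(successes, n, alpha=0.05):
    lo = beta.ppf(alpha / 2, successes, n - successes + 1) if successes > 0 else 0.0
    hi = beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

lo, hi = exact_ci(210, 374)
print(f"{lo:.3f}-{hi:.3f}")   # roughly 0.510-0.612, matching Table 15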
Varying the Parameters in LRA
There are several parameters in the LRA algorithm (see Section 5.5). The parameter values were determined by trying a small number of possible values on a small set of questions that were set aside. Since LRA is intended to be an unsupervised learning algorithm, we did not attempt to tune the parameter values to maximize the precision and recall on the 374 SAT questions. We hypothesized that LRA is relatively insensitive to the values of the parameters. Table 16 shows the variation in the performance of LRA as the parameter values are adjusted. We take the baseline parameter settings (given in Section 5.5) and vary each parameter, one at a time, while holding the remaining parameters fixed at their baseline values. None of the precision and recall values are significantly different from the baseline, according to the Fisher Exact Test (Agresti, 1990), at the 95% confidence level. This supports the hypothesis that the algorithm is not sensitive to the parameter values.
Although a full run of LRA on the 374 SAT questions takes nine days, for some of the parameters it is possible to reuse cached data from previous runs. We limited the experiments with num_sim and max_phrase because caching was not as helpful for these parameters, so experimenting with them required several weeks.
Ablation Experiments
As mentioned in the introduction, LRA extends the VSM approach of Turney and Littman (2005) by (1) exploring variations on the analogies by replacing words with synonyms (step 1),
(2) automatically generating connecting patterns (step 4), and (3) smoothing the data with SVD (step 9). In this subsection, we ablate each of these three components to assess their contribution to the performance of LRA. Table 17 shows the results. Without SVD (compare column #1 to #2 in Table 17), performance drops, but the drop is not statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990). However, we hypothesize that the drop in performance would be significant with a larger set of word pairs. More word pairs would increase the sample size, which would decrease the 95% confidence interval, which would likely show that SVD is making a significant contribution. Furthermore, more word pairs would increase the matrix size, which would give SVD more leverage. For example, Landauer and Dumais (1997) apply SVD to a matrix of 30,473 columns by 60,768 rows, but our matrix here is 8,000 columns by 17,232 rows. We are currently gathering more SAT questions, to test this hypothesis.
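A minimal sketch of this kind of SVD smoothing, on a small random stand-in matrix rather than the real 17,232 × 8,000 entropy-weighted matrix:

# Sketch: truncated SVD smoothing as in steps 9 and 10, on a stand-in matrix.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 100))        # stand-in for the pair-by-pattern matrix
k = 30                            # LRA's baseline uses k = 300

U, s, Vt = np.linalg.svd(X, full_matrices=False)
projected = U[:, :k] * s[:k]      # rows of U_k Sigma_k: smoothed pair vectors

# Cosines between these projected rows equal cosines between rows of the
# rank-k reconstruction U_k Sigma_k V_k^T.
print(projected.shape)            # (200, 30)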
Without synonyms (compare column #1 to #3 in Table 17), recall drops significantly (from 56.1% to 49.5%), but the drop in precision is not significant. When the synonym component is dropped, the number of skipped questions rises from 4 to 22, which demonstrates the value of the synonym component of LRA for compensating for sparse data.
When both SVD and synonyms are dropped (compare column #1 to #4 in Table 17), the decrease in recall is significant, but the decrease in precision is not significant. Again, we believe that a larger sample size would show the drop in precision is significant.
If we eliminate both synonyms and SVD from LRA, all that distinguishes LRA from VSM-WMTS is the patterns (step 4). The VSM approach uses a fixed list of 64 patterns to generate 128 dimensional vectors (Turney and Littman, 2005), whereas LRA uses a dynamically generated set of 4,000 patterns, resulting in 8,000 dimensional vectors. We can see the value of the automatically generated patterns by comparing LRA without synonyms and SVD (column #4) to VSM-WMTS (column #5). The difference in both precision and recall is statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
The ablation experiments support the value of the patterns (step 4) and synonyms (step 1) in LRA, but the contribution of SVD (step 9) has not been proven, although we believe more data will support its effectiveness. Nonetheless, the three components together result in a 16% increase in F (compare #1 to #5).
Matrix Symmetry
We know a priori that, if A:B::C:D, then B:A::D:C. For example, "mason is to stone as carpenter is to wood" implies "stone is to mason as wood is to carpenter". Therefore a good measure of relational similarity, sim_r, should obey the following equation:

    sim_r(A:B, C:D) = sim_r(B:A, D:C)    (8)
In steps 5 and 6 of the LRA algorithm (Section 5.5), we ensure that the matrix X is symmetrical, so that equation (8) is necessarily true for LRA. The matrix is designed so that the row vector for A:B is different from the row vector for B:A only by a permutation of the elements. The same permutation distinguishes the row vectors for C:D and D:C. Therefore the cosine of the angle between A:B and C:D must be identical to the cosine of the angle between B:A and D:C (see equation (7)).
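The permutation argument is easy to check numerically: applying the same permutation to both row vectors leaves their cosine unchanged. A minimal sketch (random stand-in vectors):

# Sketch: a shared permutation leaves the cosine unchanged, which is why
# cos(A:B, C:D) = cos(B:A, D:C) in the symmetric matrix design.
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
ab, cd = rng.random(8), rng.random(8)
perm = rng.permutation(8)     # the same permutation maps A:B->B:A and C:D->D:C
assert np.isclose(cosine(ab, cd), cosine(ab[perm], cd[perm]))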
To discover the consequences of this design decision, we altered steps 5 and 6 so that symmetry is no longer preserved. In step 5, for each word pair A:B that appears in the input set, we only have one row. There is no row for B:A unless B:A also appears in the input set. Thus the number of rows in the matrix dropped from 17,232 to 8,616.
In step 6, we no longer have two columns for each pattern P, one for "word_1 P word_2" and another for "word_2 P word_1". However, to be fair, we kept the total number of columns at 8,000. In step 4, we selected the top 8,000 patterns (instead of the top 4,000), distinguishing the pattern "word_1 P word_2" from the pattern "word_2 P word_1" (instead of considering them equivalent). Thus a pattern P with a high frequency is likely to appear in two columns, in both possible orders, but a lower frequency pattern might appear in only one column, in only one possible order.
These changes resulted in a slight decrease in performance. Recall dropped from 56.1% to 55.3% and precision dropped from 56.8% to 55.9%. The decrease is not statistically significant. However, the modified algorithm no longer obeys equation (8).
Although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the SAT questions, we prefer to retain symmetry, to ensure that equation (8) is satisfied.
Note that, if A:B::C:D, it does not follow that B:A::C:D. For example, it is false that "stone is to mason as carpenter is to wood". In general (except when the semantic relations between A and B are symmetrical), we have the following inequality:

    sim_r(A:B, C:D) ≠ sim_r(B:A, C:D)    (9)
Therefore we do not want A:B and B:A to be represented by identical row vectors, although it would ensure that equation (8) is satisfied.
All Alternates versus Better Alternates
In step 12 of LRA, the relational similarity between A:B and C:D is the average of the cosines, among the (num_filter + 1)^2 cosines from step 11, that are greater than or equal to the cosine of the original pairs, A:B and C:D. That is, the average includes only those alternates that are "better" than the originals. Taking all alternates instead of the better alternates, recall drops from 56.1% to 40.4% and precision drops from 56.8% to 40.8%. Both decreases are statistically significant with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
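A minimal sketch of this averaging rule; the cosine values here are invented for illustration, and we assume the first entry is the cosine of the original pairing:

# Sketch of step 12: average only those cosines that are at least as large
# as the cosine of the original pairs (the "better" alternates).
def relational_similarity(cosines, original_cosine):
    better = [c for c in cosines if c >= original_cosine]
    return sum(better) / len(better)   # the original is always included

# 16 cosines from step 11; the first is assumed to be the original pairing.
cosines = [0.525, 0.61, 0.48, 0.70, 0.33, 0.55, 0.62, 0.41,
           0.58, 0.50, 0.66, 0.29, 0.53, 0.60, 0.44, 0.57]
print(relational_similarity(cosines, original_cosine=cosines[0]))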
Interpreting Vectors
Suppose a word pair A:B corresponds to a vector r in the matrix X. It would be convenient if inspection of r gave us a simple explanation or description of the relation between A and B. For example, suppose the word pair ostrich:bird maps to the row vector r. It would be pleasing to look in r and find that the largest element corresponds to the pattern "is the largest" (i.e., "ostrich is the largest bird"). Unfortunately, inspection of r reveals no such convenient patterns. We hypothesize that the semantic content of a vector is distributed over the whole vector; it is not concentrated in a few elements. To test this hypothesis, we modified step 10 of LRA. Instead of projecting the 8,000 dimensional vectors into the 300 dimensional space U_kΣ_k, we use the matrix U_kΣ_kV_k^T. This matrix yields the same cosines as U_kΣ_k, but preserves the original 8,000 dimensions, making it easier to interpret the row vectors. For each row vector in U_kΣ_kV_k^T, we select the N largest values and set all other values to zero. The idea here is that we will only pay attention to the N most important patterns in r; the remaining patterns will be ignored. This reduces the length of the row vectors, but the cosine is the dot product of normalized vectors (all vectors are normalized to unit length; see equation (7)), so the change to the vector lengths has no impact; only the angle of the vectors is important. If most of the semantic content is in the N largest elements of r, then setting the remaining elements to zero should have relatively little impact. Table 18 shows the performance as N varies from 1 to 3,000. The precision and recall are significantly below the baseline LRA until N ≥ 300 (95% confidence, Fisher Exact Test). In other words, for a typical SAT analogy question, we need to examine the top 300 patterns to explain why LRA selected one choice instead of another.
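A minimal sketch of the truncation used in this experiment; random stand-in vectors are used here, whereas the real row vectors come from U_kΣ_kV_k^T:

# Sketch: zero out everything except the N largest values of each row
# vector, then re-test the cosine (cf. Table 18).
import numpy as np

def keep_top_n(r, n):
    out = np.zeros_like(r)
    top = np.argsort(r)[-n:]      # indices of the N largest values
    out[top] = r[top]
    return out

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)
r1, r2 = rng.random(8000), rng.random(8000)
print(cosine(r1, r2), cosine(keep_top_n(r1, 300), keep_top_n(r2, 300)))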
We are currently working on an extension of LRA that will explain with a single pattern why one choice is better than another. We have had some promising results, but this work is not yet mature. However, we can confidently claim that interpreting the vectors is not trivial.
Manual Patterns versus Automatic Patterns
Turney and Littman (2005) used 64 manually generated patterns whereas LRA uses 4,000 automatically generated patterns. We know from Section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns. It may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns. If we require an exact match, 50 of the 64 manual patterns can be found in the automatic patterns. If we are lenient about wildcards, and count the pattern "not the" as matching "* not the" (for example), then 60 of the 64 manual patterns appear within the automatic patterns. This suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns, rather than a qualitative difference in the patterns. Turney and Littman (2005) point out that some of their 64 patterns have been used by other researchers. For example, Hearst (1992) used the pattern "such as" to discover hyponyms and Berland and Charniak (1999) used the pattern "of the" to discover meronyms. Both of these patterns are included in the 4,000 patterns automatically generated by LRA.
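The exact lenient-matching rule is not specified above, so the following minimal sketch is our reading: drop wildcard tokens from the automatic pattern, then test for a substring match.

# Sketch: does a manual pattern occur within an automatically generated
# pattern, treating "*" as a wildcard? (Our reading of "lenient" matching.)
def lenient_match(manual, automatic):
    stripped = " ".join(tok for tok in automatic.split() if tok != "*")
    return manual in stripped

print(lenient_match("not the", "* not the"))   # True
print(lenient_match("such as", "in * as"))     # False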
The novelty in Turney and Littman (2005) is that their patterns are not used to mine text for instances of word pairs that fit the patterns (Hearst, 1992; Berland and Charniak, 1999); instead, they are used to gather frequency data for building vectors that represent the relation between a given pair of words. The results in Section 6.8 show that a vector contains more information than any single pattern or small set of patterns; a vector is a distributed representation. LRA is distinct from Hearst (1992) and Berland and Charniak (1999) in its focus on distributed representations, which it shares with Turney and Littman (2005), but LRA goes beyond Turney and Littman (2005) by finding patterns automatically. Riloff and Jones (1999) and Yangarber (2003) also find patterns automatically, but their goal is to mine text for instances of word pairs; the same goal as Hearst (1992) and Berland and Charniak (1999). Because LRA uses patterns to build distributed vector representations, it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of Hearst (1992), Berland and Charniak (1999), Riloff and Jones (1999), and Yangarber (2003). Therefore LRA can simply select the highest frequency patterns (step 4 in Section 5.5); it does not need the more sophisticated selection algorithms of Riloff and Jones (1999) and Yangarber (2003).
Experiments with Noun-Modifier Relations
This section describes experiments with 600 noun-modifier pairs, hand-labeled with 30 classes of semantic relations (Nastase and Szpakowicz, 2003). In the following experiments, LRA is used with the baseline parameter values, exactly as described in Section 5.5. No adjustments were made to tune LRA to the noun-modifier pairs. LRA is used as a distance (nearness) measure in a single nearest neighbour supervised learning algorithm.
Classes of Relations
The following experiments use the 600 labeled noun-modifier pairs of Nastase and Szpakowicz (2003). This data set includes information about the part of speech and WordNet synset (synonym set; i.e., word sense tag) of each word, but our algorithm does not use this information. Table 19 lists the 30 classes of semantic relations. The table is based on Appendix A of Nastase and Szpakowicz (2003), with some simplifications. The original table listed several semantic relations for which there were no instances in the data set. These were relations that are typically expressed with longer phrases (three or more words), rather than noun-modifier word pairs. For clarity, we decided not to include these relations in Table 19.
In this table, H represents the head noun and M represents the modifier. For example, in "flu virus", the head noun (H) is "virus" and the modifier (M) is "flu" (*). In English, the modifier (typically a noun or adjective) usually precedes the head noun. In the description of purpose, V represents an arbitrary verb. In "concert hall", the hall is for presenting concerts (V is "present") or holding concerts (V is "hold") ( †).
Nastase and Szpakowicz (2003) organized the relations into groups. The five capitalized terms in the "Relation" column of Table 19 are the names of five groups of semantic relations. (The original table had a sixth group, but there are no examples of this group in the data set.) We make use of this grouping in the following experiments.
Baseline LRA with Single Nearest Neighbour
The following experiments use single nearest neighbour classification with leave-one-out cross-validation. For leave-one-out cross-validation, the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers. The data set is split 600 times, so that each noun-modifier gets a turn as the testing word pair. The predicted class of the testing pair is the class of the single nearest neighbour in the training set. As the measure of nearness, we use LRA to calculate the relational similarity between the testing pair and the training pairs. The single nearest neighbour algorithm is a supervised learning algorithm (i.e., it requires a training set of labeled data), but we are using LRA to measure the distance between a pair and its potential neighbours, and LRA is itself determined in an unsupervised fashion (i.e., LRA does not need labeled data).
Each SAT question has five choices, so answering 374 SAT questions required calculating 374 × 5 × 16 = 29,920 cosines. The factor of 16 comes from the alternate pairs, step 11 in LRA. With the noun-modifier pairs, using leave-one-out cross-validation, each test pair has 599 choices, so an exhaustive application of LRA would require calculating 600 × 599 × 16 = 5,750,400 cosines. To reduce the amount of computation required, we first find the 30 nearest neighbours for each pair, ignoring the alternate pairs (600 × 599 = 359,400 cosines), and then apply the full LRA, including the alternates, to just those 30 neighbours (600 × 30 × 16 = 288,000 cosines), which requires calculating only 359,400 + 288,000 = 647,400 cosines.
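A minimal sketch of this two-stage procedure; cheap_sim and full_lra_sim are hypothetical stand-ins for the two similarity computations (without and with alternates):

# Sketch of the two-stage shortcut: a cheap pass (no alternates) picks a
# shortlist of candidate neighbours, then full LRA rescores only those.
def classify(test_pair, training_set, cheap_sim, full_lra_sim, shortlist=30):
    """training_set is a list of (word_pair, class_label) tuples."""
    candidates = sorted(training_set,
                        key=lambda t: cheap_sim(test_pair, t[0]),
                        reverse=True)[:shortlist]          # 599 cheap cosines
    best = max(candidates, key=lambda t: full_lra_sim(test_pair, t[0]))
    return best[1]                                         # neighbour's class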
There are 600 word pairs in the input set for LRA. In step 2, introducing alternate pairs multiplies the number of pairs by four, resulting in 2,400 pairs. In step 5, for each pair A:B, we add B:A, yielding 4,800 pairs. However, some pairs are dropped because they correspond to zero vectors and a few words do not appear in Lin's thesaurus. The sparse matrix (step 7) has 4,748 rows and 8,000 columns, with a density of 8.4%.
Following Turney and Littman (2005), we evaluate the performance by accuracy and also by the macroaveraged F measure (Lewis, 1991). Macroaveraging calculates the precision, recall, and F for each class separately, and then calculates the average across all classes. Microaveraging combines the true positive, false positive, and false negative counts for all of the classes, and then calculates precision, recall, and F from the combined counts. Macroaveraging gives equal weight to all classes, but microaveraging gives more weight to larger classes. We use macroaveraging (giving equal weight to all classes), because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus.
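A minimal sketch contrasting the two averaging schemes; the per-class (tp, fp, fn) counts here are invented for illustration:

# Sketch: macroaveraged vs. microaveraged F over per-class counts.
def f_measure(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

counts = {"agent": (30, 10, 12), "cause": (5, 3, 9), "purpose": (12, 8, 6)}

macro_f = sum(f_measure(*c) for c in counts.values()) / len(counts)  # equal class weight
micro_f = f_measure(*map(sum, zip(*counts.values())))                # pooled counts
print(f"macro F = {macro_f:.3f}, micro F = {micro_f:.3f}")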
Classification with 30 distinct classes is a hard problem. To make the task easier, we can collapse the 30 classes to 5 classes, using the grouping that is given in Table 19. For example, agent and beneficiary both collapse to participant. On the 30 class problem, LRA with the single nearest neighbour algorithm achieves an accuracy of 39.8% (239/600) and a macroaveraged F of 36.6%. Always guessing the majority class would result in an accuracy of 8.2% (49/600). On the 5 class problem, the accuracy is 58.0% (348/600) and the macroaveraged F is 54.6%. Always guessing the majority class would give an accuracy of 43.3% (260/600). For both the 30 class and 5 class problems, LRA's accuracy is significantly higher than guessing the majority class, with 95% confidence, according to the Fisher Exact Test (Agresti, 1990).
LRA versus VSM

Table 20 shows the performance of LRA and VSM on the 30 class problem. VSM-AV is VSM with the AltaVista corpus and VSM-WMTS is VSM with the WMTS corpus. The results for VSM-AV are taken from Turney and Littman (2005). All three pairwise differences in the three F measures are statistically significant at the 95% level, according to the Paired T-Test (Feelders and Verkooijen, 1995). The accuracy of LRA is significantly higher than the accuracies of VSM-AV and VSM-WMTS, according to the Fisher Exact Test (Agresti, 1990), but the difference between the two VSM accuracies is not significant. Table 21 compares the performance of LRA and VSM on the 5 class problem. The accuracy and F measure of LRA are significantly higher than the accuracies and F measures of VSM-AV and VSM-WMTS, but the differences between the two VSM accuracies and F measures are not significant.
Discussion
The experimental results in Sections 6 and 7 demonstrate that LRA performs significantly better than the VSM, but it is also clear that there is room for improvement. The accuracy might not yet be adequate for practical applications, although past work has shown that it is possible to adjust the tradeoff of precision versus recall (Turney and Littman, 2005). For some of the applications, such as information extraction, LRA might be suitable if it is adjusted for high precision, at the expense of low recall.
Another limitation is speed; it took almost nine days for LRA to answer 374 analogy questions. However, with progress in computer hardware, speed will gradually become less of a concern. Also, the software has not been optimized for speed; there are several places where the efficiency could be increased and many operations are parallelizable. It may also be possible to precompute much of the information for LRA, although this would require substantial changes to the algorithm.
The difference in performance between VSM-AV and VSM-WMTS shows that VSM is sensitive to the size of the corpus. Although LRA is able to surpass VSM-AV when the WMTS corpus is only about one tenth the size of the AV corpus, it seems likely that LRA would perform better with a larger corpus. The WMTS corpus requires one terabyte of hard disk space, but progress in hardware will likely make ten or even one hundred terabytes affordable in the relatively near future.
For noun-modifier classification, more labeled data should yield performance improvements. With 600 noun-modifier pairs and 30 classes, the average class has only 20 examples. We expect that the accuracy would improve substantially with five or ten times more examples. Unfortunately, it is time consuming and expensive to acquire hand-labeled data.
Another issue with noun-modifier classification is the choice of classification scheme for the semantic relations. The 30 classes of Nastase and Szpakowicz (2003) might not be the best scheme. Other researchers have proposed different schemes (Vanderwende, 1994; Barker and Szpakowicz, 1998; Rosario and Hearst, 2001; Rosario, Hearst, and Fillmore, 2002). It seems likely that some schemes are easier for machine learning than others. For some applications, 30 classes may not be necessary; the 5 class scheme may be sufficient.
LRA, like VSM, is a corpus-based approach to measuring relational similarity. Past work suggests that a hybrid approach, combining multiple modules, some corpusbased, some lexicon-based, will surpass any purebred approach (Turney et al., 2003). In future work, it would be natural to combine the corpus-based approach of LRA with the lexicon-based approach of Veale (2004), perhaps using the combination method of Turney et al. (2003).
The Singular Value Decomposition is only one of many methods for handling sparse, noisy data. We have also experimented with Nonnegative Matrix Factorization (NMF) (Lee and Seung, 1999), Probabilistic Latent Semantic Analysis (PLSA) (Hofmann, 1999), Kernel Principal Components Analysis (KPCA) (Scholkopf, Smola, and Muller, 1997), and Iterative Scaling (IS) (Ando, 2000). We had some interesting results with small matrices (around 2,000 rows by 1,000 columns), but none of these methods seemed substantially better than SVD and none of them scaled up to the matrix sizes we are using here (e.g., 17,232 rows and 8,000 columns; see Section 6.1).
In step 4 of LRA, we simply select the top num_patterns most frequent patterns and discard the remaining patterns. Perhaps a more sophisticated selection algorithm would improve the performance of LRA. We have tried a variety of ways of selecting patterns, but it seems that the method of selection has little impact on performance. We hypothesize that the distributed vector representation is not sensitive to the selection method, but it is possible that future work will find a method that yields significant improvement in performance.
Conclusion
This paper has introduced a new method for calculating relational similarity, Latent Relational Analysis. The experiments demonstrate that LRA performs better than the VSM approach, when evaluated with SAT word analogy questions and with the task of classifying noun-modifier expressions. The VSM approach represents the relation between a pair of words with a vector, in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus. LRA extends this approach in three ways: (1) the patterns are generated dynamically from the corpus, (2) SVD is used to smooth the data, and (3) a thesaurus is used to explore variations of the word pairs. With the WMTS corpus (about 5 × 10^10 English words), LRA achieves an F of 56.5%, whereas the F of VSM is 40.3%.

We have presented several examples of the many potential applications for measures of relational similarity. Just as attributional similarity measures have proven to have many practical uses, we expect that relational similarity measures will soon become widely used. Gentner et al. (2001) argue that relational similarity is essential to understanding novel metaphors (as opposed to conventional metaphors). Many researchers have argued that metaphor is the heart of human thinking (Lakoff and Johnson, 1980; Hofstadter and the Fluid Analogies Research Group, 1995; Gentner et al., 2001; French, 2002). We believe that relational similarity plays a fundamental role in the mind and therefore relational similarity measures could be crucial for artificial intelligence.

In future work, we plan to investigate some potential applications for LRA. It is possible that the error rate of LRA is still too high for practical applications, but the fact that LRA matches average human performance on SAT analogy questions is encouraging.
Table 1. An example of a typical SAT question, from the collection of 374 questions.

Stem:     mason:stone
Choices:  (a) teacher:chalk
          (b) carpenter:wood
          (c) soldier:gun
          (d) photograph:camera
          (e) book:word
Solution: (b) carpenter:wood
Table 2. An example of a typical TOEFL question, from the collection of 80 questions.

Stem:     levied
Choices:  (a) imposed
          (b) believed
          (c) requested
          (d) correlated
Solution: (a) imposed
Table 3. Performance of attributional similarity measures on the 80 TOEFL questions. (The average non-English US college applicant's performance is included in the bottom row, for comparison.)

Reference                      Description                   Percent Correct
Jarmasz and Szpakowicz (2003)  best lexicon-based algorithm  78.75
Terra and Clarke (2003)        best corpus-based algorithm   81.25
Turney et al. (2003)           best hybrid algorithm         97.50
Landauer and Dumais (1997)     average human score           64.50
Table 4. Performance of attributional similarity measures on the 374 SAT questions. Precision, recall, and F are reported as percentages. (The bottom two rows are not attributional similarity measures. They are included for comparison.)

Algorithm                    Type              Precision  Recall  F
Hirst and St-Onge (1998)     lexicon-based     34.9       32.1    33.4
Jiang and Conrath (1997)     hybrid            29.8       27.3    28.5
Leacock and Chodorow (1998)  lexicon-based     32.8       31.3    32.0
Lin (1998b)                  hybrid            31.2       27.3    29.1
Resnik (1995)                hybrid            35.7       33.2    34.4
Turney (2001)                corpus-based      35.0       35.0    35.0
Turney and Littman (2005)    relational (VSM)  47.7       47.1    47.4
random                       random            20.0       20.0    20.0
    recall = (number of correct guesses) / (maximum possible number correct)    (3)
Table 5. Metaphorical sentences from Lakoff and Johnson (1980), rendered as SAT-style verbal analogies.

Metaphorical sentence                SAT-style verbal analogy
He shot down all of my arguments.    aircraft:shoot down::argument:refute
I demolished his argument.           building:demolish::argument:refute
You need to budget your time.        money:budget::time:schedule
I've invested a lot of time in her.  money:invest::time:allocate
My mind just isn't operating today.  machine:operate::mind:think
Life has cheated me.                 charlatan:cheat::life:disappoint
Inflation is eating up our profits.
Table 6. Some examples of phrases that contain quart:volume. Suffixes are ignored when searching for matching phrases in the WMTS corpus. At least one word must occur between quart and volume. At most max_phrase words can appear in a phrase.

Table 7. Alternates for the original pair quart:volume, with their similarity to the original, their corpus frequency, and the filtering decision (step 2).

Word pair          Similarity  Frequency  Filtering step
quart:volume       NA          632        accept (original pair)
pint:volume        0.210       372
gallon:volume      0.159       1500       accept (top alternate)
liter:volume       0.122       3323       accept (top alternate)
squirt:volume      0.084       54
pail:volume        0.084       28
vial:volume        0.084       373
pumping:volume     0.073       1386       accept (top alternate)
ounce:volume       0.071       430
spoonful:volume    0.070       42
tablespoon:volume  0.069       96
quart:turnover     0.229       0
quart:output       0.225       34
quart:export       0.206       7
quart:value        0.203       266
quart:import       0.186       16
quart:revenue      0.185       0
quart:sale         0.169       119
quart:investment   0.161       11
quart:earnings     0.156       0
quart:profit       0.156       24
Table 8. ... is 0.781; see choice (b) in column #3).

                              Average  Original  Highest
                              cosines  cosines   cosines
                              #1       #2        #3
Stem:     quart:volume
Choices:  (a) day:night       0.374    0.327     0.443
          (b) mile:distance   0.677    0.525     0.781
          (c) decade:century  0.389    0.327     0.470
          (d) friction:heat   0.428    0.336     0.552
          (e) part:whole      0.370    0.330     0.408
Solution: (b) mile:distance   0.677    0.525     0.781
Gap:      (b)-(d)             0.249    0.189     0.229
Table 12. Performance of LRA on the 374 SAT questions. Precision, recall, and F are reported as percentages. (The bottom five rows are included for comparison.)

Algorithm                        Precision  Recall  F
LRA                              56.8       56.1    56.5
Veale (2004)                     42.8       42.8    42.8
best attributional similarity    35.0       35.0    35.0
random guessing                  20.0       20.0    20.0
lowest co-occurrence frequency   16.8       16.8    16.8
highest co-occurrence frequency  11.8       11.8    11.8
Table 13. LRA elapsed run time.

Step   Description                      Time H:M:S  Hardware
1      Find alternates                  24:56:00    1 CPU
2      Filter alternates                0:00:02     1 CPU
3      Find phrases                     109:52:00   16 CPUs
4      Find patterns                    33:41:00    1 CPU
5      Map pairs to rows                0:00:02     1 CPU
6      Map patterns to columns          0:00:02     1 CPU
7      Generate a sparse matrix         38:07:00    1 CPU
8      Calculate entropy                0:11:00     1 CPU
9      Apply SVD                        0:43:28     1 CPU
10     Projection                       0:08:00     1 CPU
11     Evaluate alternates              2:11:00     1 CPU
12     Calculate relational similarity  0:00:02     1 CPU
Total                                   209:49:36
Table 14. LRA versus VSM with 374 SAT analogy questions.

Algorithm  Correct  Incorrect  Skipped  Precision  Recall  F
VSM-AV     176      193        5        47.7       47.1    47.4
VSM-WMTS   144      196        34       42.4       38.5    40.3
LRA        210      160        4        56.8       56.1    56.5
Table 15. Comparison with human SAT performance. The last column in the table indicates whether (YES) or not (NO) the average human performance (57%) falls within the 95% confidence interval of the corresponding algorithm's performance. The confidence intervals are calculated using the Binomial Exact Test (Agresti, 1990).

System    Recall       95% confidence       Human-level
          (% correct)  interval for recall  (57%)
VSM-AV    47.1         42.2-52.5            NO
VSM-WMTS  38.5         33.5-43.6            NO
LRA       56.1         51.0-61.2            YES
Table 16. Variation in performance with different parameter values. The Baseline column marks the baseline parameter values. The Step column gives the step number in Section 5.5 where each parameter is discussed.

Parameter     Baseline  Value  Step  Precision  Recall  F
num_sim                 5      1     54.2       53.5    53.8
num_sim       ⇒         10     1     56.8       56.1    56.5
num_sim                 15     1     54.1       53.5    53.8
max_phrase              4      2     55.8       55.1    55.5
max_phrase    ⇒         5      2     56.8       56.1    56.5
max_phrase              6      2     56.2       55.6    55.9
num_filter              1      2     54.3       53.7    54.0
num_filter              2      2     55.7       55.1    55.4
num_filter    ⇒         3      2     56.8       56.1    56.5
num_filter              4      2     55.7       55.1    55.4
num_filter              5      2     54.3       53.7    54.0
num_patterns            1000   4     55.9       55.3    55.6
num_patterns            2000   4     57.6       57.0    57.3
num_patterns            3000   4     58.4       57.8    58.1
num_patterns  ⇒         4000   4     56.8       56.1    56.5
num_patterns            5000   4     57.0       56.4    56.7
num_patterns            6000   4     57.0       56.4    56.7
num_patterns            7000   4     58.1       57.5    57.8
k                       100    10    55.7       55.1    55.4
k             ⇒         300    10    56.8       56.1    56.5
k                       500    10    57.6       57.0    57.3
k                       700    10    56.5       55.9    56.2
k                       900    10    56.2       55.6    55.9
Table 17. Results of ablation experiments.

           LRA baseline  LRA     LRA          LRA no SVD,
           system        no SVD  no synonyms  no synonyms  VSM-WMTS
           #1            #2      #3           #4           #5
Correct    210           198     185          178          144
Incorrect  160           172     167          173          196
Skipped    4             4       22           23           34
Precision  56.8          53.5    52.6         50.7         42.4
Recall     56.1          52.9    49.5         47.6         38.5
F          56.5          53.2    51.0         49.1         40.3
Table 18. Performance as a function of N.

N     Correct  Incorrect  Skipped  Precision  Recall  F
1     114      179        81       38.9       30.5    34.2
3     146      206        22       41.5       39.0    40.2
10    167      201        6        45.4       44.7    45.0
30    174      196        4        47.0       46.5    46.8
100   178      192        4        48.1       47.6    47.8
300   192      178        4        51.9       51.3    51.6
1000  198      172        4        53.5       52.9    53.2
3000  207      163        4        55.9       55.3    55.6
Table 20. Comparison of LRA and VSM on the 30 class problem.

           VSM-AV  VSM-WMTS  LRA
Correct    167     148       239
Incorrect  433     452       361
Total      600     600       600
Accuracy   27.8    24.7      39.8
Precision  27.9    24.0      41.0
Recall     26.8    20.9      35.9
F          26.5    20.3      36.6

Table 21. Comparison of LRA and VSM on the 5 class problem.

           VSM-AV  VSM-WMTS  LRA
Correct    274     264       348
Incorrect  326     336       252
Total      600     600       600
Accuracy   45.7    44.0      58.0
Precision  43.4    40.2      55.9
Recall     43.1    41.4      53.6
F          43.2    40.6      54.6
Footnotes

1. The College Board eliminated analogies from the SAT in 2005, apparently because it was believed that analogy questions discriminate against minorities, although it has been argued by liberals (Goldenberg, 2005) that dropping analogy questions has increased discrimination against minorities and by conservatives (Kurtz, 2002) that it has decreased academic standards. Analogy questions remain an important component in many other tests, such as the GRE.
2. See http://www.d.umn.edu/~tpederse/similarity.html.
3. See http://www.altavista.com/robots.txt for AltaVista's current policy towards "robots" (software for automatically gathering web pages or issuing batches of queries). The protocol of the "robots.txt" file is explained in http://www.robotstxt.org/wc/robots.html.
4. See http://multitext.uwaterloo.ca/.
5. The online demonstration is at http://www.cs.ualberta.ca/~lindek/demos/depsim.htm and the downloadable version is at http://armena.cs.ualberta.ca/lindek/downloads/sims.lsp.gz.
6. SVDLIBC is available at http://tedlab.mit.edu/~dr/SVDLIBC/ and SVDPACKC is available at http://www.netlib.org/svdpack/.
Acknowledgments

Thanks to Michael Littman for sharing the 374 SAT analogy questions and for inspiring me to tackle them. Thanks to Vivi Nastase and Stan Szpakowicz for sharing their 600 classified noun-modifier phrases. Thanks to Egidio Terra, Charlie Clarke, and the School of Computer Science of the University of Waterloo, for giving us a copy of the Waterloo MultiText System and their Terabyte Corpus. Thanks to Dekang Lin for making his Dependency-Based Word Similarity lexicon available online. Thanks to Doug Rohde for SVDLIBC and Michael Berry for SVDPACK. Thanks to Ted Pedersen for making his WordNet::Similarity package available. Thanks to Joel Martin for comments on the paper. Thanks to the anonymous reviewers of Computational Linguistics for their very helpful comments and suggestions.
Agresti, Alan. 1990. Categorical Data Analysis. Wiley.

Ando, Rie Kubota. 2000. Latent semantic space: Iterative scaling improves precision of inter-document similarity measurement. In Proceedings of the 23rd Annual ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR-2000), pages 216-223.

Baeza-Yates, Ricardo A. and Berthier A. Ribeiro-Neto. 1999. Modern Information Retrieval. ACM Press.

Banerjee, Satanjeev and Ted Pedersen. 2003. Extended gloss overlaps as a measure of semantic relatedness. In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-03), pages 805-810, Acapulco, Mexico.

Barker, Ken and Stan Szpakowicz. 1998. Semi-automatic recognition of noun modifier relationships. In Christian Boitet and Pete Whitelock, editors, Proceedings of the Thirty-Sixth Annual Meeting of the Association for Computational Linguistics and Seventeenth International Conference on Computational Linguistics (COLING-ACL'98), pages 96-102, San Francisco, California. Morgan Kaufmann Publishers.

Berland, Matthew and Eugene Charniak. 1999. Finding parts in very large corpora. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics (ACL '99), pages 57-64, New Brunswick, NJ.

Berry, Michael W. 1992. Large scale singular value computations. International Journal of Supercomputer Applications, 6(1):13-49.

Budanitsky, Alexander and Graeme Hirst. 2001. Semantic distance in WordNet: An experimental, application-oriented evaluation of five measures. In Proceedings of the Workshop on WordNet and Other Lexical Resources, Second Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-2001), pages 29-34, Pittsburgh, PA.

Chiarello, Christine, Curt Burgess, Lorie Richards, and Alma Pollock. 1990. Semantic and associative priming in the cerebral hemispheres: Some words do, some words don't ... sometimes, some places. Brain and Language, 38:75-104.

Claman, Cathy. 2000. 10 Real SATs. College Entrance Examination Board.

Clarke, Charles L.A., Gordon V. Cormack, and Christopher R. Palmer. 1998. An overview of MultiText. ACM SIGIR Forum, 32(2):14-15.

Daganzo, Carlos F. 1994. The cell transmission model: A dynamic representation of highway traffic consistent with the hydrodynamic theory. Transportation Research Part B: Methodological, 28(4):269-287.

Deerwester, Scott C., Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science (JASIS), 41(6):391-407.

Dolan, William B. 1995. Metaphor as an emergent property of machine-readable dictionaries. In Proceedings of the AAAI 1995 Spring Symposium Series: Representation and Acquisition of Lexical Knowledge: Polysemy, Ambiguity and Generativity, pages 27-32.

Dumais, Susan T. 1990. Enhancing performance in latent semantic indexing (LSI) retrieval. Technical Report TM-ARH-017527, Bellcore, Morristown, NJ.

Dumais, Susan T. 1993. Latent semantic indexing (LSI) and TREC-2. In D.K. Harman, editor, Proceedings of the Second Text REtrieval Conference (TREC-2), pages 105-115. National Institute of Standards and Technology.

Falkenhainer, Brian, Kenneth D. Forbus, and Dedre Gentner. 1989. The structure-mapping engine: Algorithm and examples. Artificial Intelligence, 41(1):1-63.

Federici, Stefano, Simonetta Montemagni, and Vito Pirrelli. 1997. Inferring semantic similarity from distributional evidence: An analogy-based approach to word sense disambiguation. In Proceedings of the ACL/EACL Workshop on Automatic Information Extraction and Building of Lexical Semantic Resources for NLP Applications, pages 90-97, Madrid, Spain.

Feelders, Ad and William Verkooijen. 1995. Which method learns the most from data? Methodological issues in the analysis of comparative studies. In Fifth International Workshop on Artificial Intelligence and Statistics, pages 219-225, Ft. Lauderdale, Florida.

Fellbaum, Christiane, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press.

French, Robert M. 2002. The computational modeling of analogy-making. Trends in Cognitive Sciences, 6(5):200-205.

Gentner, Dedre. 1983. Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7(2):155-170.

Gentner, Dedre, Brian Bowdle, Phillip Wolff, and Consuelo Boronat. 2001. Metaphor is like analogy. In Dedre Gentner, Keith J. Holyoak, and Boicho N. Kokinov, editors, The Analogical Mind: Perspectives from Cognitive Science, pages 199-253, Cambridge, MA. MIT Press.

Gildea, Daniel and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28(3):245-288.

Girju, Roxana, Adriana Badulescu, and Dan I. Moldovan. 2003. Learning semantic constraints for the automatic discovery of part-whole relations. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2003), pages 80-87.

Goldenberg, David. 2005. The emperor's new clothes: Undressing the new and unimproved SAT. Gelf Magazine, March. http://www.gelfmagazine.com/mt/archives/the_emperors_new_clothes.html.

Golub, Gene H. and Charles F. Van Loan. 1996. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, third edition.

Harman, Donna. 1986. An experimental study of factors important in document ranking. In Proceedings of the Ninth Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'86), pages 186-193, Pisa, Italy.

Hearst, Marti A. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the Fourteenth International Conference on Computational Linguistics, pages 539-545, Nantes, France.

Hirst, Graeme and David St-Onge. 1998. Lexical chains as representations of context for the detection and correction of malapropisms. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 305-332. MIT Press.

Hofmann, Thomas. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual ACM Conference on Research and Development in Information Retrieval (SIGIR '99), pages 50-57, Berkeley, California, August.

Hofstadter, Douglas and the Fluid Analogies Research Group. 1995. Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought. Basic Books, New York, NY.

Jarmasz, Mario and Stan Szpakowicz. 2003. Roget's thesaurus and semantic similarity. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-03), pages 212-219, Borovets, Bulgaria.

Jiang, Jay J. and David W. Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings of the International Conference on Research in Computational Linguistics (ROCLING X), pages 19-33, Tapei, Taiwan.

Kurtz, Stanley. 2002. Testing debate. National Review Magazine, August. http://www.nationalreview.com/kurtz/kurtz082102.asp.

Lakoff, George and Mark Johnson. 1980. Metaphors We Live By. University of Chicago Press, Chicago, IL.

Landauer, Thomas K. and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of the acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240.

Lapata, Mirella and Frank Keller. 2004. The web as a baseline: Evaluating the performance of unsupervised web-based models for a range of NLP tasks. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2004), pages 121-128.

Lauer, Mark. 1995. Designing Statistical Language Learners: Experiments on Compound Nouns. Ph.D. thesis, Macquarie University.

Leacock, Claudia and Martin Chodorow. 1998. Combining local context and WordNet similarity for word sense identification. In Christiane Fellbaum, editor, WordNet: An Electronic Lexical Database, pages 265-283. MIT Press.

Lee, Daniel D. and H. Sebastian Seung. 1999. Learning the parts of objects by nonnegative matrix factorization. Nature, 401:788-791.

Lesk, Michael E. 1969. Word-word associations in document retrieval systems. American Documentation, 20(1):27-38.

Lesk, Michael E. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of ACM SIGDOC '86, pages 24-26.

Lewis, David D. 1991. Evaluating text categorization. In Proceedings of the Speech and Natural Language Workshop, pages 312-318, Asilomar, CA. Morgan Kaufmann.

Lin, Dekang. 1998a. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and the 17th International Conference on Computational Linguistics (COLING-ACL '98), pages 768-774, Montreal, Canada.

Lin, Dekang. 1998b. An information-theoretic definition of similarity. In Proceedings of the 15th International Conference on Machine Learning (ICML '98), pages 296-304. Morgan Kaufmann, San Francisco, CA.

Marx, Zvika, Ido Dagan, Joachim Buhmann, and Eli Shamir. 2002. Coupled clustering: A method for detecting structural correspondence. Journal of Machine Learning Research, 3:747-780.

Medin, Douglas L., Robert L. Goldstone, and Dedre Gentner. 1990. Similarity involving attributes and relations: Judgments of similarity and difference are not inverses. Psychological Science, 1(1):64-69.

Moldovan, Dan, Adriana Badulescu, Marta Tatu, Daniel Antohe, and Roxana Girju. 2004. Models for the semantic classification of noun phrases. In Proceedings of the Computational Lexical Semantics Workshop at HLT-NAACL 2004, pages 60-67, Boston, MA.

Morris, Jane and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1):21-48.

Nastase, Vivi and Stan Szpakowicz. 2003. Exploring noun-modifier semantic relations. In Fifth International Workshop on Computational Semantics (IWCS-5), pages 285-301, Tilburg, The Netherlands.

Pantel, Patrick and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 613-619.

Rada, Roy, Hafedh Mili, Ellen Bicknell, and Maria Blettner. 1989. Development and application of a metric on semantic nets. IEEE Transactions on Systems, Man, and Cybernetics, 19(1):17-30.

Rehder, Bob, M.E. Schreiner, Michael B.W. Wolfe, Darrell Laham, Thomas K. Landauer, and Walter Kintsch. 1998. Using latent semantic analysis to assess knowledge: Some technical considerations. Discourse Processes, 25:337-354.

Reitman, Walter R. 1965. Cognition and Thought: An Information Processing Approach. John Wiley and Sons, New York, NY.

Resnik, Philip. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95), pages 448-453, San Mateo, CA. Morgan Kaufmann.

Riloff, Ellen and Rosie Jones. 1999. Learning dictionaries for information extraction by multi-level bootstrapping. In Proceedings of the Sixteenth National Conference on Artificial Intelligence (AAAI-99), pages 474-479.

Rosario, Barbara and Marti Hearst. 2001. Classifying the semantic relations in noun compounds via a domain-specific lexical hierarchy. In Proceedings of the 2001 Conference on Empirical Methods in Natural Language Processing (EMNLP-01), pages 82-90.

Rosario, Barbara, Marti Hearst, and Charles Fillmore. 2002. The descent of hierarchy, and selection in relational semantics. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL '02), pages 417-424, Philadelphia, PA.

Ruge, Gerda. 1992. Experiments on linguistically-based term associations. Information Processing and Management, 28(3):317-332.
Termweighting approaches in automatic text retrieval. Information Processing and Management. Gerard Salton, Addison-Wesley, M A Reading, Gerard Salton, Chris Buckley, ; Salton, Michael J Gerard, Mcgill ; Scholkopf, Alexander J Bernhard, Klaus-Robert Smola, ; Muller, Terra, Charles L A Egidio, Clarke, Proceedings of the Human Language Technology and North American Chapter of Association of Computational Linguistics Conference. the Human Language Technology and North American Chapter of Association of Computational Linguistics ConferenceNew York, NY; BerlinMcGraw-Hill24Automatic Text Processing: The Transformation, Analysis, and Retrieval of Information by ComputerSalton, Gerard. 1989. Au- tomatic Text Processing: The Transfor- mation, Analysis, and Retrieval of Infor- mation by Computer. Addison-Wesley, Reading, MA. [Salton and Buckley1988] Salton, Gerard and Chris Buckley. 1988. Term- weighting approaches in automatic text retrieval. Information Processing and Management, 24(5):513-523. [Salton and McGill1983] Salton, Gerard and Michael J. McGill. 1983. Intro- duction to Modern Information Retrieval. McGraw-Hill, New York, NY. [Scholkopf, Smola, and Muller1997] Scholkopf, Bernhard, Alexander J. Smola, and Klaus-Robert Muller. 1997. Kernel principal component analysis. In Proceedings of the In- ternational Conference on Artificial Neural Networks (ICANN-1997), pages 583-588, Berlin. [Terra and Clarke2003] Terra, Egidio and Charles L.A. Clarke. 2003. Frequency estimates for statistical word similar- ity measures. In Proceedings of the Human Language Technology and North American Chapter of Association of Com- putational Linguistics Conference 2003 (HLT/NAACL 2003), pages 244-251.
Mining the Web for synonyms: PMI-IR versus LSA on TOEFL. Peter D Turney, Proceedings of the Twelfth European Conference on Machine Learning. the Twelfth European Conference on Machine LearningBerlinSpringerTurney, Peter D. 2001. Min- ing the Web for synonyms: PMI-IR versus LSA on TOEFL. In Proceed- ings of the Twelfth European Conference on Machine Learning, pages 491-502, Berlin. Springer.
Peter D Turney, Thumbs up or thumbs down? Sefor Computational Linguistics (ACL'02). Turney, Peter D. 2002. Thumbs up or thumbs down? Se- for Computational Lin- guistics (ACL'02), pages 417-424.
Measuring semantic similarity by latent relational analysis. Peter D Turney, Proceedings of the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05). the Nineteenth International Joint Conference on Artificial Intelligence (IJCAI-05)Edinburgh, ScotlandTurney, Peter D. 2005. Mea- suring semantic similarity by latent relational analysis. In Proceedings of the Nineteenth International Joint Con- ference on Artificial Intelligence (IJCAI- 05), pages 1136-1141, Edinburgh, Scot- land.
Corpus-based learning of analogies and semantic relations. Littman2005] Turney, Turney, D Peter, L Michael, Littman, Machine Learning. 601-3[Turney and Littman2005] Turney, Peter D. and Michael L. Littman. 2005. Corpus-based learning of analogies and semantic relations. Machine Learn- ing, 60(1-3):251-278.
Combining independent modules to solve multiple-choice synonym and analogy problems. Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP-03). the International Conference on Recent Advances in Natural Language Processing (RANLP-03)Borovets, Bulgariaet al.2003] Turney, Peter D., Michael L. Littman, Jeffrey Bigham, and Victor Shnayder. 2003. Com- bining independent modules to solve multiple-choice synonym and anal- ogy problems. In Proceedings of the International Conference on Recent Ad- vances in Natural Language Processing (RANLP-03), pages 482-489, Borovets, Bulgaria.
Algorithm for automatic interpretation of noun sequences. Lucy Vanderwende, Proceedings of the Fifteenth International Conference on Computational Linguistics. the Fifteenth International Conference on Computational LinguisticsKyoto, JapanVanderwende, Lucy. 1994. Algorithm for automatic in- terpretation of noun sequences. In Proceedings of the Fifteenth International Conference on Computational Linguistics, pages 782-788, Kyoto, Japan.
The analogical thesaurus. Tony Veale, Proceedings of the 15th Innovative Applications of Artificial Intelligence Conference (IAAI 2003). the 15th Innovative Applications of Artificial Intelligence Conference (IAAI 2003)Acapulco, MexicoVeale, Tony. 2003. The analogi- cal thesaurus. In Proceedings of the 15th Innovative Applications of Artificial In- telligence Conference (IAAI 2003), pages 137-142, Acapulco, Mexico.
WordNet sits the SAT: A knowledge-based approach to lexical analogy. Tony Veale, Proceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004). the 16th European Conference on Artificial Intelligence (ECAI 2004)Valencia, SpainVeale, Tony. 2004. WordNet sits the SAT: A knowledge-based ap- proach to lexical analogy. In Pro- ceedings of the 16th European Conference on Artificial Intelligence (ECAI 2004), pages 606-612, Valencia, Spain.
Counter-training in discovery of semantic patterns. Roman Yangarber, Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003). the 41st Annual Meeting of the Association for Computational Linguistics (ACL-2003)Sapporo, JapanYangarber, Roman. 2003. Counter-training in discovery of se- mantic patterns. In Proceedings of the 41st Annual Meeting of the Associa- tion for Computational Linguistics (ACL- 2003), pages 343-350, Sapporo, Japan.
Kernel methods for relation extraction. David Yarowsky, Aone, Zelenko, Chinatsu Dmitry, Anthony Aone, Richardella, Proceedings of the ARPA Human Language Technology Workshop. the ARPA Human Language Technology WorkshopPrinceton, NJ3One sense per collocationYarowsky, David. 1993. One sense per collocation. In Pro- ceedings of the ARPA Human Language Technology Workshop, pages 266-271, Princeton, NJ. [Zelenko, Aone, and Richardella2003] Zelenko, Dmitry, Chinatsu Aone, and Anthony Richardella. 2003. Ker- nel methods for relation extraction. Journal of Machine Learning Research, 3:1083-1106.
| [] |
[
"Neural language models for text classification in evidence-based medicine",
"Neural language models for text classification in evidence-based medicine"
] | [
"Andrés Carvallo afcarvallo@uc.cl \nDepartment of Computer Science\nPontificia Universidad Católica de Chile\nSantiagoChile\n",
"Denis Parra dparra@ing.puc.cl \nDepartment of Computer Science\nPontificia Universidad Católica de Chile\nSantiagoChile\n",
"Gabriel Rada radagabriel@epistemonikos.org \nEpistemonikos Foundation\nSantiagoChile\n",
"Daniel Pérez \nEpistemonikos Foundation\nSantiagoChile\n",
"Juan Ignacio Vásquez \nEpistemonikos Foundation\nSantiagoChile\n",
"Camilo Vergara camilo@epistemonikos.org \nEpistemonikos Foundation\nSantiagoChile\n"
] | [
"Department of Computer Science\nPontificia Universidad Católica de Chile\nSantiagoChile",
"Department of Computer Science\nPontificia Universidad Católica de Chile\nSantiagoChile",
"Epistemonikos Foundation\nSantiagoChile",
"Epistemonikos Foundation\nSantiagoChile",
"Epistemonikos Foundation\nSantiagoChile",
"Epistemonikos Foundation\nSantiagoChile"
] | [] | The COVID-19 has brought about a significant challenge to the whole of humanity, but with a special burden upon the medical community. Clinicians must keep updated continuously about symptoms, diagnoses, and effectiveness of emergent treatments under a never-ending flood of scientific literature. In this context, the role of evidence-based medicine (EBM) for curating the most substantial evidence to support public health and clinical practice turns essential but is being challenged as never before due to the high volume of research articles published and pre-prints posted daily. Artificial Intelligence can have a crucial role in this situation. In this article, we report the results of an applied research project to classify scientific articles to support Epistemonikos, one of the most active foundations worldwide conducting EBM. We test several methods, and the best one, based on the XLNet neural language model, improves the current approach by 93% on average F1score, saving valuable time from physicians who volunteer to curate COVID-19 research articles manually. | 10.52591/lxai202012126 | [
"https://arxiv.org/pdf/2012.00584v1.pdf"
] | 227,239,242 | 2012.00584 | f731dcc951abbde0ce0fbd1fdf4f2fb25defacf2 |
Neural language models for text classification in evidence-based medicine
1 Dec 2020
Andrés Carvallo afcarvallo@uc.cl
Department of Computer Science
Pontificia Universidad Católica de Chile
SantiagoChile
Denis Parra dparra@ing.puc.cl
Department of Computer Science
Pontificia Universidad Católica de Chile
SantiagoChile
Gabriel Rada radagabriel@epistemonikos.org
Epistemonikos Foundation
SantiagoChile
Daniel Pérez
Epistemonikos Foundation
SantiagoChile
Juan Ignacio Vásquez
Epistemonikos Foundation
SantiagoChile
Camilo Vergara camilo@epistemonikos.org
Epistemonikos Foundation
SantiagoChile
Neural language models for text classification in evidence-based medicine
1 Dec 2020
The COVID-19 pandemic has brought about a significant challenge to the whole of humanity, but with a special burden upon the medical community. Clinicians must keep continuously updated about symptoms, diagnoses, and the effectiveness of emergent treatments under a never-ending flood of scientific literature. In this context, the role of evidence-based medicine (EBM) in curating the most substantial evidence to support public health and clinical practice becomes essential, but it is being challenged as never before due to the high volume of research articles published and pre-prints posted daily. Artificial Intelligence can have a crucial role in this situation. In this article, we report the results of an applied research project to classify scientific articles to support Epistemonikos, one of the most active foundations worldwide conducting EBM. We test several methods, and the best one, based on the XLNet neural language model, improves the current approach by 93% on average F1-score, saving valuable time for the physicians who volunteer to curate COVID-19 research articles manually.
Introduction
Evidence-based medicine (EBM) is a medical practice that aims to find all the evidence to support medical decisions. This evidence is nowadays obtained from biomedical journals, usually accessible through online databases like PubMed [5] and EMBASE [4], which provide free access to articles' abstracts and, in some cases, to full articles. In the context of the COVID-19 pandemic, EBM is critical to making decisions at the individual level and in public health, since research articles address topics like treatments, adverse cases, and effects of public policies in medicine. The EBM foundation Epistemonikos has made essential contributions by curating and publishing updated guides of which treatments are and are not working against COVID-19 1 . Epistemonikos addresses EBM through a combination of software tools for data collection, storage, filtering [2,1], and retrieval, as well as through the vital labor of volunteer physicians who curate and label research articles based on quality (to include in the database), type (systematic review, randomized trial, among others) and PICO labels (patient, intervention, comparison, outcome). However, this workflow has been challenged during 2020 by the rapid growth and quickly evolving evidence of COVID-19 articles published in recent months. Moreover, to ensure the rapid collection of the latest evidence published, pre-print repositories such as medRxiv and bioRxiv have been added to the traditional online databases. In order to support Epistemonikos' effort to filter and curate the flood of articles related to COVID-19, we present the results of an applied AI project where we implement and evaluate a text classification system to filter and categorize research articles related to COVID-19. The current model, based on Random Forests, has acceptable performance when classifying systematic reviews (SR) but fails on other document categories. In this article, we show how using BioBERT yields marginal improvements, while XLNet results in significant progress with the best performance. These results save a considerable amount of time for volunteer physicians by pre-filtering the articles worth manual curation and labeling for EBM. On average, a physician takes two minutes to review one article, while the system we present in this article can review up to 32,000 within one hour.
Methods and results
Methods and data. We compare document classification results among (i) a random forest with a customized tokenizer made by Epistemonikos, (ii) an XLNet [8] language model representing documents, with a linear layer as a classifier, and (iii) the same setting with a BioBERT [3] language model. Each document is classified as a systematic review, a primary study using a randomized controlled trial, a non-randomized primary study, a broad synthesis, or an excluded document. The distribution of documents can be observed in the second column of Table 1. Notice that the type of document partially explains the classification models' mistakes: broad synthesis and systematic review are both kinds of surveys, while primary studies (rct and non-rct) deal with specific treatments and populations. Excluded documents can belong to any of the other four classes, but they are not included in the official Epistemonikos dataset due to their low quality.
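A minimal sketch of setup (ii) is shown below, assuming the HuggingFace transformers library and the public xlnet-base-cased checkpoint; the label names are illustrative, and a checkpoint fine-tuned on the Epistemonikos data would be required for meaningful predictions:

```python
import torch
from transformers import XLNetTokenizer, XLNetForSequenceClassification

# Illustrative label set mirroring the five document categories in Table 1.
LABELS = ["broad synthesis", "systematic review", "primary rct",
          "primary non-rct", "excluded"]

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
model = XLNetForSequenceClassification.from_pretrained(
    "xlnet-base-cased", num_labels=len(LABELS))
model.eval()

def classify(abstract):
    # Encode the abstract and take the argmax over the linear layer's logits.
    inputs = tokenizer(abstract, return_tensors="pt",
                       truncation=True, max_length=512)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]
```

BioBERT can be plugged into the same interface through the corresponding BERT sequence classification classes.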
Results. Table 1 shows the performance of each model in terms of precision (Prec.), recall (Rec.), and F1-score (F-1) for every type of document. In general terms, we observe that XLNet obtains the top F-1 score for every document category, in some cases by a small margin, such as for Systematic review (F-1=.97), and in other cases by a large margin, as in the classes Broad synthesis (F-1=.61) and Excluded (F-1=.78). The results indicate that the random forest and BioBERT with a linear layer have a bias towards the most dominant class, Systematic review, reporting slightly better recall (Rec.=.99 and Rec.=1.0) than XLNet (Rec.=.98) in this particular type of document. However, XLNet is better than the other two models in terms of precision on all classes, with the only exception of Broad synthesis, where the random forest (Prec.=.75) performs better than XLNet (Prec.=.67); on this class, however, XLNet's recall (Rec.=.56) outperforms the random forest's (Rec.=.15). It is important to note that when using the random forest implemented for Epistemonikos, a new tokenizer has to be built depending on the document categories. XLNet is more versatile, because it is enough to train embeddings and classify them regardless of the document category. BioBERT, which operates similarly, does not yield consistent performance for the minority classes Broad synthesis and Excluded.
Conclusion
In this study, we have compared three methods: the random forest currently in production at the Epistemonikos foundation, BioBERT, and XLNet. Although BioBERT is also based on the Transformer architecture, it does not achieve the results shown by XLNet. Having such reliable results has a big impact in times of the COVID-19 pandemic, when the available literature is growing exponentially. In future work we will incorporate explanations obtained from transformer attention mechanisms, compare them against other explanation methods like LIME [7] or SHAP [6], and conduct a user study to assess whether physicians' work is facilitated by this feature.
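As a sketch of the LIME direction mentioned above (assuming the lime package), any classifier can be explained as long as it exposes an [n_texts, n_classes] probability function; the predict_proba below is a random placeholder standing in for the fine-tuned model:

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

CLASS_NAMES = ["systematic review", "primary study", "excluded"]  # illustrative

def predict_proba(texts):
    # Placeholder: replace with softmax probabilities from the real classifier.
    rng = np.random.default_rng(0)
    p = rng.random((len(texts), len(CLASS_NAMES)))
    return p / p.sum(axis=1, keepdims=True)

explainer = LimeTextExplainer(class_names=CLASS_NAMES)
explanation = explainer.explain_instance(
    "Randomized controlled trial of remdesivir in hospitalized COVID-19 patients.",
    predict_proba, num_features=6)
print(explanation.as_list())  # (token, weight) pairs for the explained class
```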
Broader Impact
This work seeks to decrease manual effort in the practice of evidence-based medicine, allowing physicians to distinguish relevant documents for clinical questions. Implementing the method with the largest performance in our offline evaluation (XLNet) in production might imply an increased cost in terms of GPU needs for Epistemonikos, which is not part of their current infrastructure. Adding more documents might also imply additional fine-tuning of the model, incurring larger costs. Another aspect not addressed in this research is that of fairness: does the current model perform better when classifying studies of certain treated populations (e.g., white males) than of others (e.g., Black females)? We should address this aspect actively to prevent our model from learning undesired biases already seen in several applications.
Table 1: Distribution of documents and results obtained for document classification of Broad Synthesis, Systematic Review, Primary Study randomized controlled trial (Primary rct), Primary Study non-randomized controlled trial (Primary non-rct), and Excluded.

                             Random Forest        XLNet                BioBERT
                   # docs.   Prec.  Rec.  F-1     Prec.  Rec.  F-1     Prec.  Rec.  F-1
Broad synthesis     17,324   .75    .15   .26     .67    .56   .61     0      0     0
Systematic review  286,050   .93    .99   .96     .96    .98   .97     .85    1.0   .92
Primary rct         56,623   .25    .79   .38     .94    .85   .89     .71    .71   .71
Primary non-rct     35,644   .63    .40   .49     .64    .91   .75     .61    .90   .72
Excluded             6,096   .70    .21   .32     .82    .74   .78     0      0     0
1 http://epistemonikos.org/

34th Conference on Neural Information Processing Systems (NeurIPS 2020), Vancouver, Canada.
[1] Andres Carvallo, Denis Parra, Hans Lobel, and Alvaro Soto. Automatic document screening of medical literature using word and text embeddings in an active learning setting. Scientometrics, pages 1-38, 2020.
[2] Ivania Donoso-Guzmán and Denis Parra. An interactive relevance feedback interface for evidence-based health care. In 23rd International Conference on Intelligent User Interfaces, pages 103-114, 2018.
[3] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234-1240, 2020.
[4] Carol Lefebvre, Anne Eisinga, Steve McDonald, and Nina Paul. Enhancing access to reports of randomized trials published worldwide: the contribution of EMBASE records to the Cochrane Central Register of Controlled Trials (CENTRAL) in the Cochrane Library. Emerging Themes in Epidemiology, 5(1):13, 2008.
[5] Wesley T. Lindsey and Bernie R. Olin. PubMed searches: Overview and strategies for clinicians. Nutrition in Clinical Practice, 28(2):165-176, 2013.
[6] Scott M. Lundberg and Su-In Lee. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, pages 4765-4774, 2017.
[7] Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1135-1144, 2016.
[8] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R. Salakhutdinov, and Quoc V. Le. XLNet: Generalized autoregressive pretraining for language understanding. In Advances in Neural Information Processing Systems, pages 5753-5763, 2019.
| [] |
[
"Solving Aspect Category Sentiment Analysis as a Text Generation Task",
"Solving Aspect Category Sentiment Analysis as a Text Generation Task"
] | [
"Jian Liu jianliu17@fudan.edu.cn \nSchool of Computer Science\nFudan University\n\n",
"Zhiyang Teng \nSchool of Engineering\nWestlake University\n\n\nInstitute of Advanced Technology\nWestlake Institute for Advanced Study\n\n",
"Leyang Cui \nSchool of Engineering\nWestlake University\n\n\nInstitute of Advanced Technology\nWestlake Institute for Advanced Study\n\n",
"Hanmeng Liu \nSchool of Engineering\nWestlake University\n\n\nInstitute of Advanced Technology\nWestlake Institute for Advanced Study\n\n",
"Yue Zhang zhangyue@westlake.edu.cn \nSchool of Engineering\nWestlake University\n\n\nInstitute of Advanced Technology\nWestlake Institute for Advanced Study\n\n"
] | [
"School of Computer Science\nFudan University\n",
"School of Engineering\nWestlake University\n",
"Institute of Advanced Technology\nWestlake Institute for Advanced Study\n",
"School of Engineering\nWestlake University\n",
"Institute of Advanced Technology\nWestlake Institute for Advanced Study\n",
"School of Engineering\nWestlake University\n",
"Institute of Advanced Technology\nWestlake Institute for Advanced Study\n",
"School of Engineering\nWestlake University\n",
"Institute of Advanced Technology\nWestlake Institute for Advanced Study\n"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Aspect category sentiment analysis has attracted increasing research attention. The dominant methods make use of pre-trained language models by learning effective aspect category-specific representations, and adding specific output layers to its pre-trained representation. We consider a more direct way of making use of pre-trained language models, by casting the ACSA tasks into natural language generation tasks, using natural language sentences to represent the output. Our method allows more direct use of pre-trained knowledge in seq2seq language models by directly following the task setting during pre-training. Experiments on several benchmarks show that our method gives the best reported results, having large advantages in few-shot and zero-shot settings. | 10.18653/v1/2021.emnlp-main.361 | [
"https://www.aclanthology.org/2021.emnlp-main.361.pdf"
] | 238,857,158 | 2110.07310 | 9b25300bc8ceaba50626db5b8ae2d492328f4dfa |
Solving Aspect Category Sentiment Analysis as a Text Generation Task
Association for Computational Linguistics, November 7-11, 2021.
Jian Liu jianliu17@fudan.edu.cn
School of Computer Science
Fudan University
Zhiyang Teng
School of Engineering
Westlake University
Institute of Advanced Technology
Westlake Institute for Advanced Study
Leyang Cui
School of Engineering
Westlake University
Institute of Advanced Technology
Westlake Institute for Advanced Study
Hanmeng Liu
School of Engineering
Westlake University
Institute of Advanced Technology
Westlake Institute for Advanced Study
Yue Zhang zhangyue@westlake.edu.cn
School of Engineering
Westlake University
Institute of Advanced Technology
Westlake Institute for Advanced Study
Solving Aspect Category Sentiment Analysis as a Text Generation Task
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
Association for Computational Linguistics, November 7-11, 2021.
Aspect category sentiment analysis has attracted increasing research attention. The dominant methods make use of pre-trained language models by learning effective aspect category-specific representations, and adding specific output layers to its pre-trained representation. We consider a more direct way of making use of pre-trained language models, by casting the ACSA tasks into natural language generation tasks, using natural language sentences to represent the output. Our method allows more direct use of pre-trained knowledge in seq2seq language models by directly following the task setting during pre-training. Experiments on several benchmarks show that our method gives the best reported results, having large advantages in few-shot and zero-shot settings.
Introduction
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that includes a number of subtasks, two of which are aspect category sentiment analysis (ACSA) and aspect category detection (ACD). Figure 1 shows an example, where the input is "The restaurant was expensive, but the menu was great". ACD detects the aspect categories, such as price and food, and ACSA predicts the sentiment polarities toward each aspect category. In this work, we focus on these two tasks as well as the joint task that combines both.
Previous studies have investigated various methods that treat ACSA and ACD as classification tasks, learning aspect-specific sentence representations (Wang et al., 2016; Ruder et al., 2016). Recently, pre-trained language models (PLM) have shown their effectiveness to this end (Jiang et al., 2019). The main idea is to make use of pre-trained models such as BERT (Devlin et al., 2019a) for representing an aspect-specific form of the input (e.g., by concatenating the aspect category to the end of the input sentence (Figure 3(a))), which provides useful semantic features for ACSA and ACD classifiers. Such methods have given highly competitive results (Li et al., 2020b).
The above classification models benefit from contextualized representations, which contain knowledge learned by pre-training over large data (Lin et al., 2019). However, their use of pre-trained knowledge can be viewed as indirect for at least two reasons. First, the classification task is performed by a neural network on top of the pre-trained representation, with separate network parameters. Second, the integration of the aspect category makes the aspect-specific input representation not exactly a natural language sentence, which differs from the pre-training setting. Intuitively, more pre-trained knowledge could be leveraged by connecting pre-training and ACSA at the task level, rather than only at the representation level.
We investigate the above potentials by casting the sentiment classification tasks into language modelling tasks. In particular, as shown in Figure 2, both ACSA and ACD are transformed into sequence-to-sequence (seq2seq) tasks, where the encoder takes the input sentence and the decoder generates a natural language sentence. For ACD, the output follows a template stating whether the specific aspect is discussed (e.g., "The category_type category is discussed"); for ACSA, the sentiment polarity of a specific aspect is stated (e.g., "The sentiment polarity of given_category is polarity_type"). The setting corresponds closely to the denoising auto-encoder training scheme of BART (Lewis et al., 2020), which we use as the pre-trained model.

Figure 2: ACSA as a generation task. For the input "The restaurant was too expensive", aspect category sentiment analysis scores the candidates "The sentiment polarity of price is positive/neutral/negative" (scoring 0.1/0.2/0.7), and aspect category detection scores "The price category is discussed" (0.9) against "The price category is not discussed" (0.1).

Compared with classification-based methods, our method does not include more network parameters, and thus can potentially generalize better to new domains (Brown et al., 2020; Gao et al., 2020). Given a new domain with completely unseen aspect categories and sentiment labels, our method can be applied without changing the output layer structure.
In addition to classification-based methods, we also take masked language models (MLM) as a baseline, for which a natural counterpart of our method is a mask-refilling task. As shown in Figure 3(b), different from our method, the output template is concatenated to the input, with the keyword being masked for prediction. This MLM task corresponds closely to BERT (Devlin et al., 2019a) pre-training. In comparison to this MLM method, a generation method can better learn the correlation between the input and output template as two related sequences, which has been demonstrated by the strong performance of BART for abstractive text summarization (Lewis et al., 2020).
Experimental results on three standard benchmark datasets show that both generation and MLM methods outperform classification methods using the same pre-trained language models. Finally, generation methods give stronger performance than MLM methods, outperforming the previous state-of-the-art methods by a large margin. In addition, using the generation method, we show that jointly performing ACSA and ACD leads to better results than the traditional pipeline. To our knowledge, we are the first to employ a generative pre-trained language model to address an ACSA/ACD problem. We release our code at https://github.com/lgw863/ACSA-generation.
Related Work
Aspect Category Sentiment Analysis Wang et al. (2016) propose an attention-based LSTM network, which can concentrate on different parts of a sentence when different aspect categories are taken as input. Ruder et al. (2016) model the interdependencies of sentences in a text with a hierarchical bidirectional LSTM. Yin et al. (2017) model the task as a machine comprehension problem by constructing pseudo question-answer pairs. Xue and Li (2018) incorporate aspect category information into sentence encoders in the context modeling stage. Other work constructs auxiliary sentences from the aspect categories and converts ACSA to a sentence-pair classification task. Li et al. (2020b) predict the sentiment of an aspect category mentioned in a sentence by aggregating the sentiments of the words indicating the aspect category in the sentence.
Several joint models have been proposed to avoid error propagation by performing ACD and ACSA jointly. Schmitt et al. (2018) propose two joint models, end-to-end LSTM and end-to-end CNN, which produce all the aspect categories and their corresponding sentiment polarities at once. Hu et al. (2019) propose constrained attention networks (CAN) to constrain the attention weight allocation. The aspect-level sentiment capsules model (AS-Capsules) utilizes the correlation between aspect category and sentiment through shared components. Li et al. (2020a) propose a novel joint model which contains a shared sentiment prediction layer.
All the models above are classification methods, which use a separate output network to give the output label. In contrast, we investigate natural language generation methods by directly following the pre-training process of language models.
Masked Language Model Methods There is a line of work using the masked language model (MLM) for natural language understanding tasks. The basic idea is to leverage information from pre-trained models by defining specific sentence prompts in a language modelling task. Brown et al. (2020) use prompts for few-shot learning in text classification tasks. Related work rephrases inputs as cloze questions for text classification, and Gao et al. (2020) extend this line by automatically generating label words and templates. Petroni et al. (2019) extract relations between entities from BERT by constructing cloze-style templates. We are the first to apply such methods to ACSA, taking them as a baseline. Different from these template-based models, our final model uses BART for text generation, which better models the correlations between the input sentence and the output sentence compared with BERT.
Generation Methods There has been work casting NLP problems as sequence generation tasks (Vinyals et al., 2015; Ma et al., 2017; Stanovsky and Dagan, 2018; Raffel et al., 2020), where the output is a sequence of tokens rather than a natural language sentence. Daza and Frank (2018) treat semantic role labelling as a sequence-to-sequence process, and related work solves the entity-relation extraction task as multi-turn question answering generation. Our work is similar in casting an NLP task as a generation task. Different from the above methods, our goal is to make the most of the pre-trained knowledge in BART for ACSA.
Methods
Formally, for ACD the input is a sentence $X = \{x_1, \dots, x_n\} = x_{1:n}$, where $x_i$ denotes the $i$-th word. For ACSA, a set of pre-identified aspect categories is also given. We introduce the relevant pre-trained language models in Section 3.1, classification methods in Section 3.2, MLM methods in Section 3.3, and our generation method in Section 3.4.
Pre-trained language Models
We take BERT (Devlin et al., 2019a) and BART (Lewis et al., 2020) as the pre-trained language models. Both are built on the Transformer (Vaswani et al., 2017) architecture. BERT (Devlin et al., 2019a) is an encoder stack of Transformers trained for masked text filling, where the model uses the context words to predict masked words. BART (Lewis et al., 2020) is a denoising auto-encoder seq2seq model pre-trained for natural language generation. Its training applies document corruption, such as randomly deleting tokens from the input and corrupting text with an arbitrary noising function, and BART is trained to reconstruct the original text.
The Classification Method
We use a multi-layer perceptron network as the classifier model, which takes a representation vector as input. Both BERT and BART are considered as encoders.
BERT Classification BERT adopts "[CLS] input sentence [SEP] given_category [SEP]" as input.
The final hidden state corresponding to "[CLS]" is used as the representation for classification.
BART Classification BART adopts "$\langle s\rangle$ input sentence $\langle /s\rangle$ given_category $\langle /s\rangle$" as input and predicts the sentiment polarity of the sentence towards the given category. The same input is fed into the encoder and decoder (see Figure 3(a)). Formally, suppose that the query category is $a$, $x_0 = \langle s\rangle$, $x_{n+1} = \langle /s\rangle$, $x_{n+2} = a$, and $x_{n+3} = \langle /s\rangle$; then the input to BART is $x_{0:n+3} = \langle s\rangle\, x_1, \dots, x_n\, \langle /s\rangle\, a\, \langle /s\rangle$. The output hidden vectors obtained by the BART encoder (ENCODER) and BART decoder (DECODER) are:

$$h^{enc} = \mathrm{ENCODER}(x_{0:n+3})$$
$$h_0, \dots, h_{n+3} = \mathrm{DECODER}(h^{enc}; x_{0:n+3})$$

The output vector $h_{n+3}$ is then taken as the representation vector for classification.
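A minimal sketch of this classification setting in PyTorch with HuggingFace transformers is given below; the classification head on the final decoder state follows the description above, while the label count and checkpoint are assumptions:

```python
import torch
import torch.nn as nn
from transformers import BartTokenizer, BartModel

class BartClassifier(nn.Module):
    def __init__(self, num_labels=3, name="facebook/bart-base"):
        super().__init__()
        self.bart = BartModel.from_pretrained(name)
        self.head = nn.Linear(self.bart.config.d_model, num_labels)

    def forward(self, input_ids, attention_mask):
        # The same sequence is fed to the encoder and the decoder (Figure 3(a)).
        out = self.bart(input_ids=input_ids,
                        attention_mask=attention_mask,
                        decoder_input_ids=input_ids)
        h_last = out.last_hidden_state[:, -1, :]  # h_{n+3}: final token state
        return self.head(h_last)

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartClassifier()
inputs = tokenizer("The restaurant was too expensive </s> price",
                   return_tensors="pt")
logits = model(inputs.input_ids, inputs.attention_mask)  # (1, num_labels)
```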
The MLM Method
Masked language models (MLM) (Devlin et al., 2019a) complete a given prompt by filling in missing tokens. We refer to the template containing a given category together with a [MASK] token as a prompt. For sentiment analysis tasks, BERT MLM takes the input sentence and the prompt as the model input and predicts the sentiment polarity label word for the given category. For BART MLM, the same input is fed into the encoder and decoder, and the label word with the highest decoder prediction at the [MASK] position is the predicted polarity label (see Figure 3(b)). We use the same template in the MLM method and the generation method, following the template creation method in Section 3.4.1.
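A minimal sketch of the BERT MLM baseline, assuming HuggingFace transformers and the bert-base-uncased checkpoint (a task-fine-tuned model would be used in practice); the prediction is restricted to the three polarity label words at the [MASK] position:

```python
import torch
from transformers import BertTokenizer, BertForMaskedLM

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

LABEL_WORDS = ["positive", "neutral", "negative"]

def mlm_predict(sentence, category):
    # Input sentence followed by the prompt with a [MASK] slot for the label.
    prompt = f"{sentence} The sentiment polarity of {category} is {tokenizer.mask_token}."
    inputs = tokenizer(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    # Compare only the logits of the polarity label words.
    label_ids = [tokenizer.convert_tokens_to_ids(w) for w in LABEL_WORDS]
    return LABEL_WORDS[int(logits[label_ids].argmax())]
```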
The Generation Method
We take both ACSA and ACD as language model ranking problems under a seq2seq framework (see Figure 3(c)). The target sequence $T_{a_i,p_k}$ (resp. $T_{a_i}$) $= \{t_1, \dots, t_m\}$ is a template filled with the given category $a_i$ and the polarity type $p_k$. We first introduce how to create templates in Section 3.4.1, and then show the inference and training details in Section 3.4.2 and Section 3.4.3, respectively.
Template Creation
For ACSA, we manually create templates containing one slot for the given category and another slot for the polarity label. We define a category word set $A = \{a_1, \dots, a_{|C|}\}$, where $|C|$ is the number of category types (e.g., $a_i$ = "price"), and a polarity word set $P = \{p_1, \dots, p_{|L|}\}$, where $|L|$ is the number of polarity types (e.g., $p_k$ = "positive"), and use these words to fill templates $T_{a_i,p_k}$ (e.g., "The sentiment polarity of price is positive"). The template $T$ is "The sentiment polarity of $a_i$ is $p_k$". For a given category $a_i$, we obtain a list of templates $T_{a_i} = [T_{a_i,p_1}, \dots, T_{a_i,p_{|L|}}]$.

For ACD, we use $a_i$ to create a sentiment template $T^+_{a_i}$ for an existing aspect category, and a none-category template $T^-_{a_i}$. $T^+$ is "The $a_i$ category is discussed" and $T^-$ is "The $a_i$ category is not discussed".
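In code, the template creation amounts to simple string filling; a minimal sketch is below, where the category and polarity sets are illustrative examples, not the full sets used for each dataset:

```python
CATEGORIES = ["price", "food", "service", "ambience"]  # example category set A
POLARITIES = ["positive", "neutral", "negative"]       # example polarity set P

def acsa_templates(category):
    # T_{a_i} = [T_{a_i, p_1}, ..., T_{a_i, p_|L|}]: one candidate per polarity.
    return [f"The sentiment polarity of {category} is {p}" for p in POLARITIES]

def acd_templates(category):
    # (T+, T-) pair for aspect category detection.
    return (f"The {category} category is discussed",
            f"The {category} category is not discussed")
```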
Inference
For ACSA, we first enumerate all possible polarities for the given category of the sentence $X$ and fill them into the prepared templates, and then use the fine-tuned pre-trained generative language model to assign a score to each template $T_{a_i,p_k} = \{t_1, \dots, t_m\}$, formulated as:

$$f(T_{a_i,p_k}) = \sum_{c=1}^{m} \log P(t_c \mid t_{1:c-1}, X) \tag{1}$$
We calculate a score $f(T_{a_i,p_k})$ for each possible polarity by employing the pre-trained generative language model (i.e., BART) to score the templates, and then choose the polarity of category $a_i$ with the largest score.
For ACD, we first create templates $T^+_{a_i}$ and $T^-_{a_i}$ for all possible categories of the sentence $X$, and then use the fine-tuned pre-trained generative language model to assign a score to each template $T_{a_i} = \{t_1, \dots, t_m\}$, in a similar way to Equation 1. We decide whether the $a_i$ category is discussed in the input sentence according to the higher score between $T^+_{a_i}$ and $T^-_{a_i}$.
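The scoring of Equation 1 can be sketched with HuggingFace transformers as below; facebook/bart-base is the base checkpoint named in the paper's footnote, and a fine-tuned version of it is assumed for meaningful scores. When labels are passed to BART, the logits at step c give the distribution over token t_c conditioned on t_{1:c-1} and the encoded input:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
model.eval()

POLARITIES = ["positive", "neutral", "negative"]

def template_score(sentence, template):
    """Equation 1: sum of log P(t_c | t_{1:c-1}, X) over the template tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    target = tokenizer(template, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids,
                       attention_mask=enc.attention_mask,
                       labels=target).logits
    log_probs = logits.log_softmax(dim=-1)
    token_scores = log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    return token_scores.sum().item()

def acsa_predict(sentence, category):
    # Choose the polarity whose filled template scores highest.
    scores = {p: template_score(sentence, f"The sentiment polarity of {category} is {p}")
              for p in POLARITIES}
    return max(scores, key=scores.get)

def acd_predict(sentence, category):
    # Compare the T+ and T- templates for the category.
    pos = template_score(sentence, f"The {category} category is discussed")
    neg = template_score(sentence, f"The {category} category is not discussed")
    return pos > neg

print(acsa_predict("The restaurant was too expensive", "price"))
```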
Training
For ACSA, suppose that the polarity type of $a_i$ is $p_k$. We fill the given category $a_i$ and the polarity type $p_k$ into the template $T$ to create a gold target output $T_{a_i,p_k}$. Similarly for ACD, if the category $a_i$ is discussed, the gold target $T^+_{a_i}$ is obtained by filling $a_i$ into $T^+$, and otherwise it is $T^-_{a_i}$. For ACSA, we use all gold polarities in the training set to construct $(X, T)$ pairs. For ACD, we use all gold categories in the training set to construct $(X, T^+)$ pairs, and additionally create negative samples $(X, T^-)$ by sampling all non-existing categories in the input. Finally, we obtain

$$\{(X, T)\} = \{(X, T^+)\} \cup \{(X, T^-)\}$$
Given a sequence pair $(X, T)$, we feed the input $X = x_{1:n}$ to the BART encoder, obtaining hidden representations of the sentence:

$$h^{enc} = \mathrm{ENCODER}(x_{1:n}) \tag{2}$$

At the $c$-th step of the decoder, $h^{enc}$ and the previous output tokens $t_{1:c-1}$ are taken as inputs, yielding a representation using attention (Vaswani et al., 2017):

$$h^{dec}_c = \mathrm{DECODER}(h^{enc}, t_{1:c-1}) \tag{3}$$

The conditional probability of the word $t_c$ is defined as:

$$P(t_c \mid t_{1:c-1}, X) = \mathrm{SOFTMAX}(h^{dec}_c W_{lm} + b_{lm}) \tag{4}$$

where $W_{lm} \in \mathbb{R}^{d_h \times |\mathcal{V}|}$ and $b_{lm} \in \mathbb{R}^{|\mathcal{V}|}$, with $|\mathcal{V}|$ the vocabulary size of pre-trained BART. The cross-entropy between the decoder's output and the original template is used as the loss function:

$$\mathcal{L} = -\sum_{c=1}^{m} \log P(t_c \mid t_{1:c-1}, X) \tag{5}$$
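A condensed sketch of the training loop under the same assumptions (HuggingFace transformers; the hyperparameters are illustrative): the pairs are built from gold labels, ACD negatives come from the non-existing categories, and the model's built-in loss is the cross-entropy of Equation 5:

```python
import torch
from torch.optim import AdamW
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
optimizer = AdamW(model.parameters(), lr=2e-5)

def make_acd_pairs(sentence, gold_categories, all_categories):
    # (X, T+) for gold categories; (X, T-) for every category not present.
    pairs = [(sentence, f"The {c} category is discussed") for c in gold_categories]
    pairs += [(sentence, f"The {c} category is not discussed")
              for c in all_categories if c not in gold_categories]
    return pairs

def train_step(batch):
    sources, targets = zip(*batch)
    enc = tokenizer(list(sources), return_tensors="pt",
                    padding=True, truncation=True)
    labels = tokenizer(list(targets), return_tensors="pt", padding=True).input_ids
    labels[labels == tokenizer.pad_token_id] = -100  # ignore padding in the loss
    loss = model(**enc, labels=labels).loss          # Equation 5 (cross-entropy)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

batch = make_acd_pairs("The restaurant was too expensive",
                       gold_categories=["price"],
                       all_categories=["price", "food", "service"])
print(train_step(batch))
```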
Experiments
We choose the SemEval-2014 restaurant review (Rest14) (Pontiki et al., 2014a), a variant of Rest14 (Rest14-hard) (Xue and Li, 2018) and the multi-aspect multi-sentiment (MAMS) (Jiang et al., 2019) datasets for sentence-level experiments, and the TripAdvisor (Wang et al., 2010) and BeerAdvocate (McAuley et al., 2012; Lei et al., 2016) datasets for document-level experiments. We use the pre-trained BERT-base 1 and BART-base 2 models for task fine-tuning. We select the fine-tuning learning rate from {4e-5, 2e-5, 1e-5} and the batch size from {8, 16, 24} for different models. The dropout probability is 0.1. The best model configuration is selected according to the highest performance on the development set. Details of the datasets and settings are given in Appendices A and B.
Baseline Methods
We compare our generation method with classification and MLM baselines (Figure 3) using the same encoder. In particular, BART generation (i.e., Figure 3(c)) is compared with BART classification (Figure 3(a)) and BART MLM (Figure 3(b)), as well as BERT classification and BERT MLM. In addition, our method is also compared with other models in the literature as follows.
For sentence-level ACSA, we also compare our method with the following state-of-the-art methods in the literature: (1) non-BERT models: GCAE (Xue and Li, 2018), As-capsule, and CapsNet (Jiang et al., 2019); (2) BERT (Devlin et al., 2019b) based models: BERT-pair-QA-B, CapsNet-BERT (Jiang et al., 2019), and AC-MIMLLN-BERT (Li et al., 2020b).

For document-level ACSA, we compare our method with the following methods: (1) non-BERT models: LSTM (Tang et al., 2015), HAN (Yang et al., 2016), and MR (machine comprehension pattern) (Yin et al., 2017); (2) BERT (Devlin et al., 2019b) based model: BERT classification.

For ACD, we compare our method with the following methods: (1) non-BERT models: XRCE (Brun et al., 2014), NRC-Canada (Kiritchenko et al., 2014); (2) BERT (Devlin et al., 2019b) based models: BERT classification, BERT-pair-NLI-B, CNE-net (Dai et al., 2020).

1 https://github.com/google-research/bert
2 https://huggingface.co/facebook/bart-base/tree/main

Table 1: ACSA results using different templates. $a_i$ indicates the given category, $p_k$ the polarity type.

ACSA Template T                            Dev accuracy
The sentiment polarity of $a_i$ is $p_k$   83.78
The sentiment is $p_k$ for $a_i$           83.44
The $a_i$ category has a $p_k$ label       82.31

Table 2: ACD results using different templates. $a_i$ indicates the category type.

ACD Template T+ / T-                                                                   Dev F1
The $a_i$ category is discussed / The $a_i$ category is not discussed                  93.13
The sentence discusses the $a_i$ category / The sentence discusses no $a_i$ category   92.67
It is about the $a_i$ category / It is not about the $a_i$ category                    92.44
Development Experiments
Different templates can be used to express the same meaning. For instance, "The sentiment polarity of given_category is positive" can also be expressed as "The sentiment is positive for given_category". For ACSA, we investigate the impact of manual templates using the MAMS development set. Table 1 shows the impact of different choices of templates. For instance, "The given_category category has a polarity_type label" and "The sentiment polarity of given_category is polarity_type" give 82.31% and 83.78% accuracy, respectively, indicating that the template has an influence on the final performance. This is consistent with the findings of Gao et al. (2020) for the few-shot task. Based on the development results, we use the top-performing template "The sentiment polarity of given_category is polarity_type" in our ACSA experiments.
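This template selection can be scripted by scoring each candidate format on the development set, reusing template_score from the inference sketch above; dev_set is a hypothetical list of labeled examples:

```python
TEMPLATE_FORMATS = [
    "The sentiment polarity of {c} is {p}",
    "The sentiment is {p} for {c}",
    "The {c} category has a {p} label",
]

def dev_accuracy(fmt, dev_set):
    # dev_set items are assumed dicts: {"text": ..., "category": ..., "polarity": ...}
    correct = 0
    for ex in dev_set:
        scores = {p: template_score(ex["text"], fmt.format(c=ex["category"], p=p))
                  for p in ("positive", "neutral", "negative")}
        correct += max(scores, key=scores.get) == ex["polarity"]
    return correct / len(dev_set)
```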
For ACD, we investigate the impact of templates using the Rest14 development set. Table 2 shows the performance impact of different templates. We use the top-performing template "The category_type category is discussed" as template $T^+$ and "The category_type category is not discussed" as template $T^-$ in our ACD experiments.
ACSA Experiments
The results of sentence-level ACSA are shown in Table 3. We can see that, first, the performance of BERT MLM and BART MLM is better than BERT classification and BART classification, respectively. In particular, BERT MLM gives a strong baseline, outperforming all non-BERT and BERT classification baselines. This shows that making use of pre-training at the task level can achieve better results than at the representation level. Also, the BART MLM and classification models perform better than the corresponding BERT models. Second, BART generation outperforms all baselines on all three datasets, which indicates that our model can better detect multiple sentiment polarities in one sentence toward different aspect categories. Third, BART generation performs significantly better than BART MLM, giving an absolute 3.89% stronger accuracy on MAMS, demonstrating the effectiveness of the generation method. This shows the strength of BART pre-training for generating semantically related content, which was also reflected by the strong performance of BART on abstractive summarization (Lewis et al., 2020). In contrast, the MLM method concatenates the input and output into one sequence, and thus fails to model their correlation in encoder-decoder pre-training.
The performance of our model on document-level ACSA is shown in Table 4. Compared with LSTM, HAN and MR, BERT classification and BART classification outperform all baselines, which shows the effectiveness of pre-training. BERT MLM and BART MLM surpass BERT classification and BART classification, respectively. Our BART generation model achieves improvements of 1.15% and 0.70% over BART MLM on TripAdvisor and BeerAdvocate, respectively, demonstrating that the generation method can more effectively make use of BART for ACSA.
ACD Experiments
Results on the Rest14 ACD subtask are presented in Table 5. Following Pontiki et al. (2014b), we use Micro-F1 for evaluation. Again, BART generation achieves better results than BART classification and BART MLM, and our model outperforms all baselines on precision and F-1 score. In particular, a more than 95% precision score is achieved, which shows that our model can effectively exclude the aspect categories not mentioned in the input.
We also investigate the performance on the MAMS dataset, in which each input sentence contains at least two unique aspect categories with different sentiment polarities. Table 7 shows that BART generation outperforms all baselines, indicating a better ability of our model to detect multiple aspect categories in one sentence.
A Joint Model
The generation method allows us to build a straightforward joint model by extending the first template in Table 1, using "The sentiment polarity of <given_category> is none" as a template for non-existing aspect categories. The results on Rest14 and MAMS are presented in Table 6. We find that joint BART generation achieves better results on this task, with improvements over pipeline BART generation. Joint BART generation outperforms all baselines on precision, recall and F-1 score, which shows the advantage of joint learning.
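The joint model then only changes decoding: each category is scored over the three polarities plus "none", reusing template_score from the inference sketch above:

```python
JOINT_LABELS = ["positive", "neutral", "negative", "none"]

def joint_predict(sentence, categories):
    # "none" winning means the category is not discussed; otherwise the
    # winning label is both the detection and the polarity decision.
    predictions = {}
    for c in categories:
        scores = {p: template_score(sentence, f"The sentiment polarity of {c} is {p}")
                  for p in JOINT_LABELS}
        best = max(scores, key=scores.get)
        if best != "none":
            predictions[c] = best
    return predictions
```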
Few-Shot and Zero-Shot Learning
We evaluate the model performance on ACSA where only a small amount of labelled data is available for training, simulating low-resource data scenarios by randomly sampling training instances from a large training set. In particular, we use different numbers of instances for training, randomly sampling a fixed number of instances per category type (10, 20, 50, 100, 200, and 500 instances per category type for Rest14 and MAMS). The results are shown in Figure 4, where the methods of BERT classification, BART classification and BART MLM are also compared. It can be seen that on all the datasets, our model outperforms BERT classification, BART classification and BART MLM, especially when the number of training instances is small. For example, when there are only 10 training instances, our model gives an accuracy score of 82.01% on Rest14, compared to 38.57% by BERT classification and 50.16% by BART classification. When the number of instances grows to 500, our model gives 2.24% and 2.65% better accuracy than BART MLM on Rest14 and MAMS, respectively. One possible reason is that our method makes more use of the direct sentiment knowledge in the pre-trained language model by directly adopting the original structure of BART, as mentioned earlier. In contrast, classification methods cannot achieve this because they transfer the sentiment bias indirectly.
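The low-resource subsets can be drawn with a small sampling helper; the example dictionaries with a "category" key are an assumed data layout:

```python
import random
from collections import defaultdict

def sample_few_shot(examples, k, seed=0):
    """Randomly keep at most k training instances per category type."""
    rng = random.Random(seed)
    by_category = defaultdict(list)
    for ex in examples:
        by_category[ex["category"]].append(ex)
    subset = []
    for items in by_category.values():
        subset.extend(rng.sample(items, min(k, len(items))))
    return subset
```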
The results of our zero-shot learning experiments are shown in Table 8. In all cases, our method outperforms all the baselines. In particular, the model trained on MAMS performs better on Rest14 than in the reverse zero-shot setting, which shows that the MAMS dataset is more challenging.
Analysis
Influence of Category Frequency
Aspect categories can be implicit and do not necessarily occur as terms in the given sentence. To explore the correlation between ACSA accuracy and the occurrence frequency of a given category, we split the eight categories in the MAMS test set into four subsets based on occurrence frequency. The category that never occurs in the given sentence (i.e., miscellaneous) is put into the zero frequency subset, the 15% least frequent (i.e., ambience, staff) are put into the low frequency subset, the 30% most frequent (i.e., menu, service) are put into the high frequency subset, and the remaining (i.e., price, food, place) are put into the mid frequency subset. Figure 5 shows the accuracy of BART classification and our model against the frequency. As the category occurrence frequency decreases, the relative gap in accuracy between the two models increases. In the zero frequency subset, our method gives an absolute 8.03% stronger accuracy than BART classification. This demonstrates that our method is more robust in summarizing the sentiment polarity of abstract or rare categories. Even if there are no explicit category terms in the sentence, the generation method can give the implicit category opinion of the whole sentence according to the context.

Case Study

Figure 6: Examples of BART classification. (a) "Service was fine and the food delivered in reasonable time given the crowd, but for the price I was disappointed." <miscellaneous: neutral> <incorrect output: negative>. (b) "The kids really enjoyed their food and the value on the kids menu is good." <menu: neutral> <incorrect output: positive>. (c) "The decor could be a bit better, and if there was a small bar the overall atmosphere would be a bit more inviting." <place: negative> <incorrect output: neutral>. In (a) the category does not occur as a term in the sentence; in (b) our method is not affected by the surrounding interference information; (c) requires conditional reasoning. Our method obtains the correct sentiment polarity in all three cases.

Figure 6 shows typical examples from the test set which cannot be inferred by the BART classification model. In sentence (a), the given category miscellaneous does not occur as a term in the given sentence; our method can synthesize different sentiment polarities over different aspects to obtain the correct polarity. In sentence (b), "the value on the kids menu is good", good modifies the value, rather than the given category menu; our method gives the correct polarity, not being affected by the surrounding aspect sentiments. The last instance (c) involves conditional reasoning, which is difficult for BART classification. In contrast, BART generation gives the correct label by correctly recognizing the negativity in "if there was ... would be a bit more inviting". This is likely because our method makes use of pre-trained knowledge to infer the inter-sentential correlations between the input and the output sequences, which the BART classification model fails to achieve due to the indirect use of BART in the additional classification network.
Conclusion
We investigated a generation method for aspect category detection (ACD) and aspect category sentiment analysis (ACSA), which makes better use of BART's strength at producing semantic-level summaries of the input, without introducing additional model parameters. Experiments show that our proposed method obtains superior performance over the baseline models for both sentence-level and document-level aspect sentiment analysis. In contrast to traditional sentiment classification methods, our method is also more powerful on zero-shot and few-shot tasks.
Figure 1: Example of aspect category detection (ACD) and aspect category sentiment analysis (ACSA). Input: "The restaurant was expensive, but the menu was great". ACD: <price, food>; ACSA: <price: negative>, <food: positive>.
Figure 3: A comparison of aspect category sentiment analysis methods: (a) BART classification; (b) masked language model (MLM), which fills the [MASK] slot in a prompt such as "The menu was great. The sentiment polarity of food is [MASK]." with a label word (positive/neutral/negative); (c) BART generation, which decodes a full template such as "The sentiment polarity of price is positive" for the input "The restaurant was too expensive".
Table 5: Rest14 results: Aspect Category Detection. We use the results reported in XRCE (Brun et al., 2014), NRC-Canada (Kiritchenko et al., 2014), BERT-pair-NLI-B and CNE-net (Dai et al., 2020).
Figure 5: Comparison of accuracy with different category frequencies on MAMS.
Table 3 :
3Results of the sentence-level ACSA in terms of accuracy (%, mean±(std)). † refers toJiang et al. (2019). * means the result is significant at p < 0.01 using paired t-test comparing to BART MLM and BART classification.Model
Model                 TripAdvisor   BeerAdvocate
LSTM                  44.02         34.78
HAN                   44.68         36.03
MR                    46.56         38.06
BERT classification   47.03         39.85
BART classification   47.45         40.44
BERT MLM              48.03         40.58
BART MLM              48.36         40.72
BART generation       49.51*        41.42*

Table 4: Results of the document-level ACSA in terms of accuracy (%). * means the result is significant at p < 0.01 using a paired t-test comparing to BART MLM and BART classification.
Table 5 reports the ACD results on Rest14. Following Pontiki et al. (2014b), we use Micro-F1 for evaluation. Again, BART generation achieves better results than BART classification and BART MLM, and our model outperforms all baselines.

Model                       Rest14                  MAMS
                            P      R      F1        P      R      F1
Pipeline BART generation    82.03  76.46  79.15     77.04  71.92  74.39
Joint BERT classification   77.75  76.07  76.90     74.14  71.92  73.01
Joint BART classification   81.92  73.59  77.53     74.59  74.13  74.36
Joint BART MLM              81.88  76.73  79.22     75.32  75.07  75.19
Joint BART generation       82.76  81.91  82.33     77.18  76.58  76.88

Table 6: Performance on the combination setting.
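The Micro-F1 used for the ACD evaluation above can be computed over binarized category sets; a minimal scikit-learn sketch (the example label sets are illustrative):

from sklearn.metrics import f1_score
from sklearn.preprocessing import MultiLabelBinarizer

gold = [{"food", "price"}, {"service"}, {"ambience"}]    # gold categories
pred = [{"food"}, {"service"}, {"ambience", "price"}]    # predicted categories

mlb = MultiLabelBinarizer().fit(gold + pred)
print(f1_score(mlb.transform(gold), mlb.transform(pred), average="micro"))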
Figure 4: Few-shot ACSA performance on different test sets.
Model                 P      R      F1
BERT classification   90.50  86.68  88.50
BART classification   90.67  88.34  89.49
BART MLM              90.57  88.86  89.71
BART generation       90.71  90.16  90.43

Table 7: MAMS results: Aspect Category Detection.
Table 8: Zero-shot ACSA results. R → M indicates training on Rest14 and testing on MAMS; M → R indicates training on MAMS and testing on Rest14.
Table 9: Statistics of the sentence-level datasets.

Dataset        #docs    #words/doc   #words/sent
TripAdvisor    29,391   251.7        18.0
BeerAdvocate   51,020   144.5        12.1

Table 10: Statistics of the document-level datasets. The rating scale of the TripAdvisor dataset is 1-5; the rating scale of the BeerAdvocate dataset is 1-10.

Rest14-hard Following Xue and Li (2018), we construct Rest14-hard, where the training and development sets are the same as Rest14's, while the test set is constructed from the test set of Rest14. The test set of Rest14-hard only includes sentences containing at least two aspect categories with different sentiment polarities.

MAMS (Jiang et al., 2019) Since the test set of Rest14-hard is small, we also adopt the Multi-Aspect Multi-Sentiment dataset for Aspect Category Sentiment Analysis (denoted by MAMS). All sentences in MAMS contain multiple aspect categories with different sentiment polarities.
Acknowledgements

Zhiyang Teng is the corresponding author. We would like to thank the anonymous reviewers for their insightful comments. We gratefully acknowledge funding from the National Natural Science Foundation of China (NSFC No. 61976180).

A Datasets

A.1 Sentence-Level Datasets

Rest14 (Pontiki et al., 2014a) Following previous work (Cheng et al., 2017; Tay et al., 2018; Hu et al., 2019), we remove samples with conflict polarities. Since there is no official development set for Rest14, we use the split offered by Tay et al. (2018).
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners.
Caroline Brun, Diana Nicoleta Popa, and Claude Roux. 2014. XRCE: Hybrid classification for aspect-based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval@COLING 2014), pages 838-842, Dublin, Ireland. The Association for Computer Linguistics.
Jiajun Cheng, Shenglin Zhao, Jiani Zhang, Irwin King, Xin Zhang, and Hui Wang. 2017. Aspect-level sentiment classification with HEAT (hierarchical attention) network. In Proceedings of the 2017 ACM Conference on Information and Knowledge Management, pages 97-106.
Zehui Dai, Cheng Peng, Huajie Chen, and Yadong Ding. 2020. A multi-task incremental learning framework with category name embedding for aspect-category sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 6955-6965, Online. Association for Computational Linguistics.
Angel Daza and A. Frank. 2018. A sequence-to-sequence model for semantic role labeling. In Rep4NLP@ACL.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019a. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019b. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.
Tianyu Gao, Adam Fisch, and Danqi Chen. 2020. Making pre-trained language models better few-shot learners.
Mengting Hu, Shiwan Zhao, Li Zhang, Keke Cai, Zhong Su, Renhong Cheng, and Xiaowei Shen. 2019. CAN: Constrained attention networks for multi-aspect sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4593-4602.
Qingnan Jiang, Lei Chen, Ruifeng Xu, Xiang Ao, and Min Yang. 2019. A challenge dataset and effective models for aspect-based sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6281-6286.
Svetlana Kiritchenko, Xiaodan Zhu, Colin Cherry, and Saif Mohammad. 2014. NRC-Canada-2014: Detecting aspects and sentiment in customer reviews. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval@COLING 2014), pages 437-442, Dublin, Ireland. The Association for Computer Linguistics.
Tao Lei, Regina Barzilay, and Tommi S. Jaakkola. 2016. Rationalizing neural predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pages 107-117, Austin, Texas, USA. The Association for Computational Linguistics.
Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.
Xiaoya Li, Fan Yin, Zijun Sun, Xiayu Li, Arianna Yuan, Duo Chai, Mingxin Zhou, and Jiwei Li. 2019. Entity-relation extraction as multi-turn question answering. In Proceedings of the 57th Conference of the Association for Computational Linguistics (ACL 2019), Volume 1: Long Papers, pages 1340-1350, Florence, Italy. Association for Computational Linguistics.
Yuncong Li, Zhe Yang, Cunxiang Yin, Xu Pan, Lunan Cui, Qiang Huang, and Ting Wei. 2020a. A joint model for aspect-category sentiment analysis with shared sentiment prediction layer. In Proceedings of the 19th Chinese National Conference on Computational Linguistics, pages 1112-1121, Haikou, China. Chinese Information Processing Society of China.
Yuncong Li, Cunxiang Yin, Sheng-hua Zhong, and Xu Pan. 2020b. Multi-instance multi-label learning networks for aspect-category sentiment analysis. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), pages 3550-3560, Online. Association for Computational Linguistics.
Yunlong Liang, Fandong Meng, Jinchao Zhang, Jinan Xu, Yufeng Chen, and Jie Zhou. 2019. A novel aspect-guided deep transition model for aspect based sentiment analysis. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5572-5584.
Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.
Chunpeng Ma, L. Liu, Akihiro Tamura, T. Zhao, and E. Sumita. 2017. Deterministic attention for sequence-to-sequence constituent parsing. In AAAI.
Julian J. McAuley, Jure Leskovec, and Dan Jurafsky. 2012. Learning attitudes and attributes from multi-aspect reviews. CoRR, abs/1210.3926.
Fabio Petroni, Tim Rocktäschel, Sebastian Riedel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, and Alexander Miller. 2019. Language models as knowledge bases? In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2463-2473, Hong Kong, China. Association for Computational Linguistics.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014a. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 27-35, Dublin, Ireland. Association for Computational Linguistics.
Maria Pontiki, Dimitris Galanis, John Pavlopoulos, Harris Papageorgiou, Ion Androutsopoulos, and Suresh Manandhar. 2014b. SemEval-2014 task 4: Aspect based sentiment analysis. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval@COLING 2014), pages 27-35, Dublin, Ireland. The Association for Computer Linguistics.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21:140:1-140:67.
Sebastian Ruder, Parsa Ghaffari, and John G. Breslin. 2016. A hierarchical model of reviews for aspect-based sentiment analysis. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 999-1005.
Timo Schick, Helmut Schmid, and Hinrich Schütze. 2020. Automatically identifying words that can serve as labels for few-shot text classification. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5569-5578, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Timo Schick and Hinrich Schütze. 2020. Exploiting cloze questions for few shot text classification and natural language inference.
Martin Schmitt, Simon Steinheber, Konrad Schreiber, and Benjamin Roth. 2018. Joint aspect and polarity classification for aspect-based sentiment analysis with end-to-end neural networks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 1109-1114.
Gabriel Stanovsky and Ido Dagan. 2018. Semantics as a foreign language. In EMNLP.
Chi Sun, Luyao Huang, and Xipeng Qiu. 2019. Utilizing BERT for aspect-based sentiment analysis via constructing auxiliary sentence. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 380-385.
Duyu Tang, Bing Qin, and Ting Liu. 2015. Document modeling with gated recurrent neural network for sentiment classification. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing (EMNLP 2015), pages 1422-1432, Lisbon, Portugal. The Association for Computational Linguistics.
Yi Tay, Luu Anh Tuan, and Siu Cheung Hui. 2018. Learning to attend via word-aspect associative fusion for aspect-based sentiment analysis. In Thirty-Second AAAI Conference on Artificial Intelligence.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30, pages 5998-6008. Curran Associates, Inc.
Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Grammar as a foreign language. In NIPS.
Hongning Wang, Yue Lu, and Chengxiang Zhai. 2010. Latent aspect rating analysis on review text data: A rating regression approach. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 783-792, Washington, DC, USA. ACM.
Yequan Wang, Minlie Huang, Xiaoyan Zhu, and Li Zhao. 2016. Attention-based LSTM for aspect-level sentiment classification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 606-615.
Yequan Wang, Aixin Sun, Minlie Huang, and Xiaoyan Zhu. 2019. Aspect-level sentiment analysis using AS-capsules. In The World Wide Web Conference, pages 2033-2044.
Bowen Xing, Lejian Liao, Dandan Song, Jingang Wang, Fuzheng Zhang, Zhongyuan Wang, and Heyan Huang. 2019. Earlier attention? Aspect-aware LSTM for aspect sentiment analysis. arXiv preprint arXiv:1905.07719.
Wei Xue and Tao Li. 2018. Aspect based sentiment analysis with gated convolutional networks. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2514-2523.
Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alexander J. Smola, and Eduard H. Hovy. 2016. Hierarchical attention networks for document classification. In NAACL HLT 2016, pages 1480-1489, San Diego, California, USA.
Yichun Yin, Yangqiu Song, and Ming Zhang. 2017. Document-level multi-aspect sentiment classification as machine comprehension. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (EMNLP 2017), pages 2044-2054, Copenhagen, Denmark.
Peisong Zhu, Zhuang Chen, Haojie Zheng, and Tieyun Qian. 2019. Aspect aware learning for aspect category sentiment analysis. ACM Transactions on Knowledge Discovery from Data, 13(6).
| [
"https://github.com/google-research/"
] |
[
"Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features",
"Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features"
] | [
"Didan Deng \nDepartment of Electronic and Computer Engineering\nHong Kong University of Science and Technology\n\n",
"Yuqian Zhou yuqian2@illinois.edu \nIFP\nUniversity of Illinois at Urbana-Champaign\nBeckman\n",
"Jimin Pi \nDepartment of Electronic and Computer Engineering\nHong Kong University of Science and Technology\n\n",
"Bertram E Shi \nDepartment of Electronic and Computer Engineering\nHong Kong University of Science and Technology\n\n"
] | [
"Department of Electronic and Computer Engineering\nHong Kong University of Science and Technology\n",
"IFP\nUniversity of Illinois at Urbana-Champaign\nBeckman",
"Department of Electronic and Computer Engineering\nHong Kong University of Science and Technology\n",
"Department of Electronic and Computer Engineering\nHong Kong University of Science and Technology\n"
] | [] | The integration of information across multiple modalities and across time is a promising way to enhance the emotion recognition performance of affective systems. Much previous work has focused on instantaneous emotion recognition. The 2018 One-Minute Gradual-Emotion Recognition (OMG-Emotion) challenge, which was held in conjunction with the IEEE World Congress on Computational Intelligence, encouraged participants to address long-term emotion recognition by integrating cues from multiple modalities, including facial expression, audio and language. Intuitively, a multi-modal inference network should be able to leverage information from each modality and their correlations to improve recognition over that achievable by a single modality network. We describe here a multi-modal neural architecture that integrates visual information over time using an LSTM, and combines it with utterance level audio and text cues to recognize human sentiment from multimodal clips. Our model outperforms the unimodal baseline, achieving the concordance correlation coefficients (CCC) of 0.400 on the arousal task, and 0.353 on the valence task. | null | [
"https://arxiv.org/pdf/1805.00625v2.pdf"
] | 19,104,745 | 1805.00625 | c3602b4d6fa9b5dee47ff8a1c89a19e505e4b357 |
Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features
Didan Deng
Department of Electronic and Computer Engineering
Hong Kong University of Science and Technology
Yuqian Zhou yuqian2@illinois.edu
IFP
University of Illinois at Urbana-Champaign
Beckman
Jimin Pi
Department of Electronic and Computer Engineering
Hong Kong University of Science and Technology
Bertram E Shi
Department of Electronic and Computer Engineering
Hong Kong University of Science and Technology
Multimodal Utterance-level Affect Analysis using Visual, Audio and Text Features
The integration of information across multiple modalities and across time is a promising way to enhance the emotion recognition performance of affective systems. Much previous work has focused on instantaneous emotion recognition. The 2018 One-Minute Gradual-Emotion Recognition (OMG-Emotion) challenge, which was held in conjunction with the IEEE World Congress on Computational Intelligence, encouraged participants to address long-term emotion recognition by integrating cues from multiple modalities, including facial expression, audio and language. Intuitively, a multi-modal inference network should be able to leverage information from each modality and their correlations to improve recognition over that achievable by a single modality network. We describe here a multi-modal neural architecture that integrates visual information over time using an LSTM, and combines it with utterance-level audio and text cues to recognize human sentiment from multimodal clips. Our model outperforms the unimodal baseline, achieving concordance correlation coefficients (CCC) of 0.400 on the arousal task and 0.353 on the valence task.
I. INTRODUCTION
Sentiment analysis or affective computing systems are designed to analyze human emotional states, and may benefit the development of human-computer interaction. The basic tasks include recognition of human sentiment using information from multiple modalities like facial expressions, body movement and gestures, speech and physiological signals. The labels for human sentiment are often either discrete categorical labels of six universal emotions (Disgust, Fear, Happiness, Surprise, Sadness, and Anger) [1], or continuous-valued annotations in the arousal and valence spaces [2]. Previous research, therefore, has normally modeled the problem as either a classification [3] or a regression [4] task, using deep models like the CNN [5], or traditional approaches like the SVM or Regression Tree [6].
Further improvements in the performance and reliability of affective systems will rely on long-term contextual information modeling and cross-modality analysis. Since emotions normally change gradually within the same context, analyzing the long-term dependency of emotions will stabilize the overall predictions. Meanwhile, humans perceive others' emotional states by combining information across multiple modalities simultaneously, so combining different modalities will yield better emotion recognition with more human-like computational models [7]. These two aspects are explicitly emphasized in the 2018 IJCNN challenge "One-Minute Gradual-Emotion Recognition (OMG-Emotion)" [8]. In this challenge, long monologue videos with gradual emotional changes are selected from YouTube and carefully annotated at the utterance level using both arousal/valence and emotion categories. All the video clips contain visual, audio and transcript information. The performance of three unimodal recognition systems is provided as the baseline.
In developing our multimodal system for sentiment analysis to address this challenge, we have been inspired by many previous works, such as those combining visual and audio features [9] as well as speech content [7, 10, 11]; physiological signals have also been combined into emotion recognition systems [12]. Methods of combining cues from each modality can be categorized into early or late fusion. For early fusion, features from different modalities are projected into the same joint feature space before being fed into the classifier [13, 14]. For late fusion, classifications are made on each modality separately and their decisions or predictions are later merged, e.g., by taking the mean or another linear combination [15, 16]. Some works [17, 18] even implement a hybrid fusion strategy to exploit the advantages of both late and early fusion.
In this paper, we investigated the use of a number of feature extraction, classification and fusion methods. Our final trimodal method aggregates visual, audio and text features for a single-shot utterance-level sentiment regression using early fusion. To verify the effectiveness of multimodal fusion, we compared it with three unimodal methods. Our proposed multimodal approach outperformed the unimodal ones as well as the baseline methods, achieving validation set concordance correlation coefficients (CCC) of 0.400 on the arousal task, and 0.353 on the valence task.
II. METHODOLOGY
A. Dataset and Metrics
The OMG-Emotion Behavior Dataset [8] is a long-term multimodal corpus for sentiment analysis. It is constructed by selecting videos with emotional behaviors from YouTube using keywords like "monologues", "auditions", etc. Most videos in the OMG dataset have a standard resolution of 1280x720, and the main language is English. Utterances are then extracted from the segments of each video with high speech probability. The dataset is split into training, validation and testing sets: there are 231 videos in the training set, 60 videos in the validation set, and 204 videos in the testing set, containing 2440, 617 and 2229 utterances, respectively. Each utterance is annotated with arousal/valence values in the dimensional space, as well as seven discrete emotion labels. Arousal is a continuous score ranging from 0 (calm) to 1 (excited), while valence is a continuous score ranging from -1 (negative) to +1 (positive).
The following two metrics are used to evaluate the arousal/valence estimation over this dataset: MSE (mean squared error) and CCC (the concordance correlation coefficient). The CCC is defined as

$$\rho_c = \frac{2\rho\,\sigma_{Gnd}\,\sigma_{Pred}}{\sigma_{Gnd}^2 + \sigma_{Pred}^2 + (\mu_{Gnd} - \mu_{Pred})^2} \tag{1}$$
where $\rho$ is the correlation coefficient between the predictions and the ground truth, $\mu_{Gnd}$ and $\mu_{Pred}$ denote the corresponding means, and $\sigma_{Gnd}^2$ and $\sigma_{Pred}^2$ the corresponding variances.
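A minimal NumPy transcription of Eq. (1) (function and variable names are our own):

import numpy as np

def ccc(pred, gnd):
    """Concordance correlation coefficient between predictions and ground truth."""
    pred, gnd = np.asarray(pred, dtype=float), np.asarray(gnd, dtype=float)
    mu_p, mu_g = pred.mean(), gnd.mean()
    # rho * sigma_pred * sigma_gnd equals the covariance of the two series.
    cov = ((pred - mu_p) * (gnd - mu_g)).mean()
    return 2 * cov / (pred.var() + gnd.var() + (mu_g - mu_p) ** 2)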
B. System Architecture

Figure 1 shows the architecture of our proposed model. Our deep neural network model consists of three parts: (1) the subnetworks for each single modality; (2) the early fusion layer, which concatenates the three unimodal representations; and (3) the final decision layer, which estimates the sentiment.
1) Visual Subnetwork: Visual features consist of OpenFace [19] estimates on the whole frames and the VGG-Face representation [20] of facial regions. For the OpenFace features, we use the OpenFace toolkit to extract the estimated 68 facial landmarks in both 2D and 3D world coordinates, the 3D eye gaze direction vector, head pose, rigid head shape, and Facial Action Unit intensities [21] indicating facial muscle movements. Detailed feature descriptions are given in [22]. These visual descriptors are regarded as strong indicators of human emotions and sentiments [12, 23]. For the VGG-Face representation, the facial region in each frame is cropped and aligned using the 3D Constrained Local Model described in [24]. We zero out the background according to the face contour indicated by the facial landmarks. The cropped faces are then resized to 224x224x3 and fed into a VGG-Face model pretrained on a large face dataset. We take the 4096-dimensional feature vector from the fc6 layer and concatenate it with the visual features extracted by OpenFace. The total dimension of the concatenated features is 4805.
The concatenated visual features from a single utterance are further fed into an LSTM layer with 64 hidden units, followed by a dense layer with 256 hidden neurons, for temporal modeling. Specifically, 20 frames are uniformly sampled from each utterance and fed into the network for training and testing. For utterances shorter than 20 frames, we duplicate the last frame to fill the gap.
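A minimal PyTorch sketch of this temporal model and the frame sampling rule (the 4805-dimensional input, 64 LSTM units, 256-unit dense layer, and 20-frame sampling follow the text; taking the last LSTM time step and all names are illustrative assumptions):

import torch
import torch.nn as nn

class VisualSubnet(nn.Module):
    """LSTM over per-frame visual features, followed by a dense projection."""
    def __init__(self, feat_dim=4805, lstm_units=64, dense_units=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, lstm_units, batch_first=True)
        self.dense = nn.Linear(lstm_units, dense_units)

    def forward(self, frames):                      # frames: (batch, 20, 4805)
        out, _ = self.lstm(frames)
        return torch.relu(self.dense(out[:, -1]))   # (batch, 256)

def sample_frames(frames, n=20):
    """Uniformly sample n frames; duplicate the last frame when too short."""
    if len(frames) >= n:
        idx = [round(i * (len(frames) - 1) / max(n - 1, 1)) for i in range(n)]
        return [frames[i] for i in idx]
    return list(frames) + [frames[-1]] * (n - len(frames))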
2) Audio Subnetwork: Audio features are extracted using the openSMILE toolkit [25], and we use the same feature set as suggested in the INTERSPEECH 2010 paralinguistics challenge [26]. The set contains Mel Frequency Cepstral Coefficients (MFCCs), ∆MFCC, loudness, pitch, jitter, etc. [27]. These features describe the prosodic patterns of different speakers and are consistent signs of their affective states. For each utterance, we extract a 1582-dimensional feature vector from the audio signal. These audio features are then fed into a fully connected layer with 256 units.
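A hedged sketch of driving this extraction from Python (the SMILExtract binary and the config file path depend on the local openSMILE installation and version, so both are assumptions; -C/-I/-O are the standard config/input/output flags):

import subprocess

def extract_opensmile(wav_path, out_path,
                      smile_bin="SMILExtract",
                      config="config/IS10_paraling.conf"):
    """Run openSMILE on one utterance and write its 1582-dim feature vector."""
    subprocess.run([smile_bin, "-C", config, "-I", wav_path, "-O", out_path],
                   check=True)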
3) Text Subnetwork: We use two opinion lexicons to analyze patterns in the language content. The first is Bing Liu's opinion lexicon [28], with 2006 positive words and 4783 negative words. The second is the MPQA subjectivity lexicon [29], with 2718 positive words and 4913 negative words. For each utterance, we compute the frequencies of positive and negative words according to the two lexicons, as well as the total number of words in the utterance. For utterances without a transcript, we replicate the transcript of the closest utterance in time. We also extract the word frequencies over the entire video and assign them as features to all utterances in the same video. The final dimension of the word features is 10, comprising the utterance-level and video-level word frequencies from the two lexicons and the total word counts. These text features are also fed into a fully connected layer with 256 units.
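A minimal sketch of these lexicon counts for one utterance (whitespace tokenization and the function interface are illustrative; the two set pairs stand in for Bing Liu's and the MPQA lexicons):

def lexicon_features(text, lexicons):
    """Positive/negative hit counts per lexicon plus the total word count.

    `lexicons` is a list of (positive_words, negative_words) set pairs,
    e.g. [(liu_pos, liu_neg), (mpqa_pos, mpqa_neg)].
    """
    words = text.lower().split()
    feats = []
    for pos_set, neg_set in lexicons:
        feats.append(sum(w in pos_set for w in words))   # positive hits
        feats.append(sum(w in neg_set for w in words))   # negative hits
    feats.append(len(words))                             # total word count
    return feats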
4) Fusion and Decision Layers:
We combine cues from the three modalities using an early fusion strategy. The aggregated feature vector is fed into a fully connected two-layer neural network with 1024 hidden units and a single output neuron, activated by a sigmoid (for the arousal task) or a hyperbolic tangent (for the valence task). We first use MSE as the loss function for joint training, and then apply the 1 − ρ_c loss for further refinement.
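A PyTorch sketch of this fusion head and the 1 − ρ_c refinement loss (the 1024 hidden units and the sigmoid/tanh outputs follow the text; the 768-dimensional input assumes three concatenated 256-dimensional unimodal vectors, and the 0.5 dropout ratio follows the experimental settings in Section III):

import torch
import torch.nn as nn

class FusionHead(nn.Module):
    """Early fusion: concatenate unimodal vectors, two FC layers, one output."""
    def __init__(self, in_dim=3 * 256, hidden=1024, task="arousal"):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Dropout(0.5), nn.Linear(hidden, 1))
        self.act = torch.sigmoid if task == "arousal" else torch.tanh

    def forward(self, visual, audio, text):
        return self.act(self.net(torch.cat([visual, audio, text], dim=-1)))

def ccc_loss(pred, gnd):
    """1 - concordance correlation coefficient, used for fine-tuning."""
    mu_p, mu_g = pred.mean(), gnd.mean()
    cov = ((pred - mu_p) * (gnd - mu_g)).mean()
    rho_c = 2 * cov / (pred.var(unbiased=False) + gnd.var(unbiased=False)
                       + (mu_p - mu_g) ** 2)
    return 1 - rho_c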
For comparison, we also design a late fusion strategy. In this case, we add a decision layer to each subnetwork and combine the three predictions using a linear regression trained with MSE.
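A minimal scikit-learn sketch of this late fusion step (the dummy prediction arrays are illustrative; LinearRegression minimizes MSE, matching the text):

import numpy as np
from sklearn.linear_model import LinearRegression

visual_pred = np.array([0.20, 0.50, 0.70])   # per-utterance subnet outputs
audio_pred  = np.array([0.30, 0.40, 0.60])
text_pred   = np.array([0.10, 0.60, 0.80])
gold        = np.array([0.25, 0.50, 0.70])

X = np.column_stack([visual_pred, audio_pred, text_pred])
late_fusion = LinearRegression().fit(X, gold)
fused_pred = late_fusion.predict(X)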
III. EXPERIMENTS
We trained and evaluated the multimodal network on the OMG dataset. The model was trained for at most 300 epochs. To prevent overfitting, we applied an early-stopping policy with a patience of 20 epochs, i.e., training stops once the validation loss has not dropped for 20 epochs, and we applied dropout with a ratio of 0.5 to each fully connected layer. The learning rate was 1e-2 for the arousal task and 1e-3 for the valence task.
A. Unimodal Approach
We first evaluated the performance of models trained on a single modality. For each unimodal model, the same decision layer introduced in Section II-B4 was deployed.
For the visual unimodal model, we investigated the effectiveness of the VGG-Face and OpenFace features separately in an ablation test. The comparison results are shown in Table I. Our results demonstrate that the VGG-Face features outperform the OpenFace features under the same model architecture, and better performance on both the arousal and valence tasks is achieved when the two feature sets are fused.
For the audio network, we focused on studying the importance of temporal modeling within an utterance. We implemented an alternative LSTM-based network for the audio modality: we divided each audio file into frames of 0.5 seconds, extracted openSMILE features for each frame, and fed these features into a 64-cell LSTM layer followed by the decision layer. We compared this LSTM-based model with the audio unimodal model described in Section II-B2. The results in Table I show that the model without the LSTM performs better than the audio model with the LSTM; the LSTM layer does not benefit the estimation.
For the text modality, we compared the proposed word frequency statistical approach with models using pretrained word embeddings and LSTM layers, as is common in natural language processing (NLP). We implemented the latter approach, the Text (LSTM) model, using 100-dimensional GloVe word vectors pretrained on English Wikipedia [30] and a 64-cell LSTM layer, and compared its performance with the text unimodal model using the simple opinion lexicon features. The results are shown in Table I. Surprisingly, the simple lexicon features performed better. This stems from the frequent errors introduced by the automatic speech recognition tool used to transcribe this dataset; the opinion lexicon features largely ignore such errors by only counting words that appear in the opinion lexicons.
B. Multimodal Approach
We trained the trimodal network using the concatenated multimodal features. With respect to fusion strategies, we compared the early and late feature fusion strategies in Table II. The results demonstrate that learning benefits more from the early-fused representation. The performance is further improved by fine-tuning the system with the 1 − ρ_c loss. Table III compares the performance of our unimodal and multimodal systems with the baseline results; the trimodal model performs better than any of the unimodal models.
IV. CONCLUSION
In this paper, we propose a multimodal system that utilizes visual, audio and text features to perform continuous affect prediction at the utterance level. An early feature fusion strategy is deployed, and the CCC loss is applied directly for network fine-tuning to boost the estimation performance. On the OMG dataset, both our unimodal and multimodal models outperform the baseline methods significantly. Our results show that cross-modal information greatly benefits the estimation of long-term affective states.
Fig. 1: The architecture of the proposed model. The unimodal features are extracted separately and concatenated in an early fusion strategy. A two-layer fully-connected neural network is applied to estimate the arousal and valence of a single utterance.
TABLE I: The ablation test of unimodal models.

Models                   Arousal           Valence
                         CCC     MSE       CCC     MSE
Visual (VGG-Face)        0.109   0.047     0.237   0.110
Visual (OpenFace)        0.046   0.047     0.080   0.122
Visual (Fused Feature)   0.175   0.047     0.261   0.122
Audio (with LSTM)        0.146   0.044     0.154   0.106
Audio (without LSTM)     0.273   0.054     0.266   0.108
Text (Word Embedding)    0.007   0.048     0.098   0.120
Text (Lexicon)           0.137   0.044     0.259   0.108
TABLE II: The performance of two fusion strategies.

Fusion Methods              Arousal           Valence
                            CCC     MSE       CCC     MSE
Late Fusion                 0.311   0.046     0.280   0.106
Early Fusion                0.386   0.054     0.305   0.105
Early Fusion (Fine-Tuned)   0.400   0.058     0.353   0.136
TABLE III: The performance on the validation partition.

           Arousal                            Valence
           Baseline       Ours                Baseline       Ours
Model      CCC    MSE     CCC    MSE          CCC    MSE     CCC    MSE
Audio      0.122  0.04    0.273  0.054        0.049  0.013   0.266  0.108
Video      0.159  0.05    0.175  0.047        0.219  0.15    0.261  0.122
Text       0.003  0.04    0.137  0.044        0.068  0.13    0.259  0.108
Trimodal   None   None    0.400  0.058        None   None    0.353  0.136
[1] P. Ekman and W. V. Friesen, "Constants across cultures in the face and emotion," Journal of Personality and Social Psychology, vol. 17, no. 2, p. 124, 1971.
[2] R. A. Thompson, "Methods and measures in developmental emotions research: Some assembly required," Journal of Experimental Child Psychology, vol. 110, no. 2, pp. 275-285, 2011.
[3] Y. Zhou and B. E. Shi, "Action unit selective feature maps in deep networks for facial expression recognition," in Neural Networks (IJCNN), 2017 International Joint Conference on, pp. 2031-2038, IEEE, 2017.
[4] Y. Zhou, J. Pi, and B. E. Shi, "Pose-independent facial action unit intensity regression based on multi-task deep transfer learning," in Automatic Face & Gesture Recognition (FG 2017), 2017 12th IEEE International Conference on, pp. 872-877, IEEE, 2017.
[5] P. Khorrami, T. Paine, and T. Huang, "Do deep neural networks learn facial action units when doing expression recognition?," in Proceedings of the IEEE International Conference on Computer Vision Workshops, pp. 19-27, 2015.
[6] C. Shan, S. Gong, and P. W. McOwan, "Facial expression recognition based on local binary patterns: A comprehensive study," Image and Vision Computing, vol. 27, no. 6, pp. 803-816, 2009.
[7] L.-P. Morency, R. Mihalcea, and P. Doshi, "Towards multimodal sentiment analysis: Harvesting opinions from the web," in Proceedings of the 13th International Conference on Multimodal Interfaces, pp. 169-176, ACM, 2011.
[8] P. Barros, N. Churamani, E. Lakomkin, H. Siqueira, A. Sutherland, and S. Wermter, "The OMG-Emotion behavior dataset," arXiv preprint arXiv:1803.05434, 2018.
[9] P. Tzirakis, G. Trigeorgis, M. A. Nicolaou, B. W. Schuller, and S. Zafeiriou, "End-to-end multimodal emotion recognition using deep neural networks," IEEE Journal of Selected Topics in Signal Processing, vol. 11, no. 8, pp. 1301-1309, 2017.
[10] S. Poria, E. Cambria, N. Howard, G.-B. Huang, and A. Hussain, "Fusing audio, visual and textual clues for sentiment analysis from multimodal content," Neurocomputing, vol. 174, pp. 50-59, 2016.
[11] A. Zadeh, M. Chen, S. Poria, E. Cambria, and L.-P. Morency, "Tensor fusion network for multimodal sentiment analysis," arXiv preprint arXiv:1707.07250, 2017.
[12] H. Ranganathan, S. Chakraborty, and S. Panchanathan, "Multimodal emotion recognition using deep learning architectures," in Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pp. 1-9, IEEE, 2016.
[13] V. P. Rosas, R. Mihalcea, and L.-P. Morency, "Multimodal sentiment analysis of Spanish online videos," IEEE Intelligent Systems, vol. 28, no. 3, pp. 38-45, 2013.
[14] S. Poria, E. Cambria, A. Hussain, and G.-B. Huang, "Towards an intelligent framework for multimodal affective data analysis," Neural Networks, vol. 63, pp. 104-116, 2015.
[15] G. Cai and B. Xia, "Convolutional neural networks for multimedia sentiment analysis," in Natural Language Processing and Chinese Computing, pp. 159-167, Springer, 2015.
[16] M. Glodek, S. Reuter, M. Schels, K. Dietmayer, and F. Schwenker, "Kalman filter based classifier fusion for affective state recognition," in International Workshop on Multiple Classifier Systems, pp. 85-94, Springer, 2013.
[17] L. Kessous, G. Castellano, and G. Caridakis, "Multimodal emotion recognition in speech-based interaction using facial expression, body gesture and acoustic analysis," Journal on Multimodal User Interfaces, vol. 3, no. 1-2, pp. 33-48, 2010.
[18] S. Poria, E. Cambria, and A. Gelbukh, "Deep convolutional neural network textual features and multiple kernel learning for utterance-level multimodal sentiment analysis," in Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pp. 2539-2544, 2015.
[19] T. Baltrušaitis, P. Robinson, and L.-P. Morency, "OpenFace: An open source facial behavior analysis toolkit," in Applications of Computer Vision (WACV), 2016 IEEE Winter Conference on, pp. 1-10, IEEE, 2016.
[20] O. M. Parkhi, A. Vedaldi, A. Zisserman, et al., "Deep face recognition," in BMVC, vol. 1, p. 6, 2015.
[21] P. Ekman and W. V. Friesen, Facial Action Coding System: Investigator's Guide. Consulting Psychologists Press, 1978.
[22] "OpenFace output format." https://github.com/TadasBaltrusaitis/OpenFace/wiki/Output-Format. Accessed: 2018-4-30.
[23] M. Soleymani, M. Pantic, and T. Pun, "Multimodal emotion recognition in response to videos," IEEE Transactions on Affective Computing, vol. 3, no. 2, pp. 211-223, 2012.
[24] T. Baltrušaitis, P. Robinson, and L.-P. Morency, "3D constrained local model for rigid and non-rigid facial tracking," in Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pp. 2610-2617, IEEE, 2012.
[25] F. Eyben, M. Wöllmer, and B. Schuller, "openSMILE: The Munich versatile and fast open-source audio feature extractor," in Proceedings of the 18th ACM International Conference on Multimedia, pp. 1459-1462, ACM, 2010.
[26] B. Schuller, S. Steidl, A. Batliner, F. Burkhardt, L. Devillers, C. Müller, and S. Narayanan, "The INTERSPEECH 2010 paralinguistic challenge," in Proc. INTERSPEECH 2010, Makuhari, Japan, pp. 2794-2797, 2010.
[27] "openSMILE emobase2010 features." https:
[28] X. Ding, B. Liu, and P. S. Yu, "A holistic lexicon-based approach to opinion mining," in Proceedings of the 2008 International Conference on Web Search and Data Mining, pp. 231-240, ACM, 2008.
[29] T. Wilson, J. Wiebe, and P. Hoffmann, "Recognizing contextual polarity in phrase-level sentiment analysis," in Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing, pp. 347-354, Association for Computational Linguistics, 2005.
[30] J. Pennington, R. Socher, and C. Manning, "GloVe: Global vectors for word representation," in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014.
| [] |
[
"Label-aware Double Transfer Learning for Cross-Specialty Medical Named Entity Recognition",
"Label-aware Double Transfer Learning for Cross-Specialty Medical Named Entity Recognition"
] | [
"Zhenghui Wang \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Yanru Qu \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Liheng Chen \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Jian Shen \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Weinan Zhang \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Shaodian Zhang shaodian@apex.sjtu.edu.cnchen.ken@synyi.com \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Yimei Gao \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Gen Gu \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Ken Chen \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n",
"Yong Yu \nAPEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai\n"
] | [
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai",
"APEX Data and Knowledge Management Lab\nJiao Tong University\nShanghai"
] | [
"Proceedings of NAACL-HLT 2018"
] | We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a label-aware double transfer learning framework (La-DTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label-aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that La-DTL has the potential to be seamlessly adapted to a wide range of NER tasks. | 10.18653/v1/n18-1001 | [
"https://www.aclweb.org/anthology/N18-1001.pdf"
] | 13,751,762 | 1804.09021 | 8b93193cc0beb3b2e0a653fa1c959aee06aef044 |
Label-aware Double Transfer Learning for Cross-Specialty Medical Named Entity Recognition
June 1 -6, 2018
Zhenghui Wang, Yanru Qu, Liheng Chen, Jian Shen, Weinan Zhang*, Shaodian Zhang (shaodian@apex.sjtu.edu.cn), Yimei Gao, Gen Gu, Ken Chen (chen.ken@synyi.com), Yong Yu

APEX Data and Knowledge Management Lab, Shanghai Jiao Tong University, Shanghai

* Weinan Zhang is the corresponding author.
Proceedings of NAACL-HLT 2018, New Orleans, Louisiana, June 1-6, 2018.
We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a label-aware double transfer learning framework (La-DTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label-aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that La-DTL has the potential to be seamlessly adapted to a wide range of NER tasks.
Introduction
The development of hospital information systems and medical informatics drives the use of various medical data for more efficient and intelligent medical care services. Among the many kinds of medical data, electronic health records (EHRs) are one of the most valuable and informative, as they contain detailed information about patients and clinical practices. EHRs are essential to many intelligent clinical applications, such as hospital quality control and clinical decision support systems (Wu et al., 2015). Most EHRs are recorded in an unstructured form, i.e., natural language. Hence, extracting structured information from EHRs using natural language processing (NLP), e.g., named entity recognition (NER) and entity linking, plays a fundamental role in medical informatics (Zhang and Elhadad, 2013). In this paper, we focus on medical NER from EHRs, which is a fundamental task widely studied in the research community (Nadeau and Sekine, 2007; Uzuner et al., 2011).
In practice, the difficulty of building a universally robust and high-performance medical NER system lies in the variety of medical terminologies and expressions across departments of different specialties and hospitals. However, building separate NER systems for so many specialties comes at a prohibitively high cost. Data privacy issues further discourage the sharing of data across departments or hospitals, making it more difficult to train a canonical NER system that can be applied everywhere. This raises a natural question: if we have sufficient annotated EHR data in one source specialty, can we distill the knowledge and transfer it to help train models in a related target specialty with few annotations? By transferring the knowledge, we can achieve higher performance in target specialties at a lower annotation cost and bypass the data sharing concerns. This is commonly referred to as transfer learning (Pan and Yang, 2010).
Current state-of-the-art transfer learning methods for NER are mainly based on deep neural networks, which perform end-to-end training to distill sequential dependency patterns in natural language (Ma and Hovy, 2016; Lample et al., 2016). These transfer learning methods include (i) feature representation transfer (Peng and Dredze, 2017; Kulkarni et al., 2016), which normally leverages deep neural networks to learn a close feature mapping between the source and target domains, and (ii) parameter transfer (Murthy et al., 2016; Yang et al., 2017), which performs parameter sharing or joint training to bring the target-domain model parameters close to those of the source-domain model. To the best of our knowledge, there is no previous literature on transfer learning for NER in the medical domain, or even in the larger scope of medical natural language processing.
In this paper, we propose a novel NER transfer learning framework, namely label-aware double transfer learning (La-DTL): (i) We leverage a bidirectional long short-term memory (Bi-LSTM) network (Graves and Schmidhuber, 2005) to automatically learn the text representations, based on which we perform label-aware feature representation transfer. We propose a variant of maximum mean discrepancy (MMD) (Gretton et al., 2012), namely label-aware MMD (La-MMD), to explicitly reduce the domain discrepancy of the feature representations of tokens with the same label between the two domains. (ii) Based on the learned feature representations from the Bi-LSTM, two conditional random field (CRF) models perform sequence labeling for the source and target domains separately, where parameter transfer learning is performed. Specifically, an upper bound on the KL divergence between the source and target domains' CRF label distributions is imposed via the emission and transition matrices of the source and target CRF models to explore the shareable parts of the parameters. Both (i) and (ii) have a label-aware characteristic, which will be discussed later. We further argue that the label-aware characteristic is crucial for transfer learning in sequence labeling problems such as NER, because only when the corresponding labels are matched can "similar" contexts (i.e., feature representations) and model parameters be efficiently borrowed to improve label prediction.
Extensive experiments are conducted on 12 cross-specialty medical NER tasks with real-world EHRs. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines, with overall 2.62% to 6.70% absolute F1-score improvement over the state-of-the-art methods. Besides, the promising experimental results on other two non-medical NER scenarios indicate that La-DTL has the potential to be seamlessly adapted to a wide range of NER tasks.
Related Works
Named Entity Recognition (NER) is fundamental in the information extraction area, aiming at the automatic detection of named entities (e.g., person, organization, location and geo-political) in free text (Marrero et al., 2013). Many high-level applications such as entity linking (Moro et al., 2014) and knowledge graph construction (Hachey et al., 2011) can be built on top of an NER system. Traditional high-performance approaches include conditional random field models (CRFs) (Lafferty et al., 2001), maximum entropy Markov models (MEMMs) (McCallum et al., 2000) and hidden Markov models (HMMs). Recently, many neural network-based models have been proposed (Collobert et al., 2011; Chiu and Nichols, 2016; Ma and Hovy, 2016; Lample et al., 2016), in which little feature engineering is needed to train a high-performance NER system. The architectures of these neural network-based models are similar: different neural networks (LSTMs, CNNs) at different levels (char- and word-level) are applied to learn feature representations, and on top of the neural networks, a CRF model is employed to make label predictions.

Transfer Learning distills knowledge from a source domain to help create a high-performance learner for a target domain. Transfer learning algorithms are mainly categorized into three types, namely instance transfer, feature representation transfer and parameter transfer (Pan and Yang, 2010). Instance transfer normally samples or re-weights source-domain samples to match the distribution of the target domain (Chen et al., 2011; Chu et al., 2013). Feature representation transfer typically learns a feature mapping which projects source and target domain data simultaneously onto a common feature space following similar distributions (Zhuang et al., 2015; Long et al., 2015; Shen et al., 2017). Parameter transfer normally involves joint or constrained training of the models on the source and target domains, and usually introduces connections between source and target parameters via sharing (Srivastava and Salakhutdinov, 2013), initialization (Perlich et al., 2014), or inter-model parameter penalty schemes (Zhang et al., 2016).

Transfer Learning for NER Training a high-performance NER system requires expensive and time-consuming manually annotated data, yet sufficient labeled data is critical for the generalization of an NER system, especially for neural network-based models. Thus, transfer learning for NER is a practically important problem. The first group of methods focuses on sharing model parameters but differs in the training schemes. He and Sun (2017) proposed to train a parameter-shared model with source and target data jointly, while the learning rates for sentences from the source domain are re-weighted by their similarity with the target domain corpus. Yang et al. (2017) proposed a family of frameworks which share model parameters in hierarchical recurrent networks to handle cross-application, cross-lingual, and cross-domain transfer in sequence labeling tasks. Differently, Lee et al. (2017) first trained the model with source domain data and then fine-tuned the model with a little annotated target domain data.
Domain adaptation methods have been well studied in NER scenarios, such as using distributed word representations (Kulkarni et al., 2016) and leveraging rule-based annotators (Chiticariu et al., 2010). Multi-task learning has also been studied to improve performance on multiple NER tasks by transferring meaningful knowledge across tasks (Collobert et al., 2011; Peng and Dredze, 2016). To take advantage of both domain adaptation and multi-task learning, Peng and Dredze (2017) proposed a multi-task domain adaptation model.
Preliminaries
This section briefly introduces the bidirectional LSTM, the conditional random field and maximum mean discrepancy, which are the building blocks of our transfer learning framework.

Bidirectional LSTM Recurrent neural networks (RNNs) are widely used in NLP tasks for their great capability to capture contextual information in sequence data. A widely used variant of RNNs is long short-term memory (LSTM) (Hochreiter and Schmidhuber, 1997), which incorporates input and forget gates to capture both long and short term dependencies. Furthermore, it is beneficial to process the sequence not only in a forward but also in a backward way. Thus, bidirectional LSTM (Bi-LSTM) was employed in many previous works (Chiu and Nichols, 2016; Ma and Hovy, 2016; Lample et al., 2016) to capture bidirectional information in a sequence. More specifically, for token $x_t$ (embedding vector) at timestep $t$ in sequence $X = (x_1, x_2, \ldots, x_n)$, the $\theta_b$-parameterized Bi-LSTM recurrently updates hidden vectors $\vec{h}_t = G^f_{\theta_b}(X, \vec{h}_{t-1})$ and $\overleftarrow{h}_t = G^b_{\theta_b}(X, \overleftarrow{h}_{t+1})$, produced by a forward LSTM and a backward one, respectively. We then concatenate $\vec{h}_t$ and $\overleftarrow{h}_t$ into $h_t$ as the final hidden vector produced by the Bi-LSTM: $h_t = \vec{h}_t \oplus \overleftarrow{h}_t$. The representations learned from the Bi-LSTM for sequence $X$ are thus denoted as $H = (h_1, h_2, \ldots, h_n)$.
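As a concrete illustration of the concatenation $h_t = \vec{h}_t \oplus \overleftarrow{h}_t$, the following is a minimal sketch (not the authors' released code); the embedding and hidden dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn

emb_dim, hidden_dim, vocab_size = 128, 100, 5000   # illustrative sizes

embedding = nn.Embedding(vocab_size, emb_dim)
# bidirectional=True runs a forward and a backward LSTM and concatenates
# their per-timestep outputs, so each h_t has size 2 * hidden_dim.
bilstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

x = torch.randint(0, vocab_size, (1, 7))           # one sentence of 7 token ids
H, _ = bilstm(embedding(x))                        # H: (1, 7, 2 * hidden_dim)
assert H.shape == (1, 7, 2 * hidden_dim)           # h_t = h(forward) concat h(backward)
```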
Conditional Random Field
The goal of NER is to detect named entities in a sequence $X$ by predicting a sequence of labels $y = (y_1, y_2, \ldots, y_n)$.
Conditional random field (CRF) is widely used to make joint labeling of the tokens in a sequence (Lafferty et al., 2001).
Recently, Lample et al. (2016) proposed to build a CRF layer on top of a Bi-LSTM so that the automatically learned feature representation $H = (h_1, h_2, \ldots, h_n)$ of the sequence can be directly fed into the CRF for sequence labeling. For a sequence of labels $y$, given the hidden vector sequence $H$, we define its $\theta_c$-parameterized score function $s_{\theta_c}(H, y)$ as:
$$s_{\theta_c}(H, y) = \sum_{i=1}^{n} E_{i, y_i} + \sum_{i=1}^{n-1} A_{y_i, y_{i+1}},$$
where $E$ is the emission score matrix of size $n \times m$ ($m$ is the number of unique labels), computed as $E = HW$ where $W$ is the label emission parameter matrix; $A$ is the label transition parameter matrix; thus $\theta_c = \{W, A\}$. We then define the conditional probability of label sequence $y$ given $H$ by a softmax over all possible label sequences in the set $\mathcal{Y}(H)$ as:
$$p_{\theta_c}(y|H) = \frac{\exp\{s_{\theta_c}(H, y)\}}{Z(H)} = \frac{\exp\{s_{\theta_c}(H, y)\}}{\sum_{y' \in \mathcal{Y}(H)} \exp\{s_{\theta_c}(H, y')\}}, \tag{1}$$
where $\theta_c$ is omitted for simplicity in the following. The training objective of the CRF layer is to maximize the log-likelihood $\max_{\theta_c} \log p(y|H)$. In the label prediction phase, we output the label sequence $y^*$ with the highest conditional probability, $y^* = \arg\max_{y' \in \mathcal{Y}(H)} p(y'|H)$, computed by dynamic programming (Sutton et al., 2012).
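Both quantities above, the normalizer $Z(H)$ and the arg-max sequence $y^*$, are computed by dynamic programming over the label lattice. The following is a hedged sketch of the two recursions given an emission matrix $E$ ($n \times m$) and a transition matrix $A$ ($m \times m$); it is not the paper's implementation.

```python
import torch

def crf_log_partition(E: torch.Tensor, A: torch.Tensor) -> torch.Tensor:
    # alpha[j]: log-sum of scores of all prefixes ending in label j
    alpha = E[0]                                   # (m,)
    for t in range(1, E.size(0)):
        # (m, m) grid of alpha_i + A[i, j], reduced over previous label i
        alpha = torch.logsumexp(alpha.unsqueeze(1) + A, dim=0) + E[t]
    return torch.logsumexp(alpha, dim=0)           # log Z(H)

def crf_viterbi(E: torch.Tensor, A: torch.Tensor) -> list:
    score, back = E[0], []
    for t in range(1, E.size(0)):
        total = score.unsqueeze(1) + A             # best previous -> current
        best, idx = total.max(dim=0)
        back.append(idx)
        score = best + E[t]
    y = [int(score.argmax())]
    for idx in reversed(back):                     # follow backpointers
        y.append(int(idx[y[-1]]))
    return list(reversed(y))                       # y* = argmax_y p(y|H)

n_tokens, n_labels = 5, 4                          # toy sizes (assumptions)
E, A = torch.randn(n_tokens, n_labels), torch.randn(n_labels, n_labels)
log_Z = crf_log_partition(E, A)
y_star = crf_viterbi(E, A)
```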
Maximum Mean Discrepancy

Maximum mean discrepancy (MMD) (Gretton et al., 2012) is a non-parametric test statistic to measure the distribution discrepancy in terms of the distance between the kernel mean embeddings of two distributions $p$ and $q$. The MMD is defined over particular function spaces that witness the difference in distributions:

$$\text{MMD}(\mathcal{F}, p, q) = \sup_{f \in \mathcal{F}} \big( \mathbb{E}_{x \sim p}[f(x)] - \mathbb{E}_{y \sim q}[f(y)] \big).$$

By defining the function class $\mathcal{F}$ as the unit ball in a universal Reproducing Kernel Hilbert Space (RKHS), denoted by $\mathcal{H}$, it holds that $\text{MMD}[\mathcal{F}, p, q] = 0$ if and only if $p = q$. Then, given two sets of samples $X = \{x_1, \ldots, x_m\}$ and $Y = \{y_1, \ldots, y_n\}$ drawn independently and identically distributed (i.i.d.) from $p$ and $q$ on the data space $\mathcal{X}$, the empirical estimate of MMD can be written as the distance between the empirical mean embeddings after mapping to the RKHS:

$$\text{MMD}(X, Y) = \Big\| \frac{1}{m} \sum_{i=1}^{m} \phi(x_i) - \frac{1}{n} \sum_{j=1}^{n} \phi(y_j) \Big\|_{\mathcal{H}}, \tag{2}$$

where $\phi(\cdot): \mathcal{X} \to \mathcal{H}$ is the nonlinear feature mapping that induces $\mathcal{H}$.
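Since the squared RKHS distance in Eq. (2) expands into pairwise kernel evaluations, the (biased) empirical estimate only needs kernel values. Below is a hedged sketch of this computation; the Gaussian RBF kernel and its bandwidth are illustrative choices, not prescribed by the text.

```python
import torch

def rbf_kernel(X, Y, sigma=1.0):
    d2 = torch.cdist(X, Y) ** 2                    # pairwise squared distances
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(X, Y, sigma=1.0):
    # biased empirical estimate of squared MMD, cf. Eq. (2) and Eq. (4)
    return (rbf_kernel(X, X, sigma).mean()
            + rbf_kernel(Y, Y, sigma).mean()
            - 2 * rbf_kernel(X, Y, sigma).mean())

H_s = torch.randn(50, 200)                         # e.g., source hidden vectors
H_t = torch.randn(30, 200)                         # e.g., target hidden vectors
print(mmd2(H_s, H_t).item())
```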
Methodology
In this section, we present the label-aware double transfer learning (La-DTL) framework and discuss its rationale. Figure 1 gives an overview of La-DTL for NER. From the bottom up, each input sentence is converted into a sequence of embedding vectors, which are then fed into a Bi-LSTM to sequentially encode contextual information into fixed-length hidden vectors. The embedding and Bi-LSTM layers are shared between the source and target domains. With label-aware maximum mean discrepancy (La-MMD) reducing the feature representation discrepancy between the two domains, the hidden vectors are directly fed into source/target domain-specific CRF layers to predict the label sequence. We use domain-constrained CRF layers to enhance the target domain performance.
Framework Overview
More formally, let $D_s = \{(X^s_i, y^s_i)\}_{i=1}^{N_s}$ be the training set of $N_s$ samples from the source domain and $D_t = \{(X^t_i, y^t_i)\}_{i=1}^{N_t}$ be the training set of $N_t$ samples from the target domain, with $N_t \ll N_s$. The Bi-LSTM encodes a sentence $X = (x_1, x_2, \ldots, x_n)$ into hidden vectors $H = (h_1, h_2, \ldots, h_n)$.
We occasionally write $H(X)$ for the hidden vectors obtained by feeding $X$ into the Bi-LSTM. The CRF decodes hidden vectors $H$ into a label sequence $\hat{y} = (\hat{y}_1, \hat{y}_2, \ldots, \hat{y}_n)$. Our goal is to improve label prediction accuracy on the target domain $D_t$ by utilizing the knowledge from the source domain $D_s$:
$$p(y|X) = p(y|H(X)), \qquad \log p(y|H) = \sum_{i=1}^{n} E_{i, y_i} + \sum_{i=1}^{n-1} A_{y_i, y_{i+1}} - \log Z(H). \tag{3}$$
Thus training a transferable model p(y|X) requires both H(X) and p(y|H) to be transferable.
We share the word embedding and Bi-LSTM layers and bring the feature representation distributions $p(h|D_s)$ and $p(h|D_t)$ closer, i.e., the distributions of Bi-LSTM hidden vectors at each timestep of the sentences from the source and target domains, respectively. The rationale lies in the insufficiency of labeled target data. Even though the LSTM has high capacity, its generalization ability relies on seeing sufficient data; otherwise, the LSTM is very likely to overfit. Trained on both source and target data, the Bi-LSTM is expected to learn feature representations of high quality. Yosinski et al. (2014) provided a justification for this solution: sharing bottom layers is promising for transfer learning in practice.
With the sentences projected onto the same hidden space, the conditional distributions $p(h^s|D_s)$ and $p(h^t|D_t)$ may nevertheless remain distant, because LSTM hidden vectors contain contextual information that differs across domains. In order to reduce the source/target discrepancy, we refine MMD (Gretton et al., 2012) with label constraints, i.e., label-aware MMD (La-MMD). Using La-MMD, the source/target hidden states are pushed towards similar distributions to make the feature representation $H(X)$ transfer feasible.
Based on the hidden vectors from the Bi-LSTM, we adopt independent CRF layers for each domain. The rationale lies in the hypotheses that (i) the target domain predictor can better capture the target data distribution, which could be very unique; and (ii) a good predictor trained on the source domain can be leveraged to assist the target domain predictor without directly borrowing the source domain training data, bypassing the data privacy issue. With respect to the emission and transition score matrices $E_{i,y_i}$ and $A_{y_i,y_{i+1}}$, we adopt an upper bound between the source/target domains, which helps the target domain predictor to be guided by the source domain predictor. Thus $p(y|H)$ is also transferable.
There are also other transfer methods, including fine-tuning and sharing parameters directly (without constraints) (He and Sun, 2017; Lee et al., 2017; Yang et al., 2017). However, simply sharing models may dismiss target-specific instances.
Learning Objective
The learning objective is to minimize the following loss $\mathcal{L}$ with respect to parameters $\Theta = \{\theta_b, \theta_c\}$:

$$\mathcal{L} = \mathcal{L}_c + \alpha \mathcal{L}_{\text{La-MMD}} + \beta \mathcal{L}_p + \gamma \mathcal{L}_r,$$
where $\mathcal{L}_c$ is the CRF loss, $\mathcal{L}_{\text{La-MMD}}$ is the La-MMD loss, $\mathcal{L}_p$ is the parameter similarity loss on the CRF layers, and $\mathcal{L}_r$ is the regularization term, with $\alpha, \beta, \gamma$ as hyperparameters to balance the loss terms.
The CRF loss is our ultimate objective for predicting the label sequence given the input sentence, i.e., we minimize the negative log-likelihood of training samples from both the source and target domains:
$$\mathcal{L}_c = -\frac{\varepsilon}{N_s} \sum_{i=1}^{N_s} \log p(y^s_i | H^s_i) - \frac{1 - \varepsilon}{N_t} \sum_{i=1}^{N_t} \log p(y^t_i | H^t_i),$$
where $H$ are the hidden vectors obtained from the Bi-LSTM and $\varepsilon$ is the balance coefficient. The La-MMD loss $\mathcal{L}_{\text{La-MMD}}$ and the parameter similarity loss $\mathcal{L}_p$ are discussed in Sections 4.3 and 4.4, respectively. The regularization term generally controls overfitting:
$$\mathcal{L}_r = \|\theta_b\|_2^2 + \|\theta_c\|_2^2.$$
We will provide the model convergence and hyperparameter study in Section 5.1.
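Putting the four terms together, a schematic sketch of the overall objective follows; the inputs are placeholder tensors, and the concrete weight values are assumptions, loosely informed by the grid search reported in Section 5.1.

```python
import torch

alpha, beta, gamma, eps = 0.02, 0.03, 1e-4, 0.3    # assumed values

def total_loss(nll_src, nll_tgt, la_mmd, l_p, params):
    # L_c: domain-balanced negative log-likelihood (Section 4.2)
    l_c = eps * nll_src.mean() + (1 - eps) * nll_tgt.mean()
    # L_r: squared L2 norm of all Bi-LSTM and CRF parameters
    l_r = sum((p ** 2).sum() for p in params)
    return l_c + alpha * la_mmd + beta * l_p + gamma * l_r

params = [torch.randn(3, requires_grad=True)]      # toy parameter list
loss = total_loss(torch.rand(8), torch.rand(8), torch.tensor(0.1),
                  torch.tensor(0.2), params)
```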
Bi-LSTM Feature Representation Transfer
To learn transferable feature representations, maximum mean discrepancy (MMD), which measures the distance between two distributions, has been widely used in domain adaptation scenarios (Long et al., 2015; Rozantsev et al., 2016). Almost all of these works focus on reducing the marginal distribution distance between different domain features in an unsupervised manner, to make them indistinguishable. However, since words are not evenly distributed when conditioning on different labels, the discriminative properties of features from different domains may not be aligned: close source and target samples may not share the same label. Different from previous works, we propose label-aware MMD (La-MMD) in Eq. (5) to explicitly reduce the discrepancy between hidden representations with the same label, i.e., a linear combination of the MMD for each label. For each label class $y \in Y_v$, where $Y_v$ is the set of labels matched between the two domains, we compute the squared population MMD between the hidden representations of source/target samples with the same label $y$:
$$\text{MMD}^2(R^s_y, R^t_y) = \frac{1}{(N^s_y)^2} \sum_{i,j=1}^{N^s_y} k(h^s_i, h^s_j) + \frac{1}{(N^t_y)^2} \sum_{i,j=1}^{N^t_y} k(h^t_i, h^t_j) - \frac{2}{N^s_y N^t_y} \sum_{i,j=1}^{N^s_y, N^t_y} k(h^s_i, h^t_j), \tag{4}$$
where $R^s_y$ and $R^t_y$ are the sets of hidden representations $h^s$ and $h^t$ with corresponding cardinalities $N^s_y$ and $N^t_y$. Eq. (4) can be easily derived by casting Eq. (2) into inner product form and applying $\langle \phi(x), \phi(y) \rangle_{\mathcal{H}} = k(x, y)$, where $k$ is the reproducing kernel function (Gretton et al., 2012). For each label class, we compute the MMD loss in the standard manner. We then define the La-MMD loss as:
$$\mathcal{L}_{\text{La-MMD}} = \sum_{y \in Y_v} \mu_y \cdot \text{MMD}^2(R^s_y, R^t_y), \tag{5}$$
where µ y is the corresponding coefficient. The illustration of La-MMD is shown in Figure 2.
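A minimal sketch of Eq. (5) follows: hidden vectors are grouped by gold label, the per-label squared MMDs are computed, and the terms are summed with $\mu_y = 1$ (as in Figure 2). Here `mmd2` refers to the estimator sketched in Section 3, and the batch-handling details are our own assumptions.

```python
import torch

def la_mmd(H_s, y_s, H_t, y_t, shared_labels, mmd2, mu=None):
    loss = H_s.new_zeros(())
    for y in shared_labels:
        Rs, Rt = H_s[y_s == y], H_t[y_t == y]      # hidden vectors with label y
        if len(Rs) == 0 or len(Rt) == 0:           # label absent in this batch
            continue
        w = 1.0 if mu is None else mu[y]
        loss = loss + w * mmd2(Rs, Rt)
    return loss
```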
Once we have applied this La-MMD to the representations learned from the Bi-LSTM, the representation distributions of instances with the same label from different domains should be close. Then the standard CRF layer, which has a simple linear structure, takes these similar representations as input and is likely to give a more transferable label decision for instances with the same label.
CRF Parameter Transfer
Simply sharing the CRF layer is unpromising when the source/target data are diversely distributed. According to the probability decomposition in Eq. (3), in order to transfer across the source/target CRF layers, more specifically $p(y|H)$, we reduce the KL divergence from $p^t(y|H)$ to $p^s(y|H)$. Since directly reducing $D_{KL}(p^s(y|H) \| p^t(y|H))$ is intractable, we instead reduce its upper bound:
$$D_{KL}(p^s(y|H) \| p^t(y|H)) = \sum_{y \in \mathcal{Y}(H)} p^s(y|H) \log\frac{p^s(y|H)}{p^t(y|H)} = -H(p^s(y|H)) - \sum_{y \in \mathcal{Y}(H)} p^s(y|H) \log p^t(y|H) \le c\big(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2\big)^{\frac{1}{2}}, \tag{6}$$
where $H(\cdot)$ is the entropy of the distribution $(\cdot)$ and $c$ is a constant. The detailed proof is provided in Appendix A.1. Since $c(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2)^{\frac{1}{2}}$ is an upper bound of $D_{KL}(p^s(y|H) \| p^t(y|H))$, we conduct CRF parameter transfer by minimizing
$$\mathcal{L}_p = \|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2.$$
It turns out that a similar regularization term appears in both our CRF parameter transfer method and the regularization framework (RF) for domain adaptation (Lu et al., 2016). However, RF was proposed to generalize the feature augmentation method of Daume III (2007), and these two methods are only discussed from the perspective of parameters: there is no guarantee that two models having similar parameters yield similar output distributions. In this work, we analyze the model behavior in the CRF setting, and we prove that two CRF models with similar parameters (in Euclidean space) yield similar output distributions. In other words, our method guarantees transferability at the level of model behavior, while previous works are limited to the parameter level. The CRF parameter transfer is illustrated in Figure 3; it is also label-aware, since the L2 constraint is added over parameters corresponding to the same label in the two domains, e.g., $W^s_O$ and $W^t_O$.
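The resulting penalty is a one-liner. The sketch below (names are ours, not from the released code) is the squared Frobenius distance between the source/target emission and transition matrices, which by Eq. (6) controls the KL divergence between the two CRF output distributions.

```python
import torch

def crf_param_loss(W_s: torch.Tensor, W_t: torch.Tensor,
                   A_s: torch.Tensor, A_t: torch.Tensor) -> torch.Tensor:
    # L_p = ||W^s - W^t||_2^2 + ||A^s - A^t||_2^2
    return ((W_s - W_t) ** 2).sum() + ((A_s - A_t) ** 2).sum()
```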
Training
We train La-DTL in an end-to-end manner with mini-batch AdaGrad (Duchi et al., 2011). Each mini-batch contains training samples from both domains; otherwise the computation of $\mathcal{L}_{\text{La-MMD}}$ cannot be performed. During training, word (and character) embeddings are fine-tuned to fit the real data distribution. During both training and decoding (testing) of the CRF layers, we use dynamic programming to compute the normalizer in Eq. (1) and to infer the label sequence.
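The training loop therefore has the following shape: mixed-domain mini-batches optimized with AdaGrad. The model below is a stand-in toy module, not the released La-DTL code; only the loop structure is the point.

```python
import torch
import torch.nn as nn

class DummyLaDTL(nn.Module):                       # stand-in, NOT the released model
    def __init__(self):
        super().__init__()
        self.w = nn.Parameter(torch.randn(4))
    def total_loss(self, batch_s, batch_t):        # placeholder for L in Sec. 4.2
        return ((self.w * batch_s).sum() - (self.w * batch_t).sum()) ** 2

model = DummyLaDTL()
optimizer = torch.optim.Adagrad(model.parameters(), lr=0.01)
source_batches = [torch.randn(4) for _ in range(3)]
target_batches = [torch.randn(4) for _ in range(3)]

# each step consumes one source batch AND one target batch, so that the
# La-MMD term (which needs samples from both domains) is always defined
for batch_s, batch_t in zip(source_batches, target_batches):
    optimizer.zero_grad()
    loss = model.total_loss(batch_s, batch_t)
    loss.backward()
    optimizer.step()
```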
Experiments
In this section, we evaluate La-DTL (implementation available at https://github.com/felixwzh/La-DTL) and other baseline methods on 12 cross-specialty NER problems based on real-world datasets. The experimental results show that La-DTL steadily and significantly outperforms the baseline models in all tasks. We also conduct a further ablation study and robustness study, and we evaluate La-DTL on two more non-medical NER transfer tasks to validate its general efficacy over a wide range of applications.
Cross-Specialty NER
Datasets We collected a Chinese medical NER (CM-NER) corpus for our experiments. This corpus contains 1600 de-identified EHRs of our affiliated hospital from four different specialties in four departments: Cardiology (500), Respiratory (500), Neurology (300) and Gastroenterology (300); the research had been reviewed and approved by the ethics committee. Named entities are annotated in the BIOES format (Begin, Inside, Outside, End and Single), with 30 types in total (a small BIOES illustration follows the baseline list below). The statistics of CM-NER are shown in Table 1.

Baselines The following methods are compared. For a fair comparison, we implement La-DTL and the baselines with the same base model introduced in Lample et al. (2016) but with different transfer techniques.
• Non-transfer uses the target domain labeled data only.
• Domain mask and Linear projection belong to the same framework proposed by Peng and Dredze (2017) but have different implementations at the projection layer, which aims to produce shared feature representations among different domains through a linear transformation.
• Re-training is proposed by Lee et al. (2017), where an artificial neural network (ANN) is first trained on the source domain and then re-trained on the target domain.
• Joint-training is a transfer learning method proposed by Yang et al. (2017) where different tasks are trained jointly.
• CD-learning is a cross-domain learning method proposed by He and Sun (2017), where each source domain training example's learning rate is re-weighted.
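As a side note on the BIOES scheme used for CM-NER above, here is a tiny illustrative labeling; the sentence and the choice of entity types are ours, drawn from the 30-type inventory listed in Appendix A.3.

```python
# B/I/E mark the begin/inside/end of a multi-token entity, S a single-token
# entity, and O a non-entity token.
tokens = ["persistent", "chest", "pain", "treated", "with", "aspirin"]
labels = ["O", "B-Symptom", "E-Symptom", "O", "O", "S-Treatment"]

for tok, lab in zip(tokens, labels):
    print(f"{tok}\t{lab}")
```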
Experimental Settings We use 23,217 unlabeled clinical records to train 128-dimensional word embeddings (word2vec) with the skip-gram model (Mikolov et al., 2013). The hidden state size is set to 200 for the word-level Bi-LSTM. We evaluate La-DTL for cross-specialty NER on CM-NER in 12 transfer tasks, with results shown in Table 2. For each task, we take the whole source domain training set $D_s$ and 10% of the sentences of the target domain training set $D_t$ as training data. We use the development set in the target domain to search hyperparameters, including the number of training epochs. We then use the models to make predictions on the target domain test set and report the F1-score as the evaluation metric. Statistical significance has been determined using a randomization version of the paired sample t-test (Cohen, 1995).
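For reference, an approximate-randomization flavor of the paired significance test cited above can be sketched as follows; the statistic (mean score difference over paired observations) and the number of rounds are common defaults, not taken from the paper.

```python
import random

def randomization_test(scores_a, scores_b, rounds=10000, seed=0):
    rng = random.Random(seed)
    n = len(scores_a)
    observed = abs(sum(scores_a) - sum(scores_b)) / n
    hits = 0
    for _ in range(rounds):
        sa = sb = 0.0
        for a, b in zip(scores_a, scores_b):
            if rng.random() < 0.5:                 # randomly swap the paired scores
                a, b = b, a
            sa += a
            sb += b
        if abs(sa - sb) / n >= observed:
            hits += 1
    return (hits + 1) / (rounds + 1)               # estimated p-value

# toy usage: per-run F1 pairs for two systems (made-up numbers)
p = randomization_test([0.71, 0.68, 0.74], [0.69, 0.66, 0.73], rounds=2000)
```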
Results and Discussion
From the results of the 12 cross-specialty NER tasks shown in Table 2, we find that La-DTL outperforms all the strong baselines in all 12 cross-specialty transfer learning tasks, with a 2.62% to 6.70% F1-score lift over the state-of-the-art baseline methods. Meanwhile, Linear projection and Domain mask (Peng and Dredze, 2017) do not perform as well as the other three baselines, which may be because such linear transformation methods are likely to weaken the representations, while the other three baseline methods all share the whole model between the source/target domains and differ only in the training schemes and performance.
To better understand the transferability of La-DTL, we also evaluate three variants of La-DTL: La-MMD, CRF-L2, and MMD-CRF-L2. La-MMD and CRF-L2 have the same networks and loss function as La-DTL but with different building blocks: La-MMD has $\beta = 0$, while CRF-L2 has $\alpha = 0$. In MMD-CRF-L2, we replace the La-MMD loss $\mathcal{L}_{\text{La-MMD}}$ in La-DTL with a vanilla MMD loss:

$$\mathcal{L}_{\text{MMD}} = \text{MMD}^2(R^s, R^t),$$

where $R^s$ and $R^t$ are the sets of hidden representations from the source and target domains. Results in Table 2 show that: (i) Using La-MMD alone achieves satisfactory performance, since it outperforms the best baseline, Joint-training (Yang et al., 2017), in 7 of the 12 tasks, and it improves significantly over the Domain mask and Linear projection methods (Peng and Dredze, 2017), which indicates that using La-MMD to reduce the domain discrepancy of feature representations in sequence tagging tasks is promising. (ii) CRF-L2 is also a promising method when transferring between NER tasks, and it improves the La-MMD method significantly when the two are combined to form La-DTL. (iii) The label-aware characteristic is important in sequence labeling problems, because there is an obvious performance drop when La-MMD is replaced with a vanilla MMD in La-DTL; nonetheless, MMD-CRF-L2 still has very competitive performance compared to all the baseline methods. This provides positive empirical evidence that transferring knowledge at both the Bi-LSTM feature representation level and the CRF parameter level is better for NER tasks than transferring at only one of these two levels, as discussed in Section 4.1.
Robustness to Target Domain Data Sparsity
We further study the target domain data sparsity problem for La-DTL in the C→R task, comparing against Joint-training (Yang et al., 2017) and the Non-transfer method. We evaluate La-DTL with different data volumes (sampling rates: 10%, 25%, 50%, 100%) on the target domain training set. Results are shown in Figure 4(a). We observe that La-DTL outperforms the Joint-training and Non-transfer results under all circumstances, and the improvement of La-DTL is more significant when the sampling rate is lower.
To show La-DTL's convergence and significant improvement over Joint-training, we repeat the 10% sampling rate experiment 10 times with 10 random seeds. The F1-score on the target domain development set for the two methods, with a 95% confidence interval, is shown in Figure 4(b), where La-DTL outperforms the Joint-training method significantly.

Hyperparameter Study We study the influence of three key hyperparameters in La-DTL: α, β, and ε, in the C→R task with a 10% target domain sampling rate. We first apply a rough grid search for the three hyperparameters, and the result is (α = 0.02, β = 0.03, ε = 0.3). We then fix two hyperparameters and test the third one at a finer granularity. The results in Figure 5 indicate that setting α ∈ [0.01, 0.04] better leverages La-MMD, and that further setting β ∈ [0.03, 0.12] and ε ∈ [0.3, 0.4] yields the best empirical performance. This shows that we need to balance the learning objectives of the source and target domains for better transferability.
NER Transfer Experiment on Non-medical Corpus
To show that La-DTL can be applied in a wide range of NER transfer learning scenarios, we conduct experiments on two non-medical NER tasks. The corpora details are shown in Table 3.

WeiboNER Transfer Following He and Sun (2017) and Peng and Dredze (2017), we transfer knowledge from SighanNER (the MSR corpus of the sixth SIGHAN Workshop on Chinese language processing) to WeiboNER (a social media NER corpus) (Peng and Dredze, 2015). Results in Table 4 show that La-DTL outperforms all the baseline methods in the Chinese social media domain.

TwitterNER Transfer Following Yang et al. (2017), we transfer knowledge from CoNLL 2003 English NER (Tjong Kim Sang and De Meulder, 2003) to TwitterNER (Ritter et al., 2011). Since the entity types in these two corpora cannot be exactly matched, La-DTL and Joint-training (Yang et al., 2017) can be applied directly in this case while the other baselines cannot, because the CRF parameter transfer of La-DTL is label-aware, whereas Joint-training simply leverages two independent CRF layers. The results are shown in Table 5, where La-DTL again outperforms Joint-training, indicating that La-DTL could be applied seamlessly to transfer learning scenarios with mismatched label sets and languages like English.
Conclusions
In this paper, we propose La-DTL, a label-aware double transfer learning framework, to conduct both Bi-LSTM feature representation transfer and CRF parameter transfer with label-aware constraints for cross-specialty medical NER tasks. To the best of our knowledge, this is the first work on transfer learning for medical NER in a cross-specialty scenario. Experiments on 12 cross-specialty NER tasks show that La-DTL provides consistent performance improvement over strong baselines. We further perform a set of experiments with different target domain data sizes, a hyperparameter study, and other non-medical NER tasks, where La-DTL shows great robustness and wide efficacy. For future work, we plan to jointly perform NER and entity linking for better cross-specialty medical structural information extraction.
A Appendix

A.1 Detailed Proof

Recall the bound in Eq. (6):

Lemma A.1. $c_1(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2)$ is an upper bound of $(s^s(H, y) - s^t(H, y))^2$.

Proof of Lemma A.1. Here $\otimes$ denotes the convolutional product, $H^W$ and $H^A$ are mask matrices corresponding to the given hidden vectors $H$, and $c_1$ is a constant. We have:

$$
\begin{aligned}
(s^s(H, y) - s^t(H, y))^2 &= \Big(\sum_{i=1}^{n} E^s_{i,y_i} + \sum_{i=1}^{n-1} A^s_{y_i,y_{i+1}} - \sum_{i=1}^{n} E^t_{i,y_i} - \sum_{i=1}^{n-1} A^t_{y_i,y_{i+1}}\Big)^2 \\
&= (W^s \otimes H^W + A^s \otimes H^A - W^t \otimes H^W - A^t \otimes H^A)^2 \\
&= ((W^s - W^t) \otimes H^W + (A^s - A^t) \otimes H^A)^2 \\
&\le 2((W^s - W^t) \otimes H^W)^2 + 2((A^s - A^t) \otimes H^A)^2 \\
&= 2\Big(\sum_{i,j} (W^s - W^t)_{i,j} \cdot H^W_{i,j}\Big)^2 + 2\Big(\sum_{p,q} (A^s - A^t)_{p,q} \cdot H^A_{p,q}\Big)^2 \\
&\le 2\Big(\sum_{i,j} (W^s - W^t)^2_{i,j} \cdot \sum_{i,j} (H^W_{i,j})^2\Big) + 2\Big(\sum_{p,q} (A^s - A^t)^2_{p,q} \cdot \sum_{p,q} (H^A_{p,q})^2\Big) \\
&= 2\,\|W^s - W^t\|_2^2 \, \|H^W\|_2^2 + 2\,\|A^s - A^t\|_2^2 \, \|H^A\|_2^2 \\
&\le c_1\big(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2\big).
\end{aligned}
$$

Lemma A.2. $c(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2)^{\frac{1}{2}}$ is an upper bound of $D_{KL}(p^s(y|H) \| p^t(y|H))$.

Proof of Lemma A.2. With Lemma A.1, we set $\varepsilon = \big(c_1(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2)\big)^{\frac{1}{2}} \ge 0$ and $c = 2c_1^{\frac{1}{2}}$, and we have:

$$s^s(H, y) - \varepsilon \le s^t(H, y) \le s^s(H, y) + \varepsilon, \tag{7}$$

$$\log \sum_{y' \in \mathcal{Y}(H)} \exp[s^s(H, y')] - \varepsilon \;\le\; \log \sum_{y' \in \mathcal{Y}(H)} \exp[s^t(H, y')] \;\le\; \log \sum_{y' \in \mathcal{Y}(H)} \exp[s^s(H, y')] + \varepsilon. \tag{8}$$

Combining (7) and (8),

$$
\begin{aligned}
-\sum_{y \in \mathcal{Y}(H)} p^s(y|H) \log p^t(y|H) &\le -\sum_{y \in \mathcal{Y}(H)} p^s(y|H)\Big(s^s(H, y) - \varepsilon - \log \sum_{y' \in \mathcal{Y}(H)} \exp[s^s(H, y')] - \varepsilon\Big) \\
&= -\sum_{y \in \mathcal{Y}(H)} p^s(y|H) \log p^s(y|H) + 2\varepsilon = H(p^s(y|H)) + 2\varepsilon.
\end{aligned}
$$

Finally, we have

$$
\begin{aligned}
D_{KL}(p^s(y|H) \| p^t(y|H)) &= \sum_{y \in \mathcal{Y}(H)} p^s(y|H) \log \frac{p^s(y|H)}{p^t(y|H)} \\
&= -H(p^s(y|H)) - \sum_{y \in \mathcal{Y}(H)} p^s(y|H) \log p^t(y|H) \\
&\le -H(p^s(y|H)) + H(p^s(y|H)) + 2\varepsilon = c\big(\|W^s - W^t\|_2^2 + \|A^s - A^t\|_2^2\big)^{\frac{1}{2}}.
\end{aligned}
$$
Figure 1: La-DTL framework overview: embedding and Bi-LSTM layers are shared across domains, predictors in red (upper) boxes are task-specific CRFs, with label-aware MMD and L2 constraints to perform feature representation transfer and parameter transfer.
Figure 2: Illustration of La-MMD. MMD-y is computed between the two domains' hidden representations with the same ground-truth label y. A linear combination is then applied to each label-wise MMD to form La-MMD, and the coefficient is set as µ_y = 1.

Figure 3: Illustration of CRF parameter transfer.

Figure 4: (a) F1-score of La-DTL, Joint-training and the Non-transfer method in the C→R task with different sampling rates. (b) The learning curve of La-DTL and Joint-training in the C→R task.

Figure 5: Hyperparameter study for α, β, and ε.
Table 1: Sentence numbers for CM-NER corpus.

Table 2: Results (F1-score %) of 12 cross-specialty medical NER tasks. C, R, N, G are short for the departments of Cardiology, Respiratory, Neurology, and Gastroenterology, respectively. † indicates La-DTL outperforms the 6 baselines significantly (p < 0.05).
Table 3: Sentence numbers for non-medical corpora.

Corpus        # Train    # Dev    # Test
SighanNER     23,182     -        4,636
WeiboNER      1,350      270      270
CoNLL 2003    14,987     3,466    3,684
TwitterNER    1,900      240      254

Table 4: Results (F1-score %) of WeiboNER transfer. * indicates the result reported in the corresponding reference.

Method                                       F1-score
Non-transfer                                 54.78
Linear projection (Peng and Dredze, 2017) *  56.40
Linear projection (Peng and Dredze, 2017)    56.99
Domain mask (Peng and Dredze, 2017) *        56.80
Domain mask (Peng and Dredze, 2017)          56.32
CD-learning (He and Sun, 2017) *             52.05
CD-learning (He and Sun, 2017)               56.46
Re-training (Lee et al., 2017)               55.36
Joint-training (Yang et al., 2017)           56.80
La-DTL                                       57.74
Table 5: Results (F1-score %) of TwitterNER transfer. * indicates the result reported in the corresponding reference.

Method                                F1-score
Non-transfer                          34.65
Joint-training (Yang et al., 2017) *  43.24
La-DTL                                45.71
Acknowledgments

The work done by SJTU is sponsored by the Synyi-SJTU Innovation Program, the National Natural Science Foundation of China (61632017, 61702327, 61772333) and the Shanghai Sailing Program (17YF1428200).

A.2 Case Analysis

In clinical practice, patients with specific diseases are assigned to different departments, and the specialist doctors in each department pay more attention to those specific diseases. When writing a medical chart, these diseases and their related clinical findings receive a more detailed description. Therefore, some medical terms carry enriched meanings in different departments. For example, patients with rheumatic heart disease are often treated in the department of Cardiology. The term "rheumatic" is a modifier that describes and limits the type of "heart disease". In English, "rheumatic" is an adjective modifying "heart disease"; in Chinese, however, "rheumatic heart disease" can be regarded as two diseases, "rheumatism" and "heart disease". In the department of Cardiology, "rheumatic heart disease" is usually mentioned as a single term, while in other departments "rheumatism" and "heart disease" are mostly two independent named entities in annotated datasets. As such, it is difficult to train an NER model to capture the relationship between "rheumatism" and "heart disease" and bind them into a single entity. In the training set of our study, the diagnostic term "rheumatic heart disease" (including synonyms) is mentioned 17 times in Dept. Cardiology, 16 times in Dept. Respiratory, never in Dept. Neurology and 3 times in Dept. Gastroenterology. We use the data from each of the first 3 departments as the source domain training set in turn, and the data from Dept. Gastroenterology as the target domain training set. We test our models on the test set from Dept. Gastroenterology, where "rheumatic heart disease" is mentioned 3 times, and compare the results across models with/without transfer learning. As expected, the models with source training data from Dept. Cardiology and Dept. Respiratory correctly predict all these entities, but the model using source data from Dept. Neurology fails, as does the model without transfer learning.

Patients with pulmonary heart disease are often referred to Dept. Respiratory and Dept. Cardiology. In our training set, "pulmonary heart disease" (including synonyms) is labeled 24 times in Dept. Respiratory and 4 times in Dept. Cardiology. In English, "pulmonary" modifies "heart disease"; in Chinese, "pulmonary heart disease" contains the body structure "lung" and the disease name "heart disease". The models trained with the source sets from the departments of Respiratory and Cardiology correctly recognize the relation between lung and heart disease and predict the entity in the test set from Dept. Gastroenterology. Similarly, "coronary atherosclerotic heart disease" contains two disease names, "coronary atherosclerosis" and "heart disease". Training the model using a source set from a department where the terms are enriched can thus improve the performance of recognizing the whole entity.

A.3 Medical Experiments Details

The 30 entity types for the medical domain are: Symptom, Disease, Examination, Treatment, Laboratory index, Products, Body structure, Frequency, Negative word, Value, Trend, Modification, Temporal word, Noun of locality, Degree modifier, Probability, Object, Organism, Location, Person, Pronoun, Privacy information, Accident, Action, Header, Instrument and material, Nonphysiological structure, Dosage, Scale, and Preposition.

A.4 Non-medical Experiments Details

WeiboNER Transfer Both SighanNER and WeiboNER are annotated in the BIO format (Begin, Inside and Outside), but there is one more entity type (geo-political) in WeiboNER. For a fair comparison, we follow Peng and Dredze (2017) and He and Sun (2017) in merging geo-political entities and locations in WeiboNER, to match the different labeling schemes between WeiboNER and SighanNER. We use the inconsistency-fixed second version of the WeiboNER data and the word embeddings provided by WeiboNER's developers (Peng and Dredze, 2015) in this experiment.

TwitterNER Transfer Yang et al. (2017) separate the CRF layers for each domain to bypass the label mismatch problem. Since our La-DTL is label-aware, we match four pairs of named entities between CoNLL 2003 English NER and TwitterNER: LOC with geo-loc, PER with person, ORG with company and MISC with other to compute $\mathcal{L}_{\text{La-MMD}}$ and $\mathcal{L}_p$, and leave six named entity types unmatched. Following Yang et al. (2017), we leverage a char-level Bi-LSTM to generate better word representations, concatenate them with pre-trained word embeddings and feed the concatenated embeddings to the word-level Bi-LSTM. The framework used for a language like English is illustrated in Figure 6. We also convert all characters to lowercase and use the same word embeddings provided by Yang et al. (2017) (https://github.com/kimiyoung/transfer). Also, we concatenate the training set and the development set for both domains and sample the same 10% from TwitterNER as Yang et al. (2017) for the target domain training data. Since Yang et al. (2017) merge the training and development sets into the training data, both Yang et al. (2017) and we report the best performance on the target domain test set.
Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Advances in Neural Information Processing Systems 24, pages 2456-2464. Curran Associates, Inc.

Laura Chiticariu, Rajasekar Krishnamurthy, Yunyao Li, Frederick Reiss, and Shivakumar Vaithyanathan. 2010. Domain adaptation of rule-based annotators for named-entity recognition tasks. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1002-1012, Cambridge, MA. Association for Computational Linguistics.

Jason Chiu and Eric Nichols. 2016. Named entity recognition with bidirectional LSTM-CNNs. Transactions of the Association for Computational Linguistics, 4:357-370.

Wen-Sheng Chu, Fernando De la Torre, and Jeffery F Cohn. 2013. Selective transfer machine for personalized facial action unit detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3515-3522.

Paul R Cohen. 1995. Empirical methods for artificial intelligence, volume 139. MIT Press, Cambridge, MA.

Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493-2537.

Hal Daume III. 2007. Frustratingly easy domain adaptation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 256-263. Association for Computational Linguistics.

John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121-2159.

Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, 18(5):602-610.

Arthur Gretton, Karsten M. Borgwardt, Malte J. Rasch, Bernhard Schölkopf, and Alexander Smola. 2012. A kernel two-sample test. Journal of Machine Learning Research, 13:723-773.

Ben Hachey, Will Radford, and James R. Curran. 2011. Graph-based named entity linking with Wikipedia. In Proceedings of the 12th International Conference on Web Information System Engineering, WISE'11, pages 213-226, Berlin, Heidelberg. Springer-Verlag.

Hangfeng He and Xu Sun. 2017. A unified model for cross-domain and semi-supervised named entity recognition in Chinese social media. In AAAI, pages 3216-3222.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735-1780.

Vivek Kulkarni, Yashar Mehdad, and Troy Chevalier. 2016. Domain adaptation for named entity recognition in online media with word embeddings. arXiv preprint arXiv:1612.00148.

John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 260-270, San Diego, California. Association for Computational Linguistics.

Ji Young Lee, Franck Dernoncourt, and Peter Szolovits. 2017. Transfer learning for named-entity recognition with neural networks. arXiv preprint arXiv:1705.06273.

Mingsheng Long, Yue Cao, Jianmin Wang, and Michael Jordan. 2015. Learning transferable features with deep adaptation networks. In Proceedings of the 32nd International Conference on Machine Learning, volume 37 of Proceedings of Machine Learning Research, pages 97-105, Lille, France. PMLR.

Wei Lu, Hai Leong Chieu, and Jonathan Löfgren. 2016. A general regularization framework for domain adaptation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 950-954, Austin, Texas. Association for Computational Linguistics.

Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074, Berlin, Germany. Association for Computational Linguistics.

Mónica Marrero, Julián Urbano, Sonia Sánchez-Cuadrado, Jorge Morato, and Juan Miguel Gómez-Berbís. 2013. Named entity recognition: Fallacies, challenges and opportunities. Computer Standards & Interfaces, 35(5):482-489.

Andrew McCallum, Dayne Freitag, and Fernando C. N. Pereira. 2000. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of the Seventeenth International Conference on Machine Learning, ICML '00, pages 591-598, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.

Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: a unified approach. Transactions of the Association for Computational Linguistics, 2:231-244.

V Murthy, Mitesh Khapra, Pushpak Bhattacharyya, et al. 2016. Sharing network parameters for crosslingual named entity recognition. arXiv preprint arXiv:1607.00198.

David Nadeau and Satoshi Sekine. 2007. A survey of named entity recognition and classification. Lingvisticae Investigationes, 30(1):3-26.

Sinno Jialin Pan and Qiang Yang. 2010. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345-1359.

Nanyun Peng and Mark Dredze. 2015. Named entity recognition for Chinese social media with jointly trained embeddings. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 548-554, Lisbon, Portugal. Association for Computational Linguistics.

Nanyun Peng and Mark Dredze. 2016. Improving named entity recognition for Chinese social media with word segmentation representation learning. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 149-155, Berlin, Germany. Association for Computational Linguistics.

Nanyun Peng and Mark Dredze. 2017. Multi-task domain adaptation for sequence tagging. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pages 91-100, Vancouver, Canada. Association for Computational Linguistics.

Claudia Perlich, Brian Dalessandro, Troy Raeder, Ori Stitelman, and Foster Provost. 2014. Machine learning for targeted display advertising: Transfer learning in action. Machine Learning, 95(1):103-127.

Alan Ritter, Sam Clark, Mausam, and Oren Etzioni. 2011. Named entity recognition in tweets: An experimental study. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1524-1534, Edinburgh, Scotland, UK. Association for Computational Linguistics.

Artem Rozantsev, Mathieu Salzmann, and Pascal Fua. 2016. Beyond sharing weights for deep domain adaptation. arXiv preprint arXiv:1603.06432.

Jian Shen, Yanru Qu, Weinan Zhang, and Yong Yu. 2017. Wasserstein distance guided representation learning for domain adaptation. arXiv preprint arXiv:1707.01217.

Nitish Srivastava and Ruslan R Salakhutdinov. 2013. Discriminative transfer learning with tree-based priors. In Advances in Neural Information Processing Systems 26, pages 2094-2102. Curran Associates, Inc.

Charles Sutton, Andrew McCallum, et al. 2012. An introduction to conditional random fields. Foundations and Trends in Machine Learning, 4(4):267-373.

Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003, pages 142-147.

Ozlem Uzuner, Brett R South, Shuying Shen, and Scott L DuVall. 2011. 2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text. Journal of the American Medical Informatics Association, 18(5):552-556.
Named entity recognition in chinese clinical text using deep neural network. Yonghui Wu, Min Jiang, Jianbo Lei, Hua Xu, Studies in health technology and informatics. 216624Yonghui Wu, Min Jiang, Jianbo Lei, and Hua Xu. 2015. Named entity recognition in chinese clinical text us- ing deep neural network. Studies in health technol- ogy and informatics, 216:624.
Transfer learning for sequence tagging with hierarchical recurrent networks. Zhilin Yang, Ruslan Salakhutdinov, William W Cohen, ICLR. Zhilin Yang, Ruslan Salakhutdinov, and William W Cohen. 2017. Transfer learning for sequence tag- ging with hierarchical recurrent networks. In ICLR.
How transferable are features in deep neural networks?. Jason Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson, Proceedings of the 27th International Conference on Neural Information Processing Systems. the 27th International Conference on Neural Information Processing SystemsCambridge, MA, USAMIT Press2NIPS'14Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Proceedings of the 27th In- ternational Conference on Neural Information Pro- cessing Systems -Volume 2, NIPS'14, pages 3320- 3328, Cambridge, MA, USA. MIT Press.
Unsupervised biomedical named entity recognition: Experiments with clinical and biological texts. Shaodian Zhang, Noémie Elhadad, Journal of biomedical informatics. 466Shaodian Zhang and Noémie Elhadad. 2013. Unsuper- vised biomedical named entity recognition: Experi- ments with clinical and biological texts. Journal of biomedical informatics, 46(6):1088-1098.
Collective noise contrastive estimation for policy transfer learning. Weinan Zhang, Ulrich Paquet, Katja Hofmann, AAAI. Weinan Zhang, Ulrich Paquet, and Katja Hofmann. 2016. Collective noise contrastive estimation for policy transfer learning. In AAAI, pages 1408-1414.
Supervised representation learning: Transfer learning with deep autoencoders. Fuzhen Zhuang, Xiaohu Cheng, Ping Luo, Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15. the 24th International Conference on Artificial Intelligence, IJCAI'15AAAI PressSinno Jialin Pan, and Qing HeFuzhen Zhuang, Xiaohu Cheng, Ping Luo, Sinno Jialin Pan, and Qing He. 2015. Supervised representation learning: Transfer learning with deep autoencoders. In Proceedings of the 24th International Conference on Artificial Intelligence, IJCAI'15, pages 4119- 4125. AAAI Press.
| [
"https://github.com/felixwzh/La-DTL",
"https://github.com/kimiyoung/transfer"
] |
[
"RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification",
"RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification"
] | [
"Niloofar Safi \nDepartment of Computer Science\nUniversity of Houston Houston\n77204-3010TX\n",
"Samghabadi Deepthi \nDepartment of Computer Science\nUniversity of Houston Houston\n77204-3010TX\n",
"Mave Sudipta \nDepartment of Computer Science\nUniversity of Houston Houston\n77204-3010TX\n",
"Kar Thamar Solorio tsolorio@uh.edu \nDepartment of Computer Science\nUniversity of Houston Houston\n77204-3010TX\n"
] | [
"Department of Computer Science\nUniversity of Houston Houston\n77204-3010TX",
"Department of Computer Science\nUniversity of Houston Houston\n77204-3010TX",
"Department of Computer Science\nUniversity of Houston Houston\n77204-3010TX",
"Department of Computer Science\nUniversity of Houston Houston\n77204-3010TX"
] | [
"Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying"
] | This paper presents our system for "TRAC 2018 Shared Task on Aggression Identification". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data using only lexical features gave us the best results. We obtained weighted F1measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/. | null | [
"https://www.aclweb.org/anthology/W18-4402.pdf"
] | 51,889,492 | 1807.11712 | 3b6a48fb7111412e1e8c406072194416a6df3a6e |
RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification
August 25. 2018
Niloofar Safi
Department of Computer Science
University of Houston Houston
77204-3010TX
Samghabadi Deepthi
Department of Computer Science
University of Houston Houston
77204-3010TX
Mave Sudipta
Department of Computer Science
University of Houston Houston
77204-3010TX
Kar Thamar Solorio tsolorio@uh.edu
Department of Computer Science
University of Houston Houston
77204-3010TX
RiTUAL-UH at TRAC 2018 Shared Task: Aggression Identification
Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying
the First Workshop on Trolling, Aggression and Cyberbullying, Santa Fe, USA, August 25, 2018, page 12
This paper presents our system for "TRAC 2018 Shared Task on Aggression Identification". Our best systems for the English dataset use a combination of lexical and semantic features. However, for Hindi data using only lexical features gave us the best results. We obtained weighted F1measures of 0.5921 for the English Facebook task (ranked 12th), 0.5663 for the English Social Media task (ranked 6th), 0.6292 for the Hindi Facebook task (ranked 1st), and 0.4853 for the Hindi Social Media task (ranked 2nd).This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/.
Introduction
Users' activity on social media is increasing at a fast rate. Unfortunately, many people misuse these online platforms to harass, threaten, and bully other users. This growing aggression against social media users has serious effects on victims and can even lead them to harm themselves. The TRAC 2018 Shared Task on Aggression Identification (Kumar et al., 2018a) aims at developing a classifier that performs a 3-way classification of a given data instance as "Overtly Aggressive", "Covertly Aggressive", or "Non-aggressive". We present here the different systems we submitted to the shared task, which mainly use lexical and semantic features to distinguish different levels of aggression over multiple datasets from Facebook and other social media covering both English and Hindi texts.
Related Work
In recent years, several studies have been done towards detecting abusive and hateful language in online texts. Some of these works target different online platforms like Twitter (Waseem and Hovy, 2016), Wikipedia (Wulczyn et al., 2016), and ask.fm (Samghabadi et al., 2017) to encourage other research groups to contribute to aggression identification in these sources.
Most of the approaches proposed to detect offensive language in social media make use of multiple types of hand-engineered features. Nobata et al. (2016) use n-grams, linguistic, syntactic, and distributional semantic features to build a hate speech detection framework over Yahoo! Finance and News, and obtain an F-score of 81% for a combination of all features. Davidson et al. (2017) combine n-grams, POS-colored n-grams, and sentiment lexicon features to detect hate speech in Twitter data. Van Hee et al. (2015) use word and character n-grams along with sentiment lexicon features to identify nasty posts on ask.fm. Samghabadi et al. (2017) build a model based on lexical, semantic, sentiment, and stylistic features to detect nastiness on ask.fm, and show the robustness of the model by applying it to datasets from several other sources.
According to Malmasi and Zampieri (2018), distinguishing hate speech from profanity is not a trivial task and requires features that capture deeper information from the comments. In this paper, we try different combinations of lexical, semantic, sentiment, and lexicon-based features to identify various levels of aggression in online texts. The datasets were provided by Kumar et al. (2018b). Table 1 shows the distribution of training, validation, and test (Facebook and social media) data for the English and Hindi corpora. The data has been labeled with one of three possible tags:
• Non-aggressive (NAG): There is no aggression in the text.
• Overtly aggressive (OAG): The text is containing either aggressive lexical items or certain syntactic structures.
• Covertly aggressive (CAG): The text is containing an indirect attack against the target using polite expressions in most cases.
Data Pre-processing
Generally, data from social media sources is noisy: grammatical and syntactic errors are common, and ad-hoc spellings abound, which makes it hard to analyze. Therefore, we first put our efforts into cleaning and preparing the data before feeding it to our systems. For the English dataset, we lowercased the data and removed URLs, email addresses, and numbers. We also did minor stemming by removing "ing" and plural and possessive "s", and replaced a few common abbreviated grammatical forms with their formal versions. On manual inspection of the training data for Hindi, we found that some of the instances are Hindi-English code-mixed, some use Roman script for Hindi, and others are in Devanagari. Only 26% of the training data is in Devanagari script. We normalize the data by transliterating instances in Devanagari to Roman script. These instances are identified using Unicode pattern matching and are transliterated to Roman script using the indic-trans transliteration tool 1. For further analysis, we run an in-house word-level language identification system on the training data (Mave et al., 2018). This CRF system is trained on Facebook posts and has an F1-weighted score of 97%. Approximately 60% of the training data is code-mixed, 39% is only Hindi, and 0.42% is only English.
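A minimal sketch of this script-normalization step is shown below, assuming a comment is routed to transliteration whenever it contains any Devanagari characters. The `Transliterator` call is only illustrative of the indic-trans tool; its exact API may differ.

```python
import re

# Devanagari occupies the Unicode block U+0900-U+097F.
DEVANAGARI = re.compile(r'[\u0900-\u097F]')

def is_devanagari(comment: str) -> bool:
    """True if the comment contains any Devanagari characters."""
    return bool(DEVANAGARI.search(comment))

def normalize_script(comment: str) -> str:
    """Route Devanagari comments to Roman transliteration."""
    if is_devanagari(comment):
        # Hypothetical call into the indic-trans tool (API may differ):
        # from indictrans import Transliterator
        # return Transliterator(source='hin', target='eng').transform(comment)
        pass
    return comment

print(normalize_script("yeh sirf ek test hai"))  # already Roman, returned as-is
```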
Features
We make use of the following features:

• Lexical: Words are a powerful medium to convey feelings and to describe or express ideas. With this notion, we use word n-grams (n=1, 2, 3), character n-grams (n=3, 4, 5), and k-skip n-grams (k=2; n=2, 3) as features. We weigh each term with its term frequency-inverse document frequency (TF-IDF). We also consider another weighting scheme by trying binary word n-grams (n=1, 2, 3).

• Word Embeddings: The idea behind this approach is to use a vector space model to extract semantic information from the text (Le and Mikolov, 2014). For the embedding model, we use pre-trained vectors trained on part of the Google News dataset, covering about 3 million words 2. We compute word embedding feature vectors by averaging the word vectors of all the words in each comment, skipping words that are not in the vocabulary of the pre-trained model. This representation is only used for the English data, and the coverage of the Google word embeddings is 63% for this corpus.

• Sentiment: We use the Stanford Sentiment Analysis tool (Socher et al., 2013) 3 to extract a fine-grained sentiment distribution for each comment. For every message, we calculate the mean and standard deviation of the sentiment distribution over all sentences and use them as the feature vector.

• LIWC (Linguistic Inquiry and Word Count): LIWC2007 (Pennebaker et al., 2007) includes around 70 word categories to analyze different language dimensions. In our approach, we only use the categories related to positive or negative emotions and self-references. To build the feature vectors in this case, we use a normalized count of words per category. This feature is only applicable to the English data.

• Gender Probability: Following the approach in Waseem (2016), we use the Twitter-based lexicon presented in Sap et al. (2014) to calculate the probability of gender. We also convert these probabilities to binary gender by considering the positive cases as female and the rest as male. We build the feature vectors from the gender probability and the binary gender for each message. This feature is not applicable to the Hindi corpus.
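A minimal sketch of the lexical and embedding features is given below. The vectorizer settings mirror the n-gram ranges stated above (skip-grams are omitted), while `w2v` stands in for the pre-trained Google News model; the 300-dimensional size is an assumption about that model.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import FeatureUnion

# TF-IDF-weighted word n-grams (n=1..3) and character n-grams (n=3..5).
lexical_features = FeatureUnion([
    ('word_ngrams', TfidfVectorizer(analyzer='word', ngram_range=(1, 3))),
    ('char_ngrams', TfidfVectorizer(analyzer='char', ngram_range=(3, 5))),
])

def comment_embedding(tokens, w2v, dim=300):
    """Average pre-trained word vectors over a comment, skipping OOV words."""
    in_vocab = [w2v[t] for t in tokens if t in w2v]
    return np.mean(in_vocab, axis=0) if in_vocab else np.zeros(dim)

X_lexical = lexical_features.fit_transform(["you are awesome", "shut up"])
print(X_lexical.shape)
```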
Experiments and Results
Experimental Settings
For both datasets, we trained several classification models using different combinations of the features discussed in Section 3.3. Since this is a multi-class classification task, we use a one-versus-rest classifier, which trains a separate classifier for each class and labels each comment with the class that has the highest predicted probability across all classifiers. We tried Logistic Regression and a linear SVM as the estimator for the classifier, and chose Logistic Regression for our final systems since it worked better in the validation phase. We implemented all models using the scikit-learn toolkit 4.
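A sketch of this classification setup follows; the feature matrix and labels are toy stand-ins for the extractors described above.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

# One binary Logistic Regression per class; predict() returns the class
# with the highest probability across the three per-class classifiers.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))

# Toy stand-ins for the TF-IDF feature matrix and NAG/CAG/OAG labels.
X_train = [[0.1, 0.9], [0.8, 0.2], [0.4, 0.5], [0.9, 0.1]]
y_train = ['NAG', 'OAG', 'CAG', 'OAG']
clf.fit(X_train, y_train)
print(clf.predict([[0.7, 0.3]]))
```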
Results
To build our best systems for both English and Hindi data, we experimented with several models using different combinations of the available features. Table 2 shows the validation results on the training and validation sets. Table 3 shows the results of our three submitted systems for the English Facebook and Social Media data. In all three systems, we used the same set of features: binary unigrams, word unigrams, character n-grams of length 4 and 5, and word embeddings. In the first system, we used both the training and validation sets for training our ensemble classifier. In the second system, we used only the training data. The only difference between the second and the third models is that we corrected misspellings using the PyEnchant 5 spell-checking tool. Unfortunately, we could not try applying the sentiment and lexicon-based features after spell correction due to the restrictions on the total number of submissions. However, we believe that it could improve the performance of the system. Table 4 shows the performance of our systems for the Hindi Facebook and Social Media data. For the Hindi dataset, the combination of word unigrams and character n-grams of length 3, 4, and 5 gives the best performance on the validation set. These features capture the word usage distribution across classes. System 1 and System 2 both use these features, trained on the training set only and on the training plus validation sets, respectively.
Analysis
Looking at the mislabeled instances from the validation phase, we found two main reasons for the classifier's mistakes:
1. Perceived level of aggression can be subjective. There are some examples in the validation dataset where the label is CAG but it is more likely to be OAG and vice versa. Table 5 shows some of these examples.
2. There are several typos and misspellings in the data that affect the performance.
| Language | Example | Actual | Predicted |
|---|---|---|---|
| English | What has so far Mr.Yechuri done for this Country. Ask him to shut down his bloody piehole for good or I if given the chance will crap on his mouth hole. | CAG | OAG |
| English | The time you tweeted is around 3 am morning,,which is not at all a namaz time.,As you bollywood carrier is almost finished, you are preparing yourself for politics by these comments. | OAG | CAG |
| Hindi | ajeeb chutya hai.... kahi se course kiya hai ya paida hee chutya hua tha | CAG | OAG |
| Hindi | Salman aur aamir ki kounsi movie release huyee jo aandhi me dub gaye?? ?Bikau chatukar media | OAG | CAG |

Also, it is clear from Figure 1 that the Hindi corpus is more balanced than the English one with respect to OAG and CAG instances. That could be a good reason why the performance of the lexical features is better for the Hindi data. Table 6 illustrates the most informative features learned by the classifier for all three classes in the Hindi data. We observe that word unigrams and character trigrams are the most important features for the system. From the table, the top features for CAG are mostly swear words in Hindi and character n-grams of those swear words. More English words appear in the top list for NAG than for the other two classes, and there is no overlap between these features and the top features of either CAG or OAG. Our system has difficulty differentiating between OAG and CAG when there is no strong swear word in the comment. Figure 2a shows the confusion matrix of our best model for all three classes on the English Facebook corpus. The most interesting part of this figure is that the classifier mislabeled several NAG instances as CAG. Since our system is mostly based on lexical features, we can conclude that there are far fewer profanities in CAG instances compared with OAG ones, which makes them hard to distinguish from NAG examples without considering the sentiment aspects of the messages. This is also supported by Figure 2b, where the classifier was likewise confused when labeling CAG instances, both with and without profanities, on the English Social Media corpus. Figure 3a shows that for the Hindi Facebook data, the biggest challenge is to distinguish OAG instances from CAG ones. Since our proposed system is, in this case, built entirely on lexical features, it can be inferred from the figure that even indirect aggressive comments in Hindi contain many profanities. However, for the Hindi Social Media corpus, we have the same concern as for the English data.
Conclusion
In this paper, we present our approaches to identifying the aggression level of English and Hindi comments in two different datasets, one from Facebook and another from other social media. Our best performing systems use a combination of lexical and semantic features for the English corpus, and lexical features for the Hindi data.

Future work for the English data includes exploring more sentiment features to capture implicit hateful comments and adding more pre-processing steps. For instance, non-English character removal could improve the system, since our proposed model is mainly based on lexical features and is likely very sensitive to unknown characters and words. For the Hindi dataset, identifying the Hindi-English code-mixed instances and processing them separately from the Hindi monolingual instances could be a future direction to explore. As the classification of aggression is subjective in most scenarios, adding sentiment features to the lexical information might also improve model performance for the Hindi data.
Figure 1: Label distribution comparison between the training and evaluation sets.

Figure 2: Confusion matrices of our best performing systems for the English Facebook and Social Media data.

Figure 3: Confusion matrices of our best performing systems for the Hindi Facebook and Social Media data.
Table 2: Validation results (F1-weighted) for different features for the English and Hindi datasets using a Logistic Regression model. BU stands for Binary Unigram.

| Feature | English | Hindi |
|---|---|---|
| Unigram (U) | 0.5804 | 0.6159 |
| Bigram (B) | 0.4637 | 0.5195 |
| Trigram (T) | 0.3846 | 0.4300 |
| Char 3gram (C3) | 0.5694 | 0.6065 |
| Char 4gram (C4) | 0.5794 | 0.6212 |
| Char 5gram (C5) | 0.5758 | 0.6195 |
| Word Embeddings (W2V) | 0.5463 | N/A |
| Sentiment (S) | 0.3961 | N/A |
| LIWC | 0.4350 | N/A |
| Gender Probability (GP) | 0.3440 | N/A |
| BU + U + C4 + C5 + W2V | 0.5875 | N/A |
| C3 + C4 + C5 | 0.5494 | 0.6207 |
| U + C3 + C4 + C5 | 0.5541 | 0.6267 |
Table 3: Results for the English test set. FB: Facebook and SM: Social Media.

Table 4: Results (weighted F1) for the Hindi test set. FB: Facebook and SM: Social Media.

| System | FB | SM |
|---|---|---|
| Random Baseline | 0.3571 | 0.3206 |
| System 1 | 0.6451 | 0.4853 |
| System 2 | 0.6292 | 0.4689 |
Table 5: Misclassified examples by aggression level.

Table 6: Top 10 features learned by System 1 for each class for the Hindi dataset.

Footnotes:
1. https://github.com/libindic/indic-trans
2. https://code.google.com/archive/p/word2vec/
3. https://nlp.stanford.edu/sentiment/code.html
4. http://scikit-learn.org/stable/
5. https://pypi.org/project/pyenchant
Thomas Davidson, Dana Warmsley, Michael Macy, and Ingmar Weber. 2017. Automated Hate Speech Detection and the Problem of Offensive Language. In Proceedings of ICWSM.
Ritesh Kumar, Atul Kr. Ojha, Shervin Malmasi, and Marcos Zampieri. 2018a. Benchmarking Aggression Identification in Social Media. In Proceedings of the First Workshop on Trolling, Aggression and Cyberbullying (TRAC), Santa Fe, USA.
Ritesh Kumar, Aishwarya N. Reganti, Akshit Bhatia, and Tushar Maheshwari. 2018b. Aggression-annotated Corpus of Hindi-English Code-mixed Data. In Proceedings of the 11th Language Resources and Evaluation Conference (LREC), Miyazaki, Japan.
Quoc V. Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In ICML, volume 14, pages 1188-1196.
Shervin Malmasi and Marcos Zampieri. 2018. Challenges in Discriminating Profanity from Hate Speech. Journal of Experimental & Theoretical Artificial Intelligence, 30:1-16.
Deepthi Mave, Suraj Maharjan, and Thamar Solorio. 2018. Language Identification and Analysis of Code-Switched Social Media Text. In Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, Melbourne, Australia. Association for Computational Linguistics.
Chikashi Nobata, Joel Tetreault, Achint Thomas, Yashar Mehdad, and Yi Chang. 2016. Abusive Language Detection in Online User Content. In Proceedings of the 25th International Conference on World Wide Web, pages 145-153. International World Wide Web Conferences Steering Committee.
James W. Pennebaker, Roger J. Booth, and Martha E. Francis. 2007. LIWC2007: Linguistic inquiry and word count. Austin, Texas: liwc.net.
Niloofar Safi Samghabadi, Suraj Maharjan, Alan Sprague, Raquel Diaz-Sprague, and Thamar Solorio. 2017. Detecting nastiness in social media. In Proceedings of the First Workshop on Abusive Language Online, pages 63-72.
Maarten Sap, Gregory Park, Johannes Eichstaedt, Margaret Kern, David Stillwell, Michal Kosinski, Lyle Ungar, and Hansen Andrew Schwartz. 2014. Developing age and gender predictive lexica over social media. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1146-1151.
Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631-1642.
Cynthia Van Hee, Els Lefever, Ben Verhoeven, Julie Mennes, Bart Desmet, Guy De Pauw, Walter Daelemans, and Veronique Hoste. 2015. Detection and fine-grained classification of cyberbullying events. In Proceedings of the International Conference Recent Advances in Natural Language Processing, pages 672-680. INCOMA Ltd. Shoumen, Bulgaria.
Zeerak Waseem and Dirk Hovy. 2016. Hateful symbols or hateful people? Predictive features for hate speech detection on Twitter. In Proceedings of the NAACL Student Research Workshop, pages 88-93.
Zeerak Waseem. 2016. Are you a racist or am I seeing things? Annotator influence on hate speech detection on Twitter. In Proceedings of the First Workshop on NLP and Computational Social Science, pages 138-142.
Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2016. Ex machina: Personal attacks seen at scale. CoRR, abs/1610.08914.
| [
"https://github.com/libindic/indic-trans"
] |
[
"Improving the expressiveness of neural vocoding with non-affine Normalizing Flows",
"Improving the expressiveness of neural vocoding with non-affine Normalizing Flows"
] | [
"Adam Gabryś gabrysa@amazon.com \nAlexaAI\n",
"Yunlong Jiao jyunlong@amazon.com \nAlexaAI\n",
"Viacheslav Klimkov vklimkov@amazon.com \nAlexaAI\n",
"Daniel Korzekwa korzekwa@amazon.com \nAlexaAI\n",
"Roberto Barra-Chicote \nAlexaAI\n"
] | [
"AlexaAI",
"AlexaAI",
"AlexaAI",
"AlexaAI",
"AlexaAI"
] | [] | This paper proposes a general enhancement to the Normalizing Flows (NF) used in neural vocoding. As a case study, we improve expressive speech vocoding with a revamped Parallel Wavenet (PW). Specifically, we propose to extend the affine transformation of PW to the more expressive invertible nonaffine function. The greater expressiveness of the improved PW leads to better-perceived signal quality and naturalness in the waveform reconstruction and text-to-speech (TTS) tasks. We evaluate the model across different speaking styles on a multispeaker, multi-lingual dataset. In the waveform reconstruction task, the proposed model closes the naturalness and signal quality gap from the original PW to recordings by 10%, and from other state-of-the-art neural vocoding systems by more than 60%. We also demonstrate improvements in objective metrics on the evaluation test set with L2 Spectral Distance and Cross-Entropy reduced by 3% and 6‰ comparing to the affine PW. Furthermore, we extend the probability density distillation procedure proposed by the original PW paper, so that it works with any non-affine invertible and differentiable function. 1 | 10.21437/interspeech.2021-1555 | [
"https://arxiv.org/pdf/2106.08649v1.pdf"
] | 235,446,941 | 2106.08649 | 54aa09e4b56e1c4c4f8370ae9649e33c2dcf5975 |
Improving the expressiveness of neural vocoding with non-affine Normalizing Flows
16 Jun 2021
Adam Gabryś gabrysa@amazon.com
AlexaAI
Yunlong Jiao jyunlong@amazon.com
AlexaAI
Viacheslav Klimkov vklimkov@amazon.com
AlexaAI
Daniel Korzekwa korzekwa@amazon.com
AlexaAI
Roberto Barra-Chicote
AlexaAI
Improving the expressiveness of neural vocoding with non-affine Normalizing Flows
16 Jun 2021. arXiv:2106.08649v1 [eess.AS]. Index Terms: Text To Speech, Neural vocoder, Normalizing Flows
This paper proposes a general enhancement to the Normalizing Flows (NF) used in neural vocoding. As a case study, we improve expressive speech vocoding with a revamped Parallel Wavenet (PW). Specifically, we propose to extend the affine transformation of PW to the more expressive invertible nonaffine function. The greater expressiveness of the improved PW leads to better-perceived signal quality and naturalness in the waveform reconstruction and text-to-speech (TTS) tasks. We evaluate the model across different speaking styles on a multispeaker, multi-lingual dataset. In the waveform reconstruction task, the proposed model closes the naturalness and signal quality gap from the original PW to recordings by 10%, and from other state-of-the-art neural vocoding systems by more than 60%. We also demonstrate improvements in objective metrics on the evaluation test set with L2 Spectral Distance and Cross-Entropy reduced by 3% and 6‰ comparing to the affine PW. Furthermore, we extend the probability density distillation procedure proposed by the original PW paper, so that it works with any non-affine invertible and differentiable function. 1
Introduction
Text-to-speech (TTS) is a rapidly growing domain in artificial intelligence. TTS systems attract more attention every year as people use them for voice assistants, education, gaming, and much more. High-quality and low-latency systems are necessary to satisfy TTS customer needs. Most state-of-the-art neural TTS systems address the problem of speech generation in two steps. The first step focuses on generating a low-resolution intermediate speech representation [1,2,3]. The second step transforms this acoustic representation into a high-fidelity, high-quality acoustic waveform. Commonly, the model responsible for the second step is called the vocoder.
The state-of-the-art vocoders are generative Deep Neural Networks (DNN). The two most prominent classes of generative models are sequential and parallel architectures. Typically, sequential models achieve state-of-the-art results in the audio, computer-vision, and textual domains [4]. In neural vocoding, one of the best-quality sequential architectures is WaveNet [5]. It generates waveforms autoregressively using a stack of dilated causal convolutions. However, due to the high number of computations per sample and the high temporal resolution of the speech signal [6], it is not suited to real-time applications. More efficient sequential models were later proposed [7,8] that are an order of magnitude faster than WaveNet. Nevertheless, these models are sequential, so computations cannot be easily parallelized to fully utilize modern Deep Learning ASICs or GPUs.
The above limitation has driven most of the recent research in neural vocoding towards parallel models. The two widely used parallel neural vocoding architectures involve Generative Adversarial Networks (GAN) [9,10,11,12] and Normalizing Flows (NF) [13,14,15,16,17,18]. The generator part of a GAN can typically be any function appropriate for transforming random inputs into synthetic outputs, so GAN-based vocoders enjoy great flexibility of architectural design and hence fast parallel generation. The state-of-the-art adversarial-based vocoders produce high-quality, natural-sounding speech but suffer from occasional audio glitches. These artifacts can significantly reduce the subjective score of such models [19]. This is a common problem related to the performance of GANs in generalizing to unseen data [20,21]. NF provides a general framework for defining probability distributions over continuous random variables: an NF takes a base distribution and transforms it to the target probability density with a sequence of invertible and differentiable transformations. Normalizing Flows are compelling in the context of vocoding due to their efficient parallel synthesis procedure [13] and great generalization [19].
In this work, we focus on neural speech vocoding with Normalizing Flows (NF). The majority of NF used in neural vocoding implement the sequence of transformations as affine functions [13,14,15,16]. This type of transformation is known to be limited in its density modeling power [22,23,24,25]. In practice, we found that it adversely impacts perceived naturalness and signal quality, especially in vocoding scenarios involving highly expressive speech.

The contributions of this work are: 1) We change the inexpressive affine flow transformation of PW [13] to a more expressive, non-affine function; 2) We extend the probability density distillation [26] procedure proposed by the original PW [13], so that it works with any non-affine invertible and differentiable function; and 3) We perform a detailed evaluation of the model across different speaking styles on a multi-speaker, multi-lingual dataset, demonstrating that our network is qualitatively and quantitatively preferred over the original PW in the waveform reconstruction and TTS tasks.
Normalizing Flows & Related Work
An NF transforms a D-dimensional real vector of continuous random variables u into another D-dimensional real vector of continuous random variables x. Usually, u is sampled from a simple base distribution (for example, a Logistic) p_u(u). In the vocoding task, x corresponds to the audio signal, which follows a probability density p_x(x). Conceptually, we can outline two blocks in an NF. One is the transformation function T, which has to be invertible and differentiable. The other is the conditioner neural network c, which predicts the parametrization h for the transformation T.
$$x = T(u; h), \qquad u = T^{-1}(x; h), \qquad h = c(u) \quad (1)$$
Given the invertible and differentiable nature of T, the density of x is well-defined and can be obtained by a change of variables:
$$p_x(x) = p_u(u)\,\lvert \det J_T(u) \rvert^{-1} \quad (2)$$
The Jacobian J_T(u) is the D × D matrix of all partial derivatives of T with respect to u. In this section, we discuss the merits and limitations of different NF architectures and outline our model design.
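As a concrete (toy) illustration of the change of variables in equation 2, consider a scalar affine map applied to a standard Logistic base; the numbers below are arbitrary.

```python
import numpy as np

def logistic_pdf(u):
    """Density of the standard Logistic L(0, 1)."""
    return np.exp(-u) / (1.0 + np.exp(-u)) ** 2

a, b = 2.0, 0.5          # parametrization h = (a, b) of the transformation
u = 0.3                  # draw from the base distribution
x = a * u + b            # forward transformation T(u; h)
jacobian = abs(a)        # |det J_T(u)| for a scalar affine map
p_x = logistic_pdf(u) / jacobian   # change of variables, equation (2)
print(x, p_x)
```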
There are two major paradigms for training NF. One paradigm is to fit the NF to the data with Maximum Likelihood Estimation (MLE) [14,16,17,18]. In practice, it means that the model computes T⁻¹ during training and T during synthesis. The other paradigm assumes that we can evaluate the target data density, and we aim to train an NF to minimize a divergence loss. Commonly this is done with knowledge distillation [26], where the data density is estimated through a teacher network [13,15]. A notable example of this training in the context of vocoding is the use of a high-quality Wavenet [5] to train an NF-based PW [13]. This paradigm requires only the forward transformation T for both training and synthesis. In both paradigms, to train the model, we have to compute the Jacobian determinant, which typically costs O(D³). However, in many practical applications, we can reduce this complexity. An autoregressive conditioner network has a Jacobian that is a lower-triangular matrix, with a determinant computable in O(D) [13,14,15,16,27,28]. It is shown [24,29] that, under the assumption of enough capacity and data, an autoregressive conditioner with non-linear transformations can approximate any continuous distribution with any desired precision, a property called universal approximation. An NF using such a conditioner can parallelize the forward transformation computation, but its inverse is sequential. This poses a challenge for training in the MLE paradigm due to the high temporal resolution of speech data. Coupling layers are a common workaround for this problem [14,28,30]. Such an architecture allows efficient computation of both forward and inverse transformations. However, it may limit the expressivity of the NF, since a significant number of dimensions are left unchanged at each flow layer [31]. For these reasons, in this work we decided to use ParallelWavenet [13], a fully-autoregressive model trained with knowledge distillation that does not require any computation of the transformation inverse.
The NF transformation has to be invertible and differentiable. The most straightforward and common design choice is to implement the transformation as an affine function [13,14,15,16,27,28]. Such a design is attractive because of its simplicity and analytical tractability. However, the drawback of such a transformation is its limited expressivity. Specifically, the output of the NF belongs to the same distribution family as its base. In some cases, this might negatively affect the capture of multimodal target distributions [23,24,25]. To overcome this limitation, the transformation might be implemented as a composition or weighted sum of monotonically increasing activation functions [23,24,25], the integral of some positive function [29], or a spline of analytically invertible monotonic functions [22,30,32]. All of the above transformations are universal approximators [24,29]. Another idea is to use constrained residual functions [33,34]. Unfortunately, for these methods, we either cannot efficiently compute the determinant of the Jacobian or the function has limited expressivity [31]. Finally, we might also construct the flow by defining an ordinary differential equation (ODE) that describes the evolution of the NF in time instead of a finite sequence of transformations [17,18]. According to recent surveys [35], Normalizing Flows with a finite composition of non-affine transformations outperform other flow-based methods. Considering the above pros and cons, we decided to enhance PW with a composition of monotonically increasing non-affine activation functions inspired by Flow++ [25].
Model description
Parallel Wavenet
The original Parallel Wavenet [13] uses conditional Inverse Autoregressive Flows [27] to shift and scale a base logistic distribution in order to model the probability density of audio waveforms. The procedure is as follows. First, to generate conditioning features m, we pass the sequence of Mel-spectrograms 2 through transposed convolutions that upsample it to match the audio waveform of length D. Then, we sample a D-long sequence of noise from the Logistic distribution, u ∼ L(0, 1). We aim to model the audio waveform x with affine transformations that shift and scale the input noise u. To predict the transformation scales α and shifts β, we use residual gated causal convolutions (RGCNN) [13,36]. The RGCNN takes as input the conditioning m and the sequence of noise u. For the t-th time step, the predicted audio sample x_t is:
$$x_t = \alpha_t \cdot u_t + \beta_t, \qquad \alpha_t, \beta_t = \mathrm{RGCNN}(u_{<t}, m) \quad (3)$$
Multiple instances of such flows are stacked on top of each other to increase the expressivity of the NF. Parallel Wavenet [13] is trained with probability density distillation, defined as the KLD loss D_KL between the teacher distribution P_T, evaluated on the student's predictions, and the student distribution P_S. In general, the KLD can be written as the difference between a Cross-Entropy term H(P_S, P_T) and an Entropy term H(P_S):
$$D_{KL}(P_S \,\|\, P_T) = H(P_S, P_T) - H(P_S) \quad (4)$$
In the original Parallel Wavenet, the student distribution follows a Logistic function, so the student Entropy can be computed analytically:
$$H(P_S) = \mathbb{E}_{u \sim L(0,1)}\Big[\sum_{t=1}^{D} \ln(\alpha_t)\Big] + 2D \quad (5)$$
The Cross-Entropy term is computed via a Monte Carlo approximation. For every sample x we draw from the student p_S, we compute all p_T(x_t | x_<t) with the teacher, and then evaluate H(p_S(x_t | x_<t), p_T(x_t | x_<t)).
$$H(P_S, P_T) = \sum_{t=1}^{D} \mathbb{E}_{p_S(x_{<t})}\Big[ H\big(p_S(x_t|x_{<t}),\, p_T(x_t|x_{<t})\big) \Big] \quad (6)$$
Sampling from the student does not require passing noise through the NF. In a single forward pass, we cache the parametrization of the Logistic distribution, and in the Monte Carlo sampling we apply the reparametrization trick [37].
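A sketch of this reparametrized sampling step follows, assuming the cached per-timestep parameters are the Logistic shift and scale; a Logistic draw is the shift plus the scale times the logit of a uniform sample.

```python
import numpy as np

def sample_logistic(mu, s, rng=np.random.default_rng(0)):
    """Reparametrized draw x = mu + s * ln(eps / (1 - eps)), eps ~ U(0, 1).

    mu and s are the cached shift/scale from a single forward pass, so in an
    autodiff framework gradients flow through them but not through eps.
    """
    eps = rng.uniform(size=np.shape(mu))
    return mu + s * (np.log(eps) - np.log1p(-eps))

print(sample_logistic(np.zeros(4), np.ones(4)))
```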
Non-affine transformation
In the original PW, a student can only output a uni-modal Logistic distribution per time step, and is therefore not able to reconstruct the multi-modal mixture of Logistics (MoL) of a Wavenet teacher [5]. To overcome this limitation, we propose to extend the affine transformation of the original PW to a non-affine function. Inspired by Flow++ [25], we implement the transformation T as the cumulative distribution function (CDF) of a mixture of N logistics (MoL), followed by an inverse sigmoid (logit) σ⁻¹ and an affine transformation. Such a transformation is invertible and differentiable: the MoL CDF domain is (0, 1), so its logit always exists, and both the MoL CDF and the logit are monotonically increasing, hence invertible. The logistics are parameterized by shifts µ and scales s, combined with mixing proportions π. The output of the logit is scaled by α and shifted by β. For the t-th time step, the predicted audio sample x_t is:

$$x_t = \sigma^{-1}\big(\mathrm{MoLCDF}(u_t; \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t)\big) \cdot \alpha_t + \beta_t, \qquad \alpha_t, \beta_t, \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t = \mathrm{RGCNN}(u_{<t}, m) \quad (7)$$
Compared to the affine function (equation 3), such a transformation is non-affine and can induce multimodality [24]. The computation of the Jacobian of the transformation is straightforward, since the derivative of MoLCDF is the MoL probability density function (PDF), and the derivatives of the logit and affine functions are also known:

$$\frac{\partial x_t}{\partial u_t} = \exp\Big(\alpha_t + \mathrm{MoLPDF}(u_t; \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t) - \ln \mathrm{MoLCDF}(u_t; \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t) - \ln\big(1 - \mathrm{MoLCDF}(u_t; \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t)\big)\Big) \quad (8)$$

MoLCDF and MoLPDF (the latter in log-space) are defined as:

$$\mathrm{MoLCDF}(u_t; \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t) = \sum_{i=1}^{N} \pi_{t,i}\,\sigma(z_{t,i}), \qquad \mathrm{MoLPDF}(u_t; \boldsymbol{\pi}_t, \boldsymbol{\mu}_t, \boldsymbol{s}_t) = \ln \sum_{i=1}^{N} \pi_{t,i} \exp\big(z_{t,i} - \ln(s_{t,i}) - 2\ln(1 + e^{z_{t,i}})\big),$$

$$\text{where } z_{t,i} = \frac{u_t - \mu_{t,i}}{s_{t,i}} \quad (9)$$
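Below is a minimal numpy sketch of equations (7)-(9) for a single timestep. It is an illustration only: the helpers follow the definitions above, and it assumes the scale is predicted in log-space (log_alpha) so that the terms inside the exponent of equation (8) stay additive.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mol_cdf(u, pi, mu, s):
    """Mixture-of-logistics CDF: sum_i pi_i * sigmoid(z_i), z_i = (u - mu_i)/s_i."""
    z = (u - mu) / s
    return np.sum(pi * sigmoid(z))

def log_mol_pdf(u, pi, mu, s):
    """Log-density of the logistic mixture (log-sum of per-component log-pdfs)."""
    z = (u - mu) / s
    return np.log(np.sum(pi * np.exp(z - np.log(s) - 2.0 * np.log1p(np.exp(z)))))

def nonaffine_step(u, pi, mu, s, log_alpha, beta):
    """Equation (7) forward pass plus the log-Jacobian from equation (8)."""
    F = mol_cdf(u, pi, mu, s)
    x = (np.log(F) - np.log1p(-F)) * np.exp(log_alpha) + beta  # logit(F)*alpha + beta
    log_jac = log_alpha + log_mol_pdf(u, pi, mu, s) - np.log(F) - np.log1p(-F)
    return x, log_jac

pi = np.array([0.5, 0.5]); mu = np.array([-0.2, 0.3]); s = np.array([0.8, 1.1])
print(nonaffine_step(0.1, pi, mu, s, log_alpha=0.0, beta=0.0))
```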
Generic and efficient training procedure
The student distribution with the non-affine transformation is no longer a uni-modal Logistic. Because of that, we have to adapt the KLD computation in the training procedure. As in the original PW, described in Section 3.1, we use a Monte Carlo approximation to estimate the KLD. However, in addition to computing predicted samples x with the reparametrization trick [37], which is required to estimate the Cross-Entropy with equation 6, we also compute the Jacobian determinant with equation 8. We do that with the cached transformation parameters for every noise sample sequence. The Jacobian allows us to evaluate the final student density p_S with equation 2. We use this to estimate the Entropy with:
$$H(P_S) = \mathbb{E}_{u \sim L(0,1)}\Big[\sum_{t=1}^{D} -\ln\big(p_u(u_t)\,\lvert\det J_T(u_t)\rvert^{-1}\big)\Big] \quad (10)$$
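A sketch of this Monte Carlo Entropy estimate follows, reusing per-sample log-Jacobians such as those from the sketch above; the input arrays are placeholders.

```python
import numpy as np

def entropy_estimate(log_pu, log_jac):
    """Monte Carlo estimate of H(P_S) = E[-ln(p_u(u) |det J_T(u)|^{-1})].

    log_pu: log base-density of each drawn noise sample.
    log_jac: log |det J_T| accumulated over all flows for each sample.
    """
    return float(np.mean(-(log_pu - log_jac)))

# Toy numbers standing in for a batch of noise samples.
print(entropy_estimate(np.array([-1.2, -0.9, -1.5]), np.array([0.3, 0.1, 0.4])))
```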
Experiments
Experimental setup
Training & Evaluation datasets.
All models used in the evaluations were trained on internal studio-quality recordings. The dataset used for training contains 22 male and 52 female voices speaking 27 languages and dialects in 10 different speaking styles. The data was balanced so that there are approximately 3000 utterances per speaker. This dataset covers a diverse range of speech vocoding scenarios, motivated by our assumption that the non-affine transformation improves the modeling of more expressive distributions. For evaluation, we extracted Mel-spectrograms from the original studio-quality recordings. The evaluation dataset contains 2700 recorded sentences covering 20 languages, with 26 speakers in 10 different speaking styles. The dataset is balanced: there are at least 100 recordings per style and 50 per speaker. For a subset of 1950 utterances, we also generated Mel-spectrograms in a given style from text with Tacotron-2-like systems [2].
Evaluation setup
We run two types of evaluations. The first is a subjective evaluation comparing the affine PW with the proposed non-affine model, executed as preference tests between the two systems on the TTS and waveform reconstruction tasks. To quantify the differences, we also run a MUltiple Stimuli with Hidden Reference and Anchor (MUSHRA) [38] evaluation of the waveform reconstruction task; apart from the two PW systems, it includes the original recordings and two other state-of-the-art neural vocoding models: WaveGlow [14] and ParallelWaveGAN [9]. The second is an objective evaluation, which compares the affine and non-affine models in terms of the Cross-Entropy between the teacher and student and the L2 Spectral Distance of the reconstructed waveform.
For hypothesis testing with the objective metrics and MUSHRA scores, we use a two-tailed t-test. For the preference test, we evaluate a one-tailed hypothesis with a binomial test. We consider the difference between systems to be statistically significant if the p-value is lower than 0.05. All subjective tests are executed on the Clickworker platform [39]. Each screen is assessed by 40 native listeners in the preference tests and 20 in the MUSHRA tests.
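A sketch of the two significance tests with SciPy is given below; the score arrays and vote counts are placeholders, and `binomtest` assumes SciPy >= 1.7 (older versions expose `binom_test` instead).

```python
from scipy import stats

# Placeholder per-utterance scores for the two systems.
scores_nonaffine = [68.0, 66.5, 70.1, 65.2, 67.3]
scores_affine = [66.9, 66.0, 68.4, 64.8, 66.1]

# Two-tailed t-test, as used for the objective metrics and MUSHRA scores.
t_stat, p_val = stats.ttest_ind(scores_nonaffine, scores_affine)

# One-tailed binomial test, as used for the preference votes.
pref = stats.binomtest(k=230, n=400, p=0.5, alternative='greater')
print(p_val < 0.05, pref.pvalue < 0.05)
```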
Training setup & Model details
All of the PW [13] models were distilled from a high-quality Wavenet teacher [5]. The teacher network uses 24 layers with 4 dilation-doubling cycles, 128 RGCNN channels, kernel size 3, and an output distribution of 10 MoLs. For the student architecture, we use 2 flows with 10 and 30 RGCNN layers with 128 channels and a dilation reset every 10 layers 3. The non-affine PW uses a mixture of 10 logistics in the MoLCDF transformation. Both models were trained on Mel-spectrogram conditioning corresponding to short audio clips, with the Adam optimizer [40] and a constant learning rate of 10⁻⁴ for 4 million iterations, using the KLD and power losses [13]. The teacher uses a batch size of 64 and audio clips of 0.3625 s duration, while the student uses 16 and 0.85 s, respectively. The WaveGlow [14] and ParallelWaveGAN [9] models were trained using open-source implementations 4.
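A heavily simplified PyTorch sketch of the optimizer setup described above follows; the model and loss are placeholders, not the actual PW student or the KLD + power loss.

```python
import torch

# Placeholder standing in for the 2-flow PW student with RGCNN conditioners.
student = torch.nn.Linear(80, 1)
optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)  # constant LR

for step in range(3):  # the paper reports 4 million iterations
    mel_batch = torch.randn(16, 80)          # stand-in for Mel conditioning
    loss = student(mel_batch).pow(2).mean()  # stand-in for KLD + power loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```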
Objective evaluation
To objectively evaluate the differences between the non-affine and the original PW, we propose two benchmarking metrics. The first is the L2 Spectral Distance between the original waveform x and a waveform x̂ reconstructed from a Mel-spectrogram. We transform the signal to a spectrum using the short-time Fourier transform (STFT) with a hop size of 256 samples and 1024 bins. The metric is computed as $\lVert\, |STFT(x)| - |STFT(\hat{x})| \,\rVert_2$. The second is the Cross-Entropy between student and teacher under the student distribution. The latter can be interpreted as a negative log-likelihood, which is a common metric used to evaluate NF [35]. It is computed with a Monte Carlo approximation, as outlined in equation 6. In Table 1, we present the average results of the objective metrics attained by the affine and non-affine transformations across different speaking styles. The non-affine PW outperforms the original model on every style, and the results are statistically significant for all styles except News Briefing. For more subjectively expressive styles, like Singing, we observe a bigger relative difference between the affine and non-affine models than for less expressive ones, like Neutral.
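A sketch of the L2 Spectral Distance under the stated STFT settings (1024 bins, hop size 256) is shown below, assuming librosa is available; the test signals are placeholders.

```python
import numpy as np
import librosa

def l2_spectral_distance(x, x_hat, n_fft=1024, hop_length=256):
    """|| |STFT(x)| - |STFT(x_hat)| ||_2 with the settings from the text."""
    S = np.abs(librosa.stft(x, n_fft=n_fft, hop_length=hop_length))
    S_hat = np.abs(librosa.stft(x_hat, n_fft=n_fft, hop_length=hop_length))
    return float(np.linalg.norm(S - S_hat))

x = np.random.randn(24000).astype(np.float32)        # 1 s of noise at 24 kHz
x_hat = x + 0.01 * np.random.randn(24000).astype(np.float32)
print(l2_spectral_distance(x, x_hat))
```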
Subjective evaluation
To understand the subjective preference of naive listeners between the proposed non-affine model, the original PW, and two other state-of-the-art neural vocoding systems, WaveGlow [14] and ParallelWaveGAN [9], we run three perceptual evaluations. To quantify the differences between all of the systems, we evaluate the waveform reconstruction task with a MUSHRA test. We ask listeners to rate the voices in terms of their naturalness, paying attention to the quality of the audio signal and the clarity of articulation; 100 means the most natural and highest-audio-quality speech, 0 the least natural and lowest-audio-quality speech. Overall, the non-affine transformation outperforms the affine ParallelWavenet with statistical significance (p-val < 0.05). The non-affine transformation closes the gap from the original PW to recordings by 10%, and from the other state-of-the-art neural vocoding systems by more than 60%. The non-affine PW achieves 95.35% relative MUSHRA when compared to recordings, while the affine PW achieves 94.83%. Detailed results are reported in Table 2.
To obtain a more sensitive subjective preference between the affine and non-affine systems, we evaluate the waveform reconstruction task with a simple preference test. Listeners can select either a preference towards one of the systems or no preference; in the case of no preference, we split the votes equally between both systems. The overall preference towards the non-affine model is confirmed with statistical significance (p-val < 0.05). Results for individual speaking styles are statistically significant for Spelling, Singing, News briefing, and Jokes; among these, the non-affine PW is preferred for all except News briefing. Results are presented in Table 3.
To understand whether the non-affine improvements hold for Mel-spectrograms synthesized with a Tacotron-2-like NTTS system, we run an additional preference test. The non-affine system overall outperforms the affine one with statistical significance (p-val < 0.05). Results for specific styles are mostly statistically insignificant, except for Whisper, which is better with the non-affine model. Results are reported in Table 3.
Conclusions & Future Work
In this work, we presented a general improvement to the probability density modeling power of Normalizing Flows (NF) used in neural vocoding. We enhanced Parallel Wavenet (PW) with a monotonically increasing non-affine activation function. The proposed model closed the naturalness and signal quality gap from the original PW to recordings by 10%, and from other state-of-the-art neural vocoding systems by more than 60%. It also reduced the L2 Spectral Distance and the Cross-Entropy computed on the multi-speaker, multi-lingual test set by 3% and 6‰ compared to the affine PW. For more expressive styles like Singing, we observe larger improvements than for less expressive ones. This work motivates several possible directions for further research. 1) Non-affine NF might significantly reduce the memory footprint and the number of floating-point operations in neural vocoding; it is reported in other domains [23,24] that non-affine models achieve the same or better quality than affine ones with a much lower number of layers. 2) The non-affine transformation might simplify the complex teacher selection process for knowledge distillation models. 3) Finally, it might help to improve the vocoding quality of other NF-based neural vocoding architectures.
Table 1: Average objective metrics with 95% confidence intervals computed on the test set. Lower (better) numbers that differ with statistical significance (p-val < 0.05, two-tailed t-test) are in bold. Results are sorted by the relative difference (RD) between the affine and non-affine transformations on L2 Spectral Distance (L2); CE denotes Cross Entropy.

| Style | L2 affine | L2 non-affine | L2 RD | CE affine | CE non-affine | CE RD |
|---|---|---|---|---|---|---|
| News briefing | 0.083±0.007 | 0.078±0.001 | -6.3% | 4.96±0.07 | 4.92±0.07 | -8.7‰ |
| Singing | 0.073±0.005 | 0.068±0.005 | -6.1% | 4.97±0.07 | 4.95±0.07 | -4.3‰ |
| Spelling | 0.051±0.003 | 0.049±0.003 | -4.2% | 4.35±0.07 | 4.32±0.07 | -6.2‰ |
| Disc Jockey | 0.058±0.003 | 0.055±0.003 | -4.0% | 4.71±0.06 | 4.68±0.06 | -7.2‰ |
| Jokes | 0.055±0.003 | 0.053±0.003 | -3.5% | 4.27±0.06 | 4.25±0.05 | -5.7‰ |
| Long Form | 0.056±0.004 | 0.055±0.004 | -3.1% | 4.67±0.08 | 4.64±0.08 | -6.9‰ |
| Emotional | 0.072±0.005 | 0.070±0.005 | -3.0% | 4.88±0.04 | 4.86±0.04 | -5.6‰ |
| Whispering | 0.314±0.008 | 0.305±0.007 | -2.9% | 5.56±0.06 | 5.55±0.06 | -2.4‰ |
| Conversational | 0.078±0.005 | 0.077±0.005 | -2.1% | 5.02±0.05 | 4.98±0.05 | -7.4‰ |
| Neutral | 0.066±0.003 | 0.065±0.003 | -2.0% | 4.62±0.02 | 4.60±0.02 | -6.6‰ |
| Overall | 0.089±0.003 | 0.086±0.003 | -2.8% | 4.76±0.02 | 4.73±0.02 | -6.1‰ |
Table 2: The MUSHRA evaluation of naturalness and signal quality. Systems are: affine (A-PW) and non-affine (NA-PW) Parallel Wavenet, WaveGlow (WG), ParallelWaveGAN (PWG), and recordings (Rec.). The highest (best) scores are in bold. '*' marks cases where the difference between A-PW and NA-PW is statistically significant (p-val < 0.05, two-tailed t-test). Results are sorted by the score of the recordings.

| Style | Rec. | NA-PW | A-PW | PWG | WG |
|---|---|---|---|---|---|
| Singing | 73.59 | 64.41 | 63.93 | 49.04 | 50.79 |
| Spelling | 72.39 | 69.34* | 68.00 | 63.48 | 59.30 |
| Jokes | 71.10 | 68.89* | 67.56 | 64.18 | 55.04 |
| Neutral | 70.73 | 67.63* | 66.89 | 61.71 | 53.52 |
| News briefing | 70.00 | 68.04 | 68.17 | 64.81 | 61.97 |
| Disc Jockey | 68.64 | 66.91 | 66.59 | 63.56 | 61.85 |
| Conversational | 66.96 | 67.00 | 66.81 | 63.34 | 61.38 |
| Emotional | 66.78 | 65.56 | 66.34* | 62.81 | 61.46 |
| Long Form | 66.42 | 65.90 | 66.49 | 63.57 | 62.84 |
| Whispering | 61.97 | 54.07 | 54.00 | 34.64 | 43.60 |
| Overall | 68.99 | 65.78* | 65.42 | 59.01 | 55.38 |
Table 3: The preference tests between affine and non-affine PW in the tasks of waveform reconstruction and TTS. Numbers correspond to the percentage of votes towards the given system; no-preference votes are split equally between the two systems. The preferred system's scores are in bold. '*' means that results are statistically significant (p-val < 0.05, one-tailed binomial test). Results are sorted by preference in the reconstruction task.

                    Reconstruction          TTS
    Style           non-affine  affine      non-affine  affine
    Spelling        51.92*      48.08       50.87       49.13
    Singing         51.81*      48.19       -           -
    Jokes           51.65*      48.35       49.45       50.55
    Conversational  50.42       49.58       50.16       49.84
    Emotional       50.35       49.65       50.33       49.67
    Whispering      50.28       49.72       51.16*      48.84
    Long Form       50.16       49.84       50.03       49.97
    Neutral         50.12       49.88       50.42       49.58
    Disc Jockey     49.08       50.92       49.77       50.23
    News briefing   49.01       50.99       50.45       49.55
    Overall         50.29*      49.71       50.42*      49.58
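Table 3's significance markers come from a one-tailed binomial test on the preference votes. The Python sketch below re-creates that test; the total vote count n is a hypothetical placeholder, since the number of collected votes is not stated here.

    # One-tailed binomial test on a preference share, as used for Table 3.
    from scipy.stats import binomtest

    n = 4000                     # hypothetical total number of preference votes
    k = round(n * 0.5029)        # votes for non-affine at the overall 50.29% share

    res = binomtest(k, n, p=0.5, alternative='greater')
    print(f"one-tailed p-value = {res.pvalue:.3f}")  # significant only for large n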
Audio samples will be made available on the amazon.science blog. We would like to thank Alexis Moinet, Vatsal Aggarwal, and Bartosz Putrycz for insightful research discussions, and David McHardy and Jaime Lorenzo Trueba for constructive criticism of the manuscript.
The original PW uses linguistic features instead of an acoustic signal representation such as the Mel-spectrogram.
The original PW [13] uses 4 flows with 10, 10, 10, 30 RGCNN layers utilizing 64 channels. We use different hyperparameters that, in our case, improve the student quality.
github.com/NVIDIA/waveglow
github.com/kan-bayashi/ParallelWaveGAN
[1] Y. Wang, R. J. Skerry-Ryan, D. Stanton, Y. Wu, R. J. Weiss, N. Jaitly et al., "Tacotron: A fully end-to-end text-to-speech synthesis model," CoRR, vol. abs/1703.10135, 2017.
[2] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang et al., "Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions," in ICASSP, 2018, pp. 4779-4783.
[3] S. Vasquez and M. Lewis, "MelNet: A generative model for audio in the frequency domain," arXiv preprint arXiv:1906.01083, 2019.
[4] H. Jun, R. Child, M. Chen, J. Schulman, A. Ramesh, A. Radford et al., "Distribution augmentation for generative modeling," in Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, vol. 119, 2020, pp. 5006-5019.
[5] A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves et al., "WaveNet: A generative model for raw audio," in The 9th ISCA Speech Synthesis Workshop, 2016, p. 125.
[6] R. V. Cox, S. F. D. C. Neto, C. Lamblin, and M. H. Sherif, "ITU-T coders for wideband, superwideband, and fullband speech communication [series editorial]," IEEE Communications Magazine, vol. 47, no. 10, pp. 106-109, 2009.
[7] N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, N. Casagrande, E. Lockhart et al., "Efficient neural audio synthesis," in Proceedings of the 35th International Conference on Machine Learning (ICML), 2018, pp. 2415-2424.
[8] J.-M. Valin and J. Skoglund, "LPCNet: Improving neural speech synthesis through linear prediction," in ICASSP, 2019, pp. 5891-5895.
[9] R. Yamamoto, E. Song, and J. Kim, "Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram," in ICASSP, 2020, pp. 6199-6203.
[10] K. Kumar, R. Kumar, T. de Boissiere, L. Gestin, W. Z. Teoh, J. Sotelo et al., "MelGAN: Generative adversarial networks for conditional waveform synthesis," in Advances in Neural Information Processing Systems 32, 2019, pp. 14881-14892.
[11] J. Kong, J. Kim, and J. Bae, "HiFi-GAN: Generative adversarial networks for efficient and high fidelity speech synthesis," arXiv preprint arXiv:2010.05646, 2020.
[12] A. Mustafa, N. Pia, and G. Fuchs, "StyleMelGAN: An efficient high-fidelity adversarial vocoder with temporal adaptive normalization," in ICASSP, 2021, pp. 6034-6038.
[13] A. van den Oord, Y. Li, I. Babuschkin, K. Simonyan, O. Vinyals, K. Kavukcuoglu et al., "Parallel WaveNet: Fast high-fidelity speech synthesis," in Proceedings of the 35th International Conference on Machine Learning (ICML), 2018, pp. 3915-3923.
[14] R. Prenger, R. Valle, and B. Catanzaro, "WaveGlow: A flow-based generative network for speech synthesis," in ICASSP, 2019, pp. 3617-3621.
[15] W. Ping, K. Peng, and J. Chen, "ClariNet: Parallel wave generation in end-to-end text-to-speech," in International Conference on Learning Representations (ICLR), 2019.
[16] S. Kim, S.-G. Lee, J. Song, J. Kim, and S. Yoon, "FloWaveNet: A generative flow for raw audio," arXiv preprint arXiv:1811.02155, 2018.
[17] N. Wu and Z. Ling, "WaveFFJORD: FFJORD-based vocoder for statistical parametric speech synthesis," in ICASSP, 2020, pp. 7214-7218.
[18] H. Kim, H. Lee, W. H. Kang, S. J. Cheon, B. J. Choi, and N. S. Kim, "WaveNODE: A continuous normalizing flow for speech synthesis," arXiv preprint arXiv:2006.04598, 2020.
[19] Y. Jiao, A. Gabryś, G. Tinchev, B. Putrycz, D. Korzekwa, and V. Klimkov, "Universal neural vocoding with Parallel WaveNet," in ICASSP, 2021, pp. 6044-6048.
[20] S. Arora, R. Ge, Y. Liang, T. Ma, and Y. Zhang, "Generalization and equilibrium in generative adversarial nets (GANs)," in International Conference on Machine Learning, PMLR, 2017, pp. 224-232.
[21] B. Wu, S. Zhao, C. Chen, H. Xu, L. Wang, X. Zhang et al., "Generalization in generative adversarial networks: A novel perspective from privacy protection," arXiv preprint arXiv:1908.07882, 2019.
[22] T. Müller, B. McWilliams, F. Rousselle, M. Gross, and J. Novák, "Neural importance sampling," ACM Transactions on Graphics (TOG), vol. 38, no. 5, pp. 1-19, 2019.
[23] N. De Cao, W. Aziz, and I. Titov, "Block neural autoregressive flow," in Uncertainty in Artificial Intelligence, PMLR, 2020, pp. 1263-1273.
[24] C. Huang, D. Krueger, A. Lacoste, and A. C. Courville, "Neural autoregressive flows," CoRR, vol. abs/1804.00779, 2018.
[25] J. Ho, X. Chen, A. Srinivas, Y. Duan, and P. Abbeel, "Flow++: Improving flow-based generative models with variational dequantization and architecture design," in International Conference on Machine Learning, PMLR, 2019, pp. 2722-2730.
[26] G. Hinton, O. Vinyals, and J. Dean, "Distilling the knowledge in a neural network," arXiv preprint arXiv:1503.02531, 2015.
[27] D. P. Kingma, T. Salimans, and M. Welling, "Improving variational inference with inverse autoregressive flow," CoRR, vol. abs/1606.04934, 2016.
[28] D. P. Kingma and P. Dhariwal, "Glow: Generative flow with invertible 1x1 convolutions," arXiv preprint arXiv:1807.03039, 2018.
[29] P. Jaini, K. A. Selby, and Y. Yu, "Sum-of-squares polynomial flow," in International Conference on Machine Learning, PMLR, 2019, pp. 3009-3018.
[30] C. Durkan, A. Bekasov, I. Murray, and G. Papamakarios, "Neural spline flows," arXiv preprint arXiv:1906.04032, 2019.
[31] G. Papamakarios, E. Nalisnick, D. J. Rezende, S. Mohamed, and B. Lakshminarayanan, "Normalizing flows for probabilistic modeling and inference," arXiv preprint arXiv:1912.02762, 2019.
[32] C. Durkan, A. Bekasov, I. Murray, and G. Papamakarios, "Cubic-spline flows," arXiv preprint arXiv:1906.02145, 2019.
[33] J. Behrmann, W. Grathwohl, R. T. Chen, D. Duvenaud, and J.-H. Jacobsen, "Invertible residual networks," in International Conference on Machine Learning, PMLR, 2019, pp. 573-582.
[34] R. T. Chen, J. Behrmann, D. Duvenaud, and J.-H. Jacobsen, "Residual flows for invertible generative modeling," arXiv preprint arXiv:1906.02735, 2019.
[35] I. Kobyzev, S. Prince, and M. Brubaker, "Normalizing flows: An introduction and review of current methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 1-1, 2020.
[36] A. v. d. Oord, N. Kalchbrenner, O. Vinyals, L. Espeholt, A. Graves, and K. Kavukcuoglu, "Conditional image generation with PixelCNN decoders," arXiv preprint arXiv:1606.05328, 2016.
[37] D. P. Kingma and M. Welling, "Auto-encoding variational Bayes," arXiv preprint arXiv:1312.6114, 2013.
[38] ITU-R, "Recommendation BS.1534-1: Method for the subjective assessment of intermediate sound quality (MUSHRA)," International Telecommunications Union, Geneva, 2001.
[39] Clickworker, https://www.clickworker.com/machine-learning-ai-artificial-intelligence/, August 2020.
[40] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations (ICLR), 2015.
| [] |
[
"Collaborative Filter-ing with Topic and Social Latent Factors Incorporating Implicit Feedback",
"Collaborative Filter-ing with Topic and Social Latent Factors Incorporating Implicit Feedback"
] | [
"Guang-Neng Hu \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Nanjing Univeristy \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Xin-Yu Dai \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Nanjing Univeristy \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Feng-Yu Qiu \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Nanjing Univeristy \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Rui Xia \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Tao Li \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Shu-Jian Huang \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Nanjing Univeristy \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Jia-Jun Chen \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Guang-Neng Hu \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Xin-Yu Dai \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Feng-Yu Qiu \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Rui Xia \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Tao Li \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Shu-Jian Huang \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n",
"Jia-Jun Chen \nNanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n\n"
] | [
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n",
"Nanjing University of Science and Technology\nNanjing University of Posts and Telecommunications\nFlorida International University\nNanjing Univeristy\n"
] | [
"ACM Trans. Knowl. Discov. Data"
] | Recommender systems (RSs) provide an effective way of alleviating the information overload problem by selecting personalized items for different users. Latent factors based collaborative filtering (CF) has become the popular approaches for RSs due to its accuracy and scalability. Recently, online social networks and user-generated content provide diverse sources for recommendation beyond ratings. Although social matrix factorization (Social MF) and topic matrix factorization (Topic MF) successfully exploit social relations and item reviews, respectively; both of them ignore some useful information. In this paper, we investigate the effective data fusion by combining the aforementioned approaches. First, we propose a novel model MR3 to jointly model three sources of information (i.e., ratings, item reviews, and social relations) effectively for rating prediction by aligning the latent factors and hidden topics. Second, we incorporate the implicit feedback from ratings into the proposed model to enhance its capability and to demonstrate its flexibility. We achieve more accurate rating prediction on real-life datasets over various state-of-the-art methods. Furthermore, we measure the contribution from each of the three data sources and the impact of implicit feedback from ratings, followed by the sensitivity analysis of hyperparameters. Empirical studies demonstrate the effectiveness and efficacy of our proposed model and its extension. | 10.1145/3127873 | [
"https://arxiv.org/pdf/1803.09551v1.pdf"
] | 3,410,613 | 1803.09551 | 3bf15dea87fb06ec5e4d3d5d61ec804060d60e54 |
Collaborative Filtering with Topic and Social Latent Factors Incorporating Implicit Feedback

Guang-Neng Hu, Xin-Yu Dai, Feng-Yu Qiu, Rui Xia, Tao Li, Shu-Jian Huang, Jia-Jun Chen

Nanjing University; Nanjing University of Science and Technology; Nanjing University of Posts and Telecommunications; Florida International University

ACM Trans. Knowl. Discov. Data, March 2018. DOI: 10.1145/3127873. arXiv:1803.09551.

CCS Concepts: Information systems → Recommender systems; Human-centered computing → Collaborative filtering.

Additional Key Words and Phrases: Recommendation Systems, Collaborative Filtering, Implicit Feedback, Hidden Topics, Latent Social Factors

ABSTRACT

Recommender systems (RSs) provide an effective way of alleviating the information overload problem by selecting personalized items for different users. Latent factors based collaborative filtering (CF) has become a popular approach for RSs due to its accuracy and scalability. Recently, online social networks and user-generated content have provided diverse sources for recommendation beyond ratings. Although social matrix factorization (Social MF) and topic matrix factorization (Topic MF) successfully exploit social relations and item reviews, respectively, both of them ignore some useful information. In this paper, we investigate effective data fusion by combining the aforementioned approaches. First, we propose a novel model MR3 to jointly model three sources of information (i.e., ratings, item reviews, and social relations) effectively for rating prediction by aligning the latent factors and hidden topics. Second, we incorporate the implicit feedback from ratings into the proposed model to enhance its capability and to demonstrate its flexibility. We achieve more accurate rating prediction on real-life datasets over various state-of-the-art methods. Furthermore, we measure the contribution from each of the three data sources and the impact of implicit feedback from ratings, followed by a sensitivity analysis of hyperparameters. Empirical studies demonstrate the effectiveness and efficacy of our proposed model and its extension.
INTRODUCTION
For all the benefits of information abundance and communication technology, "information overload" is one of the dilemmas of the digital age. Recommender systems (RSs) are instrumental in tackling this problem [Sarwar et al. 2001; Linden et al. 2003; Adomavicius and Tuzhilin 2005; Mnih and Salakhutdinov 2007; Koren et al. 2009]; they help offer potential items of interest to users. However, existing hybrid approaches have two drawbacks. First, they did not use effective components like HFT (hidden factors and topics, one of the Topic MF methods) [McAuley and Leskovec 2013] to exploit item reviews and LOCABAL (local and global, one of the Social MF methods) [Tang et al. 2013b] to exploit social relations. Second, they did not mine the data sources more deeply, for instance by incorporating implicit feedback from ratings, though RSs can indeed benefit from implicit information; this benefit has already been demonstrated by the SVD++ [Koren 2008; Koren et al. 2009] and TrustSVD [Guo et al. 2015] models. In this paper, we attempt to overcome these two drawbacks.
Contributions. First, we investigate the effectiveness of fusing social relations and reviews for rating prediction in a novel way, inspired by the complementarity of the two independent sources for recommendation. The core idea is the alignment between latent factors found by Social MF and hidden topics found by Topic MF to form a unified model. Through latent factors and hidden topics we can learn representations of the user and item entities from heterogeneous data sources, and the connections among data sources are reflected in the dependencies among these representations. In this way, we can gain the maximum benefit from all of the information effectively.
Second, we mine the data more deeply by incorporating implicit feedback from ratings into the proposed model to enhance its capability and to demonstrate its flexibility. The core idea is to learn an extra implicit feature matrix that captures the influence of rated items. Due to the sparseness of the data, users with few ratings will have latent features close to the average, so their predicted ratings will be close to the items' averages. Through the implicit features, a user's rated items have an a priori impact on her ratings of unseen items.
Our main contributions are outlined as follows.
- Proposing a novel model MR3 to jointly model user-item ratings, social network structure, and item reviews for rating prediction, along with an extended Social MF method that exploits the ratings and social relations more tightly by capturing the graph structure of neighbors; the MR3 model integrates two effective components. (Section 4)
- Extending the proposed model to obtain a new model MR3++ by incorporating implicit feedback from ratings to enhance its capability and to demonstrate its flexibility; the extended model mines the limited information more deeply by introducing implicit features which capture the influence of rated items. (Section 5)
- Adapting an alternating optimization algorithm to learn the proposed models, which contain different kinds of parameters. (Section 6)
- Evaluating the proposed models extensively on two real-world datasets to demonstrate their performance and to understand their working. (Section 7)
A preliminary version of the work, i.e. MR3, has been published in [Hu et al. 2015]. In this journal submission, we have extended our previous conference paper from the following aspects. First, we propose a new model MR3++ by incorporating implicit feedback from ratings to mine the limited information more deeply (Section 5). Second, we refine the preliminaries (Section 3) and abstract the learning processing (Section 6) which are suitable for both MR3 and MR3++. Third, we evaluate the newly proposed model extensively on two real-world datasets, including: (i) prediction performance (Section 7.3.2), (ii) impact of implicit feedback (Section 7.4.3) and its comparison with contribution of more data sources (Section 7.4.4), and (iii) the hyperparameters analysis (Section 7.5.2). Finally, we give further analysis of contributions from auxiliary sources of data (Section 7.4.2).
The organization of this paper is as follows. We first review related works in Section 2. We introduce the notation and preliminaries in Section 3. We present details of the proposed model in Section 4, and its extension to incorporate implicit feedback in Section 5. Their learning processes are described in Section 6. In Section 7, we demonstrate our methods empirically on two real-life datasets. We give concluding remarks and discussion of some future work in Section 8.
RELATED WORKS
We review some related works on collaborative filtering based recommender systems, which are divided into four categories according to information sources they exploit among ratings, reviews, and social relations. Implicit feedback from ratings is also reviewed in the corresponding category.
Collaborative Filtering. Collaborative filtering (CF) recommender approaches have two types: memory-based CF and model-based CF [Adomavicius and Tuzhilin 2005; Ekstrand et al. 2011]. They both assume that if users rated items similarly in the past, then they will be likely to rate other items similarly in the future. Memory-based CF methods are grouped into user-based CF and item-based CF. The former predicts the rating of an active user based on the ratings of other similar users on the item [Breese et al. 1998], while the latter is based on the ratings of other similar items given by the same user [Sarwar et al. 2001; Linden et al. 2003]. The two kinds of memory-based CF can be unified by similarity fusion in a generative probabilistic model [Wang et al. 2006]. The similarity between two users or two items can be measured by Pearson correlation or cosine similarity computed from past rating history. Model-based CF learns a model from past ratings and then uses the learned model to predict unseen ratings. Latent semantics models [Hofmann 2004] are early representative model-based CF methods. Latent factor CF, or matrix factorization based CF, which learns a latent vector of preferences for each user and a latent vector of attributes for each item, has gained popularity and become the standard model for recommenders due to its accuracy and scalability [Koren et al. 2009; Mnih and Salakhutdinov 2007]. These matrix factorization (MF) techniques include maximum margin MF [Rennie and Srebro 2005], nonnegative MF [Gu et al. 2010], and collective MF [Singh and Gordon 2008]. A relaxation of the assumption that interprets ratings as numerical values is to interpret them as ordinal ones [Koren and Sill 2011].
Implicit feedback from ratings can readily be added into the basic matrix factorization based CF models [Koren 2008;Mnih and Salakhutdinov 2007;Rendle 2010]. The intuition behind the implicit information from ratings is that users who have rated the same/similar items are more likely to have similar preferences than those who have not, in an a priori sense. Another way to exploit implicit feedback from ratings is to deduce the preference-confidence pairs from the raw observed ratings [Hu et al. 2008]. The observed rating data is treated as an indication of positive and negative preferences associated with varying confidence levels. The resulting objective function sums over all the full matrix entries rather than over the only observed ones [Liu et al. 2010]. CF models, however, suffer from data sparsity and the imbalance of ratings; and they perform poorly on cold users and cold items for which there are no or few data. Currently, there are mainly two threads to alleviate these problems: topic matrix factorization integrating reviews text information and social matrix factorization integrating social network information.
Topic Matrix Factorization. One research thread, which we call topic matrix factorization (Topic MF), is to integrate ratings with item contents or reviews text [Wang and Blei 2011;Gopalan et al. 2014;McAuley and Leskovec 2013;Ling et al. 2014;Bao et al. 2014b]. Early works [Jakob et al. 2009;Ganu et al. 2009] extract the fine-grained ratings of item aspects from online reviews and then use them as content-based features for collaborative filtering. Aspects of a movie have actors, genres and visual effects; aspects of a restaurant have price, cleanness and service; and aspects of a hotel have location, cleanliness, and room view. The extraction of aspects needs domain knowledge and some amount of manual interaction. Besides item reviews, topic modeling approaches have been incorporated into recommender systems to find the hidden topics of documents like scientific articles [Blei et al. 2003;Wang and Blei 2011;Gopalan et al. 2014]. Such methods are belonging to One-Class CF [Pan et al. 2008] where the dimensions they discover are not correlated with ratings.
In recent works, some authors adopt latent Dirichlet allocation (LDA) [McAuley and Leskovec 2013] or nonnegative matrix factorization (NMF) [Bao et al. 2014b] to learn latent topic factors from item reviews and meanwhile adopt a matrix factorization model to exploit the ratings. To bridge the topic-specific factors and rating-specific factors, softmax transformations/exponential functions are proposed to link the two. These methods assume that the dimensionality discovered in ratings is the same as that found in reviews. Ling et al [Ling et al. 2014] replaced the matrix factorization model with a mixture of Gaussian to avoid the difficult choice of the transformations; Xu et al [Xu et al. 2014] used the co-clustering of user community and item group to generate the rating distributions and topic distributions, allowing the different dimensionality of user factors and item factors; and Diao et al [Diao et al. 2014] introduced the aspect-based model to link the interest distribution of users and the content distribution for movies, allowing the dimensionality of the two to be different. In general, Topic MF methods combine latent factors in ratings with latent topics in item reviews. Nevertheless, Topic MF ignores some useful information, e.g., social relations.
Social Matrix Factorization. Another research thread, which we call social matrix factorization (Social MF), is to combine ratings with social relations [Ma et al. 2008; Ma et al. 2011; Jamali and Ester 2011; Tang et al. 2013b; Bao et al. 2014a; Guo et al. 2015]. Early work [Massa and Avesani 2007] computed the trust value between users from the trust network to replace the similarity weight in user-based CF. Empirical results show that recommender systems incorporating trust information are effective in terms of accuracy while alleviating the cold-user problem. This is a memory-based CF, or trust-aware neighborhood, model and does not study the rating matrix and the trust network systematically.
In recent works, the authors [Ma et al. 2008;Chaney et al. 2015;Tang et al. 2013b;Yang et al. 2013;Bao et al. 2014a;Guo et al. 2015] factorize the rating matrix and the social matrix simultaneously assuming that they share the common latent user space. They exploit the social relations from multiple views: 1) local and global (LOCABAL) view [Tang et al. 2013b] where local perspective reveals the correlations between the user and her local neighbors while global perspective indicates the reputation of users in the global network; 2) trustee and truster (TrustMF) view [Yang et al. 2013] where the trustee model captures how others follow the rating of a user while the truster model captures how other users affect the rating of a user; and 3) decomposed trust view [Bao et al. 2014a] where four trust aspects (benevolence, integrity, competence, and predictability) are formulated to predict the trust values and the predicted trust is combined with the similarity of the latent user feature vectors to get the total trust between two users.
Another way to exploit social network information is to use it as a social regularization (SoReg) which constrains that the latent factors of users should be close to the average of their trusted neighbors [Ma et al. 2011]. A similar work to this is also proposed in [Jamali and Ester 2011] to allow trust propagation. Nevertheless, Social MF ignores some useful information, e.g., reviews text.
Hybrid Recommender Approaches. There is a tendency towards hybrid approaches. The authors in [Fang and Si 2011] propose matrix co-factorization techniques to exploit the user and item side information for one-class collaborative filtering [Pan et al. 2008]; their objective is to minimize the reconstruction loss of both the user-word and item-word TFIDF weight matrices. A recent work to exploit the three types of information for recommendation [Chen et al. 2014] adopts CTR (collaborative topic regression, one of the Topic MF methods) [Wang and Blei 2011] to exploit ratings and reviews, and adopts SoReg (social regularization, one of the Social MF methods) [Ma et al. 2011] to exploit ratings and social relations. Experimental results show better performance compared to the two individual components. Similar methods are also proposed for tag recommendation [Wang et al. 2013], celebrity recommendation [Ding et al. 2013], and article recommendation [Purushotham et al. 2012]. However, these models have two drawbacks. First, the two components they used are not the most effective ones; better components have been proposed, like the HFT (hidden factors and topics, one of the Topic MF methods) model [McAuley and Leskovec 2013] and the LOCABAL (local and global, one of the Social MF methods) model [Tang et al. 2013b]. Second, they did not mine the data sources more deeply, e.g., by exploiting implicit feedback from ratings, though RSs can indeed benefit from implicit information as demonstrated by the SVD++ and TrustSVD models. In this paper, we attempt to overcome these two drawbacks.
PRELIMINARIES
Before proposing our models, we briefly review the representative approaches that exploit the three types of information individually. To this end, we first introduce the notation related to the three data sources shown in Figure 1.

Notations. Suppose there are M users P = {u_1, ..., u_M} and N items Q = {i_1, ..., i_N}. We reserve u, v, w to index the users, and i, j, k to index the items. Let R ∈ R^{M×N} denote the rating matrix, where the entry R_{u,i} is the rating of user u on item i, and we mark a zero if it is unknown. The task of rating prediction is to predict the unknown/missing ratings from the observed data.

In addition to this explicit rating information, other side information sources may exist. One such source is social relations. Users connect to others in a social network, where a link between two users indicates their friendship or trust relation. We use T ∈ R^{M×M} to indicate the user-user social relations, where the entry T_{u,v} = 1 if user u has a relation to user v, and zero otherwise. Another side data source is item reviews. Items have affiliated content information, e.g., reviews commented by users. The observed review data d_{u,i} is a piece of text about item i written by user u, often along with a rating score R_{u,i}. Notations used throughout the paper are summarized in Table I.
Rating Information. For the information source of ratings, matrix factorization based latent factor models [Mnih and Salakhutdinov 2007; Koren et al. 2009] mainly find the latent user-specific feature matrix P = [P_1, ..., P_M] ∈ R^{F×M} and item-specific feature matrix Q = [Q_1, ..., Q_N] ∈ R^{F×N} to approximate the observed rating matrix in the least-squares sense (more precisely, regularized least squares or ridge regression), obtained by solving the following problem

    min_{P,Q} Σ_{R_{u,i}≠0} (R_{u,i} − R̂_{u,i})² + λ(‖P‖²_{Fro} + ‖Q‖²_{Fro}),    (1)

where λ is the regularization parameter to avoid over-fitting, ‖·‖_{Fro} denotes the Frobenius norm, and the predicted ratings are

    R̂_{u,i} = µ + b_u + b_i + P_u^T Q_i.    (2)

The parameters µ, b_u and b_i are the mean of ratings, the bias of the user, and the bias of the item, respectively. The F-dimensional feature vectors P_u and Q_i represent users' preferences and items' characteristics, and their dot products capture the interaction or match degree.

Table I. Notations.
    N_u                  the set of items rated by user u                               set
    T_u                  the set of users trusted by user u                             set
    R_{u,i}, R^b_{u,i}   rating of item i by user u, and its implicit binary rating    R ∈ R^{M×N}, R^b ∈ {0,1}^{M×N}
    T_{u,v}              social relation between user u and v                           T ∈ N^{M×M}
    w_{d,n}              the n-th word in doc d                                         doc-term matrix, w ∈ N^{N×L}
    W_{u,i}              weight on the rating of item i given by user u                 pre-computed, W ∈ R^{N×N}
    C_{u,v}              social strength (trust value) between user u and v            pre-computed, C ∈ R^{M×M}
    S_{u,v}              social similarity between user u and v                         pre-computed, S ∈ R^{M×M}
    F                    dimensionality of latent factors/topics                        hyper-parameter, scalar
    P_u                  F-dimensional feature vector for user u                        parameters, P ∈ R^{F×M}
    Q_i                  F-dimensional feature vector for item i                        parameters, Q ∈ R^{F×N}
    Y_j                  F-dimensional implicit feature vector for item j               parameters, Y ∈ R^{F×N}
    θ_i                  F-dimensional topic distribution for item i                    parameters, θ ∈ ∆^{F×N}
    φ_f, ψ_f             word distribution for topic f, and the unnormalized one       parameters, φ ∈ ∆^{L×F}
    H                    social correlation matrix                                      parameters, H ∈ R^{F×F}

Social Information. For the information source of social relations, social matrix factorization methods [Zhu et al. 2007; Tang et al. 2013b] mainly find the latent social-specific feature matrix P and the social correlation matrix H ∈ R^{F×F} to approximate the observed social similarity matrix S ∈ R^{M×M} in the least-squares sense, by solving the following problem

    min_{P,H} Σ_{T_{u,v}≠0} (S_{u,v} − P_u^T H P_v)² + λ(‖P‖²_{Fro} + ‖H‖²_{Fro}),    (3)
where S_{u,v} is the social similarity between user u and her trustee v, defined as the cosine similarity between their rating vectors

    S_{u,v} = (Σ_i R_{u,i} · R_{v,i}) / (√(Σ_i R²_{u,i}) · √(Σ_i R²_{v,i})).    (4)
The assumption that makes the above method work is that the latent social-specific matrix is shared with the latent user-specific matrix; both are referred to as P here. Namely, users have a dual identity: one is involved in rating behavior and the other in social behavior.
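To make the two building blocks introduced so far concrete, here is a minimal Python sketch of the biased predictor of Eq. (2) and the cosine similarity of Eq. (4); the toy sizes, random initialization, and rating values are purely illustrative, not from the paper.

    # Toy implementations of Eq. (2) and Eq. (4).
    import numpy as np

    M, N, F = 100, 200, 10
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(F, M))          # user latent factors
    Q = rng.normal(scale=0.1, size=(F, N))          # item latent factors
    b_u, b_i, mu = np.zeros(M), np.zeros(N), 3.5
    R = np.zeros((M, N)); R[0, 0], R[1, 0] = 5, 4   # toy ratings, 0 = missing

    def predict(u, i):
        """Eq. (2): mu + b_u + b_i + P_u^T Q_i."""
        return mu + b_u[u] + b_i[i] + P[:, u] @ Q[:, i]

    def social_similarity(u, v):
        """Eq. (4): cosine similarity between the rating vectors of users u, v."""
        denom = np.sqrt((R[u] ** 2).sum()) * np.sqrt((R[v] ** 2).sum())
        return float(R[u] @ R[v] / denom) if denom > 0 else 0.0

    print(predict(0, 0), social_similarity(0, 1))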
Review Information. For the information source of reviews, topic modeling approaches are used to find the item properties and hidden topics [Blei et al. 2003; McAuley and Leskovec 2013]. The negative log-likelihood (NLL) of the reviews collection is defined as

    − Σ_{d=1}^{N} Σ_{n∈N_d} (log θ_{z_{d,n}} + log φ_{z_{d,n}, w_{d,n}}),    (5)
where the parameters θ and φ are the topic and word distributions, respectively; w_{d,n} and z_{d,n} are the word and the corresponding topic in doc d. Reviews explain the ratings, i.e., why the users rate in that way. Intuitively, a certain property of the item is probably discussed by a specific distribution of words, which corresponds to a certain topic. The following softmax transformation sharpens this intuition and also bridges the gap between the real-valued parameters Q_i ∈ R^F associated with ratings and the corresponding probabilistic ones θ_i ∈ ∆^F associated with reviews,
    θ_{i,f} = exp(κ Q_{i,f}) / Σ_{f'=1}^{F} exp(κ Q_{i,f'}),    (6)
where the parameter κ controls the 'peakiness' of the transformation and can be jointly learned with the other parameters. The dependencies among data matrices and parameter matrices in the three different recommendation approaches are shown in Figure 2: the left subfigure is the latent-factor CF approach exploiting the rating information; the middle subfigure is the social matrix factorization approach integrating ratings with social relations; and the right subfigure is the topic matrix factorization approach combining ratings with item reviews.
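As a small illustration, the following sketch implements the softmax link of Eq. (6); the max-subtraction is a standard numerical-stability step, not part of the paper's formulation.

    # Eq. (6): map real-valued item factors Q_i to a topic distribution theta_i.
    import numpy as np

    def q_to_theta(Q_i, kappa=1.0):
        logits = kappa * Q_i
        logits -= logits.max()        # subtract max for numerical stability
        e = np.exp(logits)
        return e / e.sum()            # theta_i lies on the F-simplex

    print(q_to_theta(np.array([0.2, -1.0, 0.5]), kappa=2.0))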
THE PROPOSED MODEL
In this section, we propose a model to solve the following problem. We call it Problem 1; it requires modeling three data sources simultaneously.
Problem 1
Problem 1. Rating Prediction with Social Relations and Reviews.
Input: 1) a rating matrix R, 2) a social network among users T , 3) a reviews collection along with the ratings D, 4) a user u in the user set P, and 5) an item i in the item set Q.
Output: the predicted preference of user u on item i, where u ∈ P and i ∈ Q.
In Problem 1, to predict the unknown ratings, we have three types of information to exploit: ratings, social relations, and item reviews.
MR3: A Model of Ratings, Item Reviews and Social Relations
In Section 3, we reviewed the three kinds of approaches to exploit the three kinds of information sources individually (see Figure 2), i.e., matrix factorization based collaborative filtering for the information source of ratings (see Eq. (1), or Figure 2(a)), social matrix factorization for the information source of social relations (see Eq. (3), or Figure 2(b)), and topic matrix factorization for the information source of item reviews (see Eq. (5), or Figure 2(c)). With these preliminaries in mind, we can present our solution to Problem 1, exploiting all of the data sources simultaneously.
For Problem 1, the main challenge is how to effectively fuse the three heterogeneous data sources to form a unified model. We tackle this challenge by combining the two parts, i.e., Social MF and Topic MF, described in the following. The core idea is, based on collaborative filtering, the alignment between latent factors and hidden topics found by the above two parts.
In one part, the LOCABAL (local and global, one of the Social MF methods) model [Tang et al. 2013b] exploits the ratings and social relations by incorporating social latent factors into collaborative filtering. The goals are to model ratings accurately while also capturing the social context, by solving the following problem:
    min_{P,Q,H} Σ_{R_{u,i}≠0} W_{u,i}(R_{u,i} − R̂_{u,i})² + λ_rel Σ_{T_{u,v}≠0} (S_{u,v} − P_u^T H P_v)² + λ Ω(Θ),    (7)
where the rating weights W_{u,i} = 1/(1 + log r_u) are computed from the PageRank scores of users in the social network, representing the global perspective of social context [Tang et al. 2013b]; here r_u is the rank of user u in decreasing order of PageRank score, i.e., top-ranked users have high ranking scores. The regularization parameter λ_rel controls the contribution from social relations, the parameters are Θ = {P, Q, H}, and the regularization term is given by Ω(Θ) = ‖P‖²_{Fro} + ‖Q‖²_{Fro} + ‖H‖²_{Fro}. In another part, the HFT (hidden factors and topics, one of the Topic MF methods) model [McAuley and Leskovec 2013] exploits the ratings and item reviews by incorporating topic latent factors into collaborative filtering. The goals are both to model ratings accurately and to make the observed reviews likely, by solving the following problem:
    min_{P,Q,Φ} Σ_{R_{u,i}≠0} (R_{u,i} − R̂_{u,i})² − λ_rev Σ_{d=1}^{N} Σ_{n∈N_d} (log θ_{z_{d,n}} + log φ_{z_{d,n}, w_{d,n}}),    (8)
where λ rev controls the contribution from reviews, and parameters Φ = {θ, φ}. The connection between ratings and reviews is the coupling between parameters Q i and θ i as shown in Eq.(6). Based on collaborative filtering, we can exploit the three data sources simultaneously by combining the above two parts (i.e., Eq. (7) and Eq. (8)). By aligning latent factors and topics, we connect Social MF and Topic MF through the proposed model MR3 (model of rating, relation, and review), which minimizes the following problem [Hu et al. 2015]:
    L(Θ, Φ, z, κ) ≜ Σ_{R_{u,i}≠0} W_{u,i}(R_{u,i} − R̂_{u,i})² − λ_rev Σ_{d=1}^{N} Σ_{n∈N_d} (log θ_{z_{d,n}} + log φ_{z_{d,n}, w_{d,n}}) + λ_rel Σ_{T_{u,v}≠0} C_{u,v}(S_{u,v} − P_u^T H P_v)² + λ Ω(Θ),    (9)
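The structure of Eq. (9) can be summarized schematically as follows; the four inputs are hypothetical stand-ins for the sums written out in the equation, named only to expose the structure of the objective.

    # Schematic assembly of the MR3 objective of Eq. (9) from precomputed terms.
    def mr3_loss(rating_sq_err, review_nll, social_sq_err, frob_penalty,
                 lam_rev, lam_rel, lam):
        """Weighted rating error + lam_rev * review NLL
           + lam_rel * trust-weighted social error + lam * Frobenius penalty."""
        return (rating_sq_err + lam_rev * review_nll
                + lam_rel * social_sq_err + lam * frob_penalty)

    # toy numbers standing in for the four sums of Eq. (9)
    print(mr3_loss(120.5, 3400.0, 15.2, 8.7, lam_rev=0.05, lam_rel=0.5, lam=0.1))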
Beyond the LOCABAL model, we borrow the idea of trust values from the SoRec method [Ma et al. 2008] to exploit the ratings and social relations more tightly:

    C_{u,v} = d_v^− / (d_u^+ + d_v^−),    (10)
where the outdegree d_u^+ is the number of users whom u trusts, while the indegree d_v^- is the number of users who trust v. The trust values are used to capture the graph structure of neighbors, representing social influence locality, i.e., user behaviors are mainly influenced by close/direct friends in their ego networks [Zhang et al. 2013].

Before we delve into the learning algorithm, a brief discussion of Eq. (9) is in order. On the right-hand side, the first term is the rating squared error weighted by user reputation in the social network; the second term is the negative log-likelihood of the item reviews corpus; the third term is the local social context factorization weighted by trust values among users; and the last term is the Frobenius-norm penalty on the parameters to control over-fitting. The contributions from item reviews and social relations are controlled by the two hyper-parameters λ_rev and λ_rel. The connection between ratings and social relations is the shared user latent feature space P; ratings and reviews are linked through the transformation involving Q and θ shown in Eq. (6), where the parameter κ controls the peakiness of the transformation; and all three data sources are jointly modeled based on collaborative filtering with topic and social latent factors. The dependencies among the parameter and data matrices in the proposed model are depicted in Figure 3. Note that the dotted line between Q and θ indicates the fixed transformation between them (see Eq. (6)) and does not express any distributional dependency; hence the figure does not describe a truly Bayesian generative model as a whole, and its purpose is to clearly display the relationships among the matrices of parameters and data.

eSMF. We separate the following part from the proposed model MR3 and denote it as the extended Social MF (eSMF) model [Hu et al. 2015]:

    L(Θ) ≜ Σ_{R_{u,i}≠0} W_{u,i}(R_{u,i} − R̂_{u,i})² + λ_rel Σ_{T_{u,v}≠0} C_{u,v}(S_{u,v} − P_u^T H P_v)² + λ Ω(Θ).    (11)
The eSMF model is the Social MF component of MR3, which extends the LOCABAL model [Tang et al. 2013b] by capturing the graph structure of neighbors via trust values [Ma et al. 2008] representing the social influence locality [Zhang et al. 2013].
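As a small sketch, the trust values of Eq. (10), used by both MR3 and eSMF, can be computed from the in- and out-degrees of the trust network as follows; the adjacency-matrix representation is an assumption for illustration.

    # Eq. (10): trust values from degrees of a directed trust network,
    # where T[u, v] = 1 if user u trusts user v.
    import numpy as np

    def trust_values(T):
        d_out = T.sum(axis=1)             # d_u^+: number of users u trusts
        d_in = T.sum(axis=0)              # d_v^-: number of users trusting v
        C = np.zeros_like(T, dtype=float)
        for u, v in zip(*np.nonzero(T)):
            C[u, v] = d_in[v] / (d_out[u] + d_in[v])
        return C

    T = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]])             # toy trust network
    print(trust_values(T))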
AN EXTENSION OF THE PROPOSED MODEL
In this section, we incorporate the implicit feedback from ratings into the proposed model to enhance its capability and demonstrate its flexibility, leading to a solution to the following problem. We call it Problem 2; it requires mining the limited information more deeply.
Problem 2
Item ratings tell us how a user rated an item, i.e., an explicit rating score (e.g., from 1 to 5) indicating her degree of preference. Moreover, implicit feedback is always associated with these explicit rating scores, telling us which items the user rated. In detail, a binary matrix R^b can be constructed from the rating matrix, where an entry is one if the corresponding entry in the rating matrix is observed and zero if it is unseen. Users chose to indicate their preferences implicitly by casting a rating at all, apart from whether the rating is high or low. Put another way, users who have rated the same or similar items are more likely to have similar preferences than those who have not, in an a priori sense.
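For concreteness, a short sketch of constructing R^b from R; the toy matrix is illustrative.

    # Implicit binary feedback matrix from the rating matrix.
    import numpy as np

    R = np.array([[5, 0, 3],
                  [0, 4, 0]])        # toy rating matrix, 0 = unobserved
    R_b = (R != 0).astype(int)       # R^b_{u,j} = 1 iff user u rated item j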
With the above additional consideration, we extend the Problem 1 to reach the following problem.
Problem 2. Problem 1 with Implicit Feedback from Ratings.
Input: 1) a rating matrix R and its implicit matrix R^b, 2) a social network among users T, 3) a reviews collection along with the ratings D, 4) a user u in the user set P, and 5) an item i in the item set Q.
Output: the predicted preference of user u on item i, where u ∈ P and i ∈ Q.
Fig. 4. The Extension of the Proposed Model (MR3++). Shaded nodes are data and the others are parameters. The implicit feature matrix Y ∈ R^{F×N} is added to consider the influence of rated items. The total preference of a user for an item now decomposes into two parts: one part indicates the 'intrinsic preference' reflected in her latent feature vector, and the other shows the 'influence of her rated items' captured by their implicit feature vectors.
By Problem 2, we define the meaning of "mine the limited information more deeply" as "exploit the implicit feedback constructed from explicit ratings". That is, besides learning the latent user factors and item factors from ratings, we also learn an implicit feature vector for each item. As we will see in Eq. (12), the implicit feedback from ratings can be used as the prior preferences of users.
MR3++: An Extension of the Proposed Model Incorporating Implicit Feedback from Ratings
In Section 4.2, we introduced the solution to Problem 1. The solution is the model MR3 (see Eq. (9)), which exploits the three types of information sources simultaneously by connecting the Social MF approach and the Topic MF approach. In this subsection, we extend the proposed model, leading to a solution to Problem 2. The core idea of mining ratings deeply is to learn an extra implicit feature matrix Y ∈ R^{F×N} to consider the influence of rated items. Due to the sparseness of the data, users with few ratings will have latent features close to the average, so their predicted ratings will be close to the items' averages. Through the implicit features, a user's rated items have an a priori impact on her ratings of unseen items. In more detail, the total preference of user u for item i decomposes into two parts: one part indicates an 'intrinsic preference' reflected in her latent feature vector P_u, and the other shows the 'influence of her rated items' captured by their implicit feature vectors Y_j, where the index j runs over the set of the user's rated items. These ideas are shaped in the SVD++ model [Koren 2008] and the constrained PMF model [Mnih and Salakhutdinov 2007], where the predicted ratings are now computed by (rather than by Eq. (2))

    R̂*_{u,i} = P_u^T Q_i + |N_u|^{−1/α} (Σ_{j∈N_u} Y_j)^T Q_i + µ + b_u + b_i,    (12)

where N_u is the set of items rated by user u, i.e., N_u = {j : R^b_{u,j} = 1}, and Y_j is the implicit feature vector for item j. In the SVD++ model, α = 2; in the constrained PMF model, α = 1. We can see, for example, that if user u has rated the same items as user v, i.e., N_u = N_v, then in an a priori sense these two users are likely to have similar preferences.
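A minimal sketch of the predictor of Eq. (12) follows; with α = 2 it matches the SVD++ variant and with α = 1 the constrained PMF variant. The shapes follow the paper's notation, while the function signature and toy inputs are ours for illustration.

    # Eq. (12): rating predictor with implicit feedback from rated items.
    import numpy as np

    def predict_implicit(u, i, P, Q, Y, N_u, mu, b_u, b_i, alpha=2.0):
        F = P.shape[0]
        implicit = np.zeros(F)
        if len(N_u) > 0:                               # items rated by user u
            implicit = len(N_u) ** (-1.0 / alpha) * Y[:, N_u].sum(axis=1)
        return mu + b_u[u] + b_i[i] + (P[:, u] + implicit) @ Q[:, i]

    F, M, N = 8, 5, 6
    rng = np.random.default_rng(0)
    P = rng.normal(scale=0.1, size=(F, M))
    Q = rng.normal(scale=0.1, size=(F, N))
    Y = rng.normal(scale=0.1, size=(F, N))
    print(predict_implicit(0, 2, P, Q, Y, N_u=[0, 3, 5], mu=3.5,
                           b_u=np.zeros(M), b_i=np.zeros(N)))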
The extended model contains two components: one component is to combine three kinds of data sources (ratings, social relations, and reviews), i.e., the MR3 model; and another is to incorporate implicit feedback from ratings and hence mining the rating source more deeply. Hence we call the extended model MR3++, where '++' stands for extending the MR3 model by exploiting implicit feedback from ratings. MR3++ minimizes the following problem:
\mathcal{L}(\Theta^*, \Phi, z, \kappa) = \sum_{R_{u,i} \neq 0} W_{u,i} (R_{u,i} - \hat{R}^*_{u,i})^2 - \lambda_{rev} \sum_{d=1}^{N} \sum_{n \in N_d} \left( \log \theta_{d, z_{d,n}} + \log \phi_{z_{d,n}, w_{d,n}} \right) + \lambda_{rel} \sum_{T_{u,v} \neq 0} C_{u,v} (S_{u,v} - P_u^T H P_v)^2 + \lambda \Omega(\Theta^*),   (13)
where \hat{R}^*_{u,i} is given by Eq. (12) (rather than Eq. (2)), and \Omega(\Theta^*) = \|P\|^2_{Fro} + \|Q\|^2_{Fro} + \|H\|^2_{Fro} + \|Y\|^2_{Fro}. The dependencies among the parameter and data matrices in the extended model are depicted in Figure 4, where the implicit feature matrix is added compared with the original model.
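For illustration, a hedged sketch of evaluating the rating-fit, social-context, and Frobenius terms of Eq. (13) follows; the review log-likelihood term is deliberately omitted because it requires the topic assignments z, and all names as well as the bilinear stand-in predictor are hypothetical.

```python
import numpy as np

def mr3pp_objective_partial(R, W, T, C, S, P, Q, H, Y, pred, lam, lam_rel):
    """Rating, social, and lambda * Omega(Theta*) terms of Eq. (13)."""
    loss = sum(W[u, i] * (R[u, i] - pred(u, i)) ** 2
               for u, i in zip(*np.nonzero(R)))             # weighted rating error
    loss += lam_rel * sum(C[u, v] * (S[u, v] - P[:, u] @ H @ P[:, v]) ** 2
                          for u, v in zip(*np.nonzero(T)))  # local social context
    loss += lam * sum(np.sum(A ** 2) for A in (P, Q, H, Y)) # Frobenius penalty
    return loss

# Tiny usage with a plain bilinear predictor standing in for Eq. (12).
rng = np.random.default_rng(3)
F, M, N = 3, 4, 5
P, Q, H, Y = (rng.normal(0, 0.1, s) for s in [(F, M), (F, N), (F, F), (F, N)])
R = np.where(rng.random((M, N)) < 0.4, rng.uniform(1, 5, (M, N)), 0.0)
W = (R != 0).astype(float)
T = (rng.random((M, M)) < 0.3).astype(float); np.fill_diagonal(T, 0)
C, S = T.copy(), T * rng.random((M, M))
print(mr3pp_objective_partial(R, W, T, C, S, P, Q, H, Y,
                              pred=lambda u, i: P[:, u] @ Q[:, i],
                              lam=0.5, lam_rel=0.001))
```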
Implicit Feature Matrix Y. Note that the idea of learning the feature matrix Y originally comes (to the best of our knowledge) from the NSVD model [Paterek 2007] as a way of decreasing the number of parameters: there, Y replaces the latent user feature matrix P, so the parameter complexity drops from O(MK + NK) to O(NK) and does not depend on the number of users. Besides capturing implicit feedback, Y has the effect of learning the item-item similarity sim(i, j) = Y_j^T Q_i. The predicted rating is computed by \hat{R}_{u,i} = \sum_{j \in N_u} sim(i, j) + b_u + b_i. This method is modified in the FISM model [Kabbur et al. 2013] to exclude the known rating of a given user-item pair when estimating the similarity matrix; the estimated rating is computed as \hat{R}_{u,i} = \sum_{j \in N_u \setminus \{i\}} sim(i, j) + b_u + b_i.
To mine the implicit feedback from ratings, we adopt the SVD++ idea in our proposed model MR3++: we learn both the item-item similarity sim(i, j) = Y_j^T Q_i and the user-item similarity sim(u, i) = P_u^T Q_i. The predicted ratings are then computed by

\hat{R}_{u,i} = sim(u, i) + |N_u|^{-\frac{1}{2}} \sum_{j \in N_u \setminus \{i\}} sim(i, j) + \mu + b_u + b_i,

which coincides with Eq. (12). Combining with the FISM idea, we can also adopt the following rating predictor in our MR3++ model:

\hat{R}_{u,i} = sim(u, i) + (|N_u| - 1)^{-\alpha} \sum_{j \in N_u \setminus \{i\}} sim(i, j) + \mu + b_u + b_i,

where α is an adjustable hyperparameter between 0 and 1.
No matter which implicit predictor is used (NSVD, SVD++, or FISM), the idea of mining the limited data more deeply is worth keeping in mind, and it is the key point in extending the MR3 model to the MR3++ model.
MODEL LEARNING
We give the optimization algorithms to learn the models (namely MR3 and MR3++) proposed in the above two sections (MR3 in Section 4 and MR3++ in Section 5). Their learning processes are identical except for minor differences in the gradients of two parameters.
Learning Process
Our objective is to find

\arg\min_{\Theta, \Phi, z, \kappa} \mathcal{L}(\Theta, \Phi, z, \kappa).   (14)
Observe that the parameters Θ = {P, Q, H} and Φ = {θ, φ} are coupled through the transformation between Q and θ shown in Eq. (6) (the dotted line in Figure 3). The former parameters Θ are associated with ratings and social relations and can be found by gradient descent; the latter parameters Φ are associated with review text and can be found by Gibbs sampling [Griffiths and Steyvers 2004]. Similarly to the HFT (hidden factors and topics, one of the Topic MF methods) model [McAuley and Leskovec 2013], we design a procedure alternating between the following two steps:
update Θ^{new}, Φ^{new}, κ^{new} = \arg\min_{Θ, Φ, κ} \mathcal{L}(Θ, Φ, z^{old}, κ);   (15a)
sample z^{new}_{d,n} with probability p(z^{new}_{d,n} = f) = φ^{new}_{f, w_{d,n}}.   (15b)
For the first step, Eq. (15a), the topic assignments z_{d,n} for each word in the review corpus are fixed; we then update the terms Θ, Φ, and κ by gradient descent (GD). Recall that θ and Q depend on each other; we fit only Q and then determine θ by Eq. (6). This is the same as in standard gradient-based MF for recommendation [Mnih and Salakhutdinov 2007], except that we have to compute more gradients, which are given separately later.
For the second step, Eq. (15b), the parameters associated with the review corpus, θ and φ, are fixed; we then sample topic assignments z_{d,n} by iterating through all docs d and each word within, setting z_{d,n} = f with probability proportional to θ_{d,f} φ_{f,w_{d,n}}. This is similar to updating z via LDA [Blei et al. 2003], except that the topic proportions θ are not sampled from a Dirichlet prior but are determined in the first step through Q.
Finally, the two steps are repeated until a local optimum is reached. In practice, we sample topic assignments every 5 GD epochs, which we call a pass; usually 50 passes suffice to find a local minimum. The experimental settings are detailed in the experimental section.
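The sketch below illustrates the resampling step (15b) and the overall alternating schedule. resample_topics and the commented gd_epoch routine are hypothetical names; the sampler is the plain categorical draw described above, not the authors' exact implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def resample_topics(theta, phi, docs):
    """Step (15b): for every word w in doc d, draw a topic f with
    probability proportional to theta[d, f] * phi[f, w]."""
    z = []
    for d, words in enumerate(docs):
        probs = theta[d][:, None] * phi[:, words]       # shape (F, len(doc))
        probs /= probs.sum(axis=0, keepdims=True)
        z.append([rng.choice(phi.shape[0], p=probs[:, n])
                  for n in range(len(words))])
    return z

# Toy demo: F = 2 topics, vocabulary of 3 words, two short docs.
theta = np.array([[0.7, 0.3], [0.2, 0.8]])              # doc-topic proportions
phi = np.array([[0.5, 0.3, 0.2], [0.1, 0.1, 0.8]])      # topic-word distributions
print(resample_topics(theta, phi, docs=[[0, 2], [1]]))

# Alternating schedule (sketch); gd_epoch would run one GD epoch of step (15a):
# for p in range(50):                  # ~50 passes reach a local optimum
#     for epoch in range(5):           # 5 GD epochs per pass
#         params = gd_epoch(params, z)
#     z = resample_topics(theta, phi, docs)
```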
For MR3++, the learning process is the same as that of MR3, except that MR3++ has more gradients to compute, and the gradients related to the implicit feature matrix Y have to be modified accordingly. The learning algorithm is summarized in Algorithm 1.
Gradients of the Parameters
We now give the gradients used in Eq. (15a). (Gradients of the biases are omitted; the rating mean is not fitted because ratings are centered before training.) The user-specific feature matrix P appears in three terms: the first is in the rating prediction, the second is in the local social context, and the third is in the Frobenius norm penalty.
\frac{1}{2} \frac{\partial \mathcal{L}}{\partial P_u} = \sum_{i: R_{u,i} \neq 0} W_{u,i} (\hat{R}_{u,i} - R_{u,i}) Q_i + \lambda P_u + \lambda_{rel} \sum_{v: T_{u,v} \neq 0} C_{u,v} (P_u^T H P_v - S_{u,v}) H P_v + \lambda_{rel} \sum_{v: T_{v,u} \neq 0} C_{v,u} (P_v^T H P_u - S_{v,u}) H^T P_v.   (16)
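As a sanity check on Eq. (16), the following NumPy sketch compares the analytic gradient of the rating and social terms with finite differences on a toy instance; biases and the review term are omitted, and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(2)
F, M, N = 3, 4, 6
P, Q, H = rng.normal(size=(F, M)), rng.normal(size=(F, N)), rng.normal(size=(F, F))
R = rng.integers(0, 2, (M, N)) * rng.uniform(1, 5, (M, N))   # sparse ratings
W = (R != 0).astype(float)
T = rng.integers(0, 2, (M, M)); np.fill_diagonal(T, 0)
C, S = T * rng.uniform(0.5, 1.5, (M, M)), T * rng.uniform(0, 1, (M, M))
lam, lam_rel = 0.5, 0.1

def loss(P):
    """Rating term + social term + Frobenius penalty on P (review term omitted)."""
    l = np.sum(W * (R - P.T @ Q) ** 2) + lam * np.sum(P ** 2)
    return l + lam_rel * np.sum(C * (S - P.T @ H @ P) ** 2)

def grad_Pu(P, u):
    """Eq. (16) gives (1/2) dL/dP_u; doubling yields the full gradient."""
    pred, social = P.T @ Q, P.T @ H @ P
    g = (W[u] * (pred[u] - R[u])) @ Q.T + lam * P[:, u]
    g += lam_rel * ((C[u] * (social[u] - S[u])) * (H @ P)).sum(axis=1)
    g += lam_rel * (H.T @ P) @ (C[:, u] * (social[:, u] - S[:, u]))
    return 2 * g

u, eps, num = 1, 1e-5, np.zeros(F)
for k in range(F):                       # central finite differences on P[:, u]
    Pp, Pm = P.copy(), P.copy()
    Pp[k, u] += eps; Pm[k, u] -= eps
    num[k] = (loss(Pp) - loss(Pm)) / (2 * eps)
print(np.allclose(num, grad_Pu(P, u), atol=1e-4))        # True: gradients agree
```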
The item-specific feature matrix Q appears in three terms: the first is in the rating prediction, the second is in the item reviews, and the third is in the Frobenius norm penalty.

\frac{\partial \mathcal{L}}{\partial Q_i} = 2 \sum_{u: R_{u,i} \neq 0} W_{u,i} (\hat{R}_{u,i} - R_{u,i}) P_u - \lambda_{rev} \kappa \left( M_i - \frac{m_i}{z_i} \exp(\kappa Q_i) \right) + 2 \lambda Q_i.   (17)
The social correlation matrix H appears in two terms: one is in the local social context, and the other is in the Frobenius norm penalty.

\frac{1}{2} \frac{\partial \mathcal{L}}{\partial H} = \lambda_{rel} \sum_{T_{u,v} \neq 0} C_{u,v} (P_u^T H P_v - S_{u,v}) P_u P_v^T + \lambda H.   (18)
The parameter κ controls the peakiness of the transformation between Q and θ shown in Eq. (6). The differentiation of L over κ follows the chain rule: L over θ, and then θ over κ.

\frac{\partial \mathcal{L}}{\partial \kappa} = -\lambda_{rev} \sum_{i,f} Q_{i,f} \left( M_{i,f} - \frac{m_i}{z_i} \exp(\kappa Q_{i,f}) \right).   (19)
Note that φ_f is a stochastic vector, so we optimize the corresponding unnormalized vector ψ_f and then set φ_{f,w} = \exp(\psi_{f,w}) / z_f. The unnormalized word distributions appear in the likelihood term of the review corpus.

\frac{\partial \mathcal{L}}{\partial \psi_{f,w}} = -\lambda_{rev} \left( N_{f,w} - \frac{n_f}{z_f} \exp(\psi_{f,w}) \right).   (20)
For MR3++, the gradients of Y and Q are given below; the others are the same. The gradients of the added implicit feature parameters Y_j are computed by

\frac{1}{2} \frac{\partial \mathcal{L}}{\partial Y_j} = \sum_{R_{u,i} \neq 0} W_{u,i} |N_u|^{-\frac{1}{2}} (\hat{R}^*_{u,i} - R_{u,i}) Q_i + \lambda Y_j, \quad \forall j \in N_u.   (21)
The gradients of the original parameters Q_i are now computed by (an extra term with respect to the implicit features Y is added to P_u, compared with Eq. (17)):

\frac{\partial \mathcal{L}}{\partial Q_i} = 2 \sum_{u: R_{u,i} \neq 0} W_{u,i} (\hat{R}^*_{u,i} - R_{u,i}) \left( P_u + |N_u|^{-\frac{1}{2}} \sum_{j \in N_u} Y_j \right) - \lambda_{rev} \kappa \left( M_i - \frac{m_i}{z_i} \exp(\kappa Q_i) \right) + 2 \lambda Q_i.   (22)
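A corresponding sketch of Eq. (21): every observed rating (u, i) sends the same weighted residual, times Q_i, to each implicit vector Y_j with j in N_u. The function below is an illustrative assumption (pred_star stands in for Eq. (12)), not the released code.

```python
import numpy as np

def grad_Y(R, W, pred_star, rated_by, Q, Y, lam):
    """(1/2) dL/dY_j per Eq. (21), accumulated over all observed ratings."""
    g = lam * Y.copy()                       # regularization term lambda * Y_j
    for u, i in zip(*np.nonzero(R)):
        N_u = rated_by[u]
        coeff = W[u, i] * len(N_u) ** -0.5 * (pred_star(u, i) - R[u, i])
        g[:, N_u] += coeff * Q[:, [i]]       # broadcast Q_i to all j in N_u
    return g
```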
EXPERIMENTS
In Section 4.2 and Section 5.2 above, we introduced the solutions to Problem 1 and Problem 2, respectively. The solution to Problem 1 is the proposed model MR3 (see Eq. (9)), which exploits all three types of information simultaneously. The solution to Problem 2 is the model MR3++ (see Eq. (13)), which extends the MR3 model by incorporating implicit feedback from ratings.
In this section, we first compare our proposed eSMF model with a state-of-the-art Social MF method to show the benefit of exploiting the graph structure of neighbors via trust values that capture social influence locality. Second, we demonstrate the effectiveness of the proposed model MR3 and the improvement of its extension MR3++ over various recommendation approaches. Further, we design experiments to assess the contribution of each data source to the proposed model and the impact of the implicit feedback, followed by a sensitivity analysis of our models with respect to three meta parameters.
Dataset and Metric
In this subsection, we first introduce the two datasets used to evaluate the recommendation performance, including the simple preprocessing and basic statistics. We then describe the performance metric and the evaluation protocol.

7.1.1. Datasets and Statistics. We evaluate our models on two datasets: Epinions and Ciao.5 They are both knowledge-sharing and review sites, in which users can rate items, connect to others, and review products (see Figure 1). We remove stop words6 and then select the top L = 8000 frequent words as the vocabulary; to reduce noise we retain users who rated more than three times and remove items that were rated only once or twice. The items indexed in the rating matrix are aligned to the documents in the doc-term matrix; that is, we aggregate all reviews of a particular item into one 'doc'. Statistics of the datasets are given in Table II. The rating matrices of both datasets are very sparse, and the average length of documents is short on Epinions.
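A minimal sketch of this preprocessing step, assuming whitespace tokenization; build_vocab and the toy reviews are hypothetical.

```python
from collections import Counter

def build_vocab(reviews, stop_words, L=8000):
    """Select the top-L frequent tokens after stop-word removal (Section 7.1.1)."""
    counts = Counter(tok for text in reviews
                     for tok in text.lower().split() if tok not in stop_words)
    return [w for w, _ in counts.most_common(L)]

vocab = build_vocab(["Great phone , great battery", "the battery died fast"],
                    stop_words={"the", ","}, L=8000)
print(vocab)   # e.g. ['great', 'battery', 'phone', 'died', 'fast']
```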
Note that the number of average words per item on Ciao is 42 times larger than on Epinions, and the social density on Ciao is 12 times higher than on Epinions. So Ciao contains richer and higher-quality information in social relations and reviews. Hence we expect social relations and reviews to contribute much more to the proposed model on the Ciao dataset. The quantitative results reported later (Section 7.4.4) are consistent with this observation.

7.1.2. Evaluation Protocol and Metric. There are three kinds of parameters to set. For optimization-related parameters, we use the mini-batch gradient descent method with momentum to optimize the objective functions. Following empirical rules (e.g., [Mnih and Salakhutdinov 2007]), we set momentum = 0.8, batchSize = #training / #numOfBatches, and learning rate = 0.0007; we randomly shuffle the training data prior to each epoch. For regularization-related parameters, we use norm regularization to avoid over-fitting; following the related literature (e.g., [Guo et al. 2015]), we set the norm penalty λ = 0.5. The model-related parameters are F, the number of latent factors; λ_rev, which controls the contribution from reviews; and λ_rel, which controls the contribution from social relations; see Section 7.5 for details.
We randomly select x% of the data as the training set and report the prediction performance on the remaining (100 - x)% as the test set, where x usually varies in {20, 50, 80, 90}. The reported results are averaged over five independent random selections, similar to 5-fold cross-validation. All compared methods follow the same evaluation protocol.
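The protocol can be sketched as follows; random_splits is a hypothetical helper, and model training is left as a placeholder.

```python
import numpy as np

def random_splits(n_ratings, x=0.8, repeats=5, seed=0):
    """Yield (train_idx, test_idx): x% train, (1-x)% test, five times over."""
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        idx = rng.permutation(n_ratings)
        cut = int(x * n_ratings)
        yield idx[:cut], idx[cut:]

rmses = []
for train_idx, test_idx in random_splits(1000, x=0.8):
    pass  # train on train_idx, evaluate RMSE on test_idx, append to rmses
# report np.mean(rmses) over the five independent runs
```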
The root-mean-square error (RMSE) for the rating prediction task is defined as

RMSE_T = \sqrt{ \sum_{(u,i) \in T} (R_{u,i} - \hat{R}_{u,i})^2 / |T| },   (23)
where T is the test set. Compared with the mean absolute error (MAE) [Herlocker et al. 1999], RMSE puts more emphasis on large deviations [Herlocker et al. 2004]. For instance, an error deviation of 2 points increases the total error sum by 4 under RMSE but only by 2 under MAE. A smaller RMSE or MAE means better prediction performance. A small improvement in RMSE can have a significant impact on the quality of recommendation; a ten percent RMSE improvement won the $1M Grand Prize [Bennett and Lanning 2007; Koren 2008].
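Eq. (23) and the RMSE-versus-MAE remark above translate directly into code; the function names are our own.

```python
import numpy as np

def rmse(r_true, r_pred):
    """Eq. (23): square root of the mean squared error over the test set T."""
    return np.sqrt(np.mean((np.asarray(r_true) - np.asarray(r_pred)) ** 2))

def mae(r_true, r_pred):
    return np.mean(np.abs(np.asarray(r_true) - np.asarray(r_pred)))

# A deviation of 2 points adds 4 to the squared-error sum but only 2 under MAE:
print(rmse([3, 3], [5, 3]), mae([3, 3], [5, 3]))   # ~1.414 vs 1.0
```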
There are other metrics used for evaluating recommendation systems. For example, in real-world systems we may care about a few top items for each user and hence use the metric recall@K to evaluate the performance of recommending the top K items to the target user. In this paper, we consider RMSE the better metric for our task of rating prediction: we care not only about the positive preference expressed by high ratings (around 5), which mean the user likes the item very much, but also about the negative preference expressed by low ratings (around 1), which mean the user does not. However, recall@K only cares about positive preference for actual recommendation. We focus on the RMSE results and also report the recall results briefly in Section 7.6.
For each user, we randomly reserve one rating for the validation set, randomly select 1 or 5 ratings for the training set, and put the rest into the test set. The former setting is called sparse and the latter dense. We report the test result for which the corresponding validation result is best.
Note that for the MR3++ model extended from the proposed model MR3, the predicted ratings \hat{R}_{u,i} (see Eq. (2)) should accordingly be replaced by \hat{R}^*_{u,i} (see Eq. (12)).
Comparing Social MF Methods
In this section, we first compare our proposed eSMF model with LOCABAL [Tang et al. 2013b], a state-of-the-art Social MF method, to show the benefit of exploiting the graph structure of neighbors. The motivation for the comparison is two-fold: 1) to demonstrate that exploiting ratings and social relations more tightly can further improve the performance of social RSs; and 2) to establish eSMF as an effective component of the proposed model MR3, which we evaluate in the following section (Section 7.3.1).
We use grid search to determine λ_rel, which controls the contribution from social relations, and report the best RMSE on the test set over 50 passes. The reported results are averaged over five independent random selections of the training set, similar to five-fold cross-validation. For both LOCABAL and eSMF, we get the best RMSE when λ_rel = 0.1. The parameters Θ = {P, Q, H} are randomly initialized from the normal distribution N(0, 0.01).
The results are shown in Figure 5 for training-set percentages in {20, 30, 40, 50, 60, 70, 80, 90, 99}, and we make the following observation:

- Exploiting ratings and social relations tightly (i.e., exploiting the graph structure of neighbors via trust values capturing social influence locality) can further improve recommendation performance in terms of RMSE on both datasets. For example, eSMF obtains 1.18%, 0.89%, and 0.72% relative improvement over the state-of-the-art Social MF method LOCABAL on Epinions with 20%, 50%, and 70% of the data as the training set, respectively.
Comparing Different Recommender Systems
In this section, we first compare the proposed model MR3 with different recommendation approaches to show the benefit of modeling three types of data sources simultaneously. We then demonstrate the improvement of its extension MR3++ to show the benefit of incorporating implicit feedback from ratings.

Mean. This method always predicts the rating using the average, i.e., µ in Eq. (2), across all training ratings. This is the best constant predictor in terms of RMSE.
PMF. This method performs matrix factorization on the rating matrix as shown in Eq. (1) [Mnih and Salakhutdinov 2007]. It is the representative of latent factors CF (see Figure 2(a)). It only uses the rating source.
LOCABAL. This method is based on matrix factorization and exploits local and global social context as shown in Eq. (7) [Tang et al. 2013b]. It is the representative of Social MF (see Figure 2(b)).7 It only uses ratings and relations.
HFT. This method combines latent factors in ratings with hidden topics in reviews as shown in Eq.(8) [McAuley and Leskovec 2013]. It is the representative of Topic MF (see Figure 2(c)). It only uses ratings and reviews.
HFT+LOCABAL. This is a simple hybrid of HFT and LOCABAL that linearly weights their results: \hat{R}_{u,i} = \alpha \hat{R}^{HFT}_{u,i} + (1 - \alpha) \hat{R}^{LOCABAL}_{u,i}, where α ∈ (0, 1).

7 Since we want to compare the performance of MR3 with state-of-the-art recommenders, we compare it with LOCABAL and not with our proposed eSMF in Table III. Instead, the comparison between eSMF and MR3 is conducted in Section 7.4.1.
We use the source code of PMF8 and HFT9 provided by their authors. The hyperparameter λ_rev, which controls the contribution from item reviews, is determined by grid search. For HFT, λ_rev = 0.1; for MR3, λ_rel = 0.001 and λ_rev = 0.05. More details about the sensitivity of MR3 to the meta parameters are discussed later (Section 7.5.1).
In our experiments, we leave out the comparison with HFT+LOCABAL, because the core idea of MR3 is the alignment between latent factors found by LOCABAL and hidden topics found by HFT to form a unified model. Theoretically, MR3 is more elegant than a linear combination, which lies outside our motivation; practically, we do not intend to show the superiority of MR3 over the linear combination, only that the unified model MR3 is better than its individual components.
The results of the comparison are summarized in Table III, with the percentage of the training set varying in {20, 50, 80, 90}, and we have the following observations.
- Exploiting social relations and reviews beyond ratings can both significantly improve recommendation performance in terms of RMSE on the two datasets. For example, HFT and LOCABAL obtain 4.95% and 5.60% relative improvement over PMF on Epinions, respectively, with 80% of the data as the training set.
- Our proposed model MR3 always achieves the best results. Compared with HFT and LOCABAL, MR3 gains on average 0.0466 and 0.0217 absolute RMSE improvement on Epinions and 0.0392 and 0.0165 on Ciao, respectively. The main reason is that MR3 jointly models all three types of information effectively. The contribution of each data source to MR3 is discussed in a later subsection (Section 7.4.1).
7.3.2. Comparing the Extension Model or MR3++ with Different Recommender Systems.
We then compare the extension of the proposed model, MR3++, introduced in Section 5.2 (see Eq. (13)), to show the benefit of incorporating implicit feedback from ratings.
PMF. This method was investigated in the above subsection (Section 7.3.1); we list it here so that the impact of implicit feedback from ratings can be seen by comparing it with the following SVD++ method.

SVD++. This method exploits implicit feedback from ratings in addition to performing matrix factorization on the rating matrix, as shown in Eq. (12) [Koren 2008]. Comparing it with PMF reveals the impact of implicit feedback from ratings.

MR3. This method was investigated in the above subsection (Section 7.3.1); we list it here to see the performance of its extension more clearly. It does not mine the rating source more deeply, i.e., it does not incorporate implicit feedback from ratings.

MR3++. This method extends the proposed model MR3 by incorporating implicit feedback from ratings.
We adopt the source code of SVD++ provided in the LibRec.net Java Recommender System Library. The hyperparameters λ_rev, which controls the contribution from item reviews, and λ_rel, which controls the contribution from social relations, are determined by grid search. For MR3++, λ_rel = 0.001 and λ_rev = 0.005. More details about the sensitivity of MR3++ to the meta parameters are discussed later.
The results10 of the comparison are summarized in Table IV, and we have the following observations.
- Integrating three data sources and exploiting implicit feedback can both improve the accuracy of rating prediction in terms of RMSE on the two datasets. For example, SVD++ and MR3 obtain 7.94% and 8.02% relative improvement over PMF on Epinions with 80% of the data as the training set, respectively.
- The extension model MR3++ almost always achieves slightly better results. Compared with SVD++ and MR3, MR3++ gains on average 5.94% and 0.31% relative RMSE improvement on Ciao. The main reason is that MR3++ jointly models all three types of information while also mining the rating source deeply. The contributions of the two components of MR3++ are discussed in the following subsection.
Contribution of Data Sources and Impact of Implicit Feedback
In this section, we first measure the contribution of item reviews and social relations in the proposed model MR3, and then measure the impact of implicit feedback from ratings in the extension model MR3++.

7.4.1. Contribution of Data Sources from Reviews and Social Relations. We have shown the effectiveness of integrating ratings with social relations and reviews in our proposed model MR3. We now investigate the contribution of each data source by eliminating social relations and reviews from MR3 in turn:

MR3\content: eliminating the impact of reviews by setting λ_rev = 0 in Eq. (9), which is equivalent to eSMF as shown in Eq. (11).

MR3\social: eliminating the impact of social relations by setting λ_rel = 0 in Eq. (9), which is equivalent to HFT as shown in Eq. (8).

MR3\content\social: eliminating the impact of both reviews and social relations by setting λ_rev = 0 and λ_rel = 0 in Eq. (9), which is equivalent to PMF as shown in Eq. (1).
The predictive results of MR3 and its three components on the Epinions dataset are shown in Figure 6. The performance degrades when either social relations or reviews are eliminated. In detail, MR3\content, MR3\social, and MR3\content\social reduce the relative RMSE performance on Epinions by 1.19%, 4.29%, and 7.99% on average, respectively, suggesting that both reviews and social relations contain essential information for recommendation.

7.4.2. Further Analysis of Contribution from Auxiliary Sources. We have shown that MR3\content\social degrades 7.99% of the total relative RMSE; intuitively, we want to know how the additional sources of information help MR3 improve recommendation. We address this issue from the richness perspective of social relations and reviews via contrastive analysis. In detail, we first collect the (user, item) pairs on which the PMF method gives the worst prediction while the MR3 method gives the best; we set the difference threshold between PMF and MR3 to 1. That is, given that the prediction error of PMF on (u, i) is e_1 = |\hat{R}^{PMF}_{u,i} - R_{u,i}| and that of MR3 is e_2 = |\hat{R}^{MR3}_{u,i} - R_{u,i}|, if e_1 - e_2 ≥ 1 then we keep this pair. We then calculate the average number of social relations among these users and the average number of review words among these items to measure the quality and richness of the additional information.
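A sketch of this pair-collection rule, assuming dense matrices of predictions from the two models; contrastive_pairs is a hypothetical name.

```python
import numpy as np

def contrastive_pairs(R, pred_pmf, pred_mr3, threshold=1.0):
    """Collect (u, i) pairs where PMF's error exceeds MR3's by >= threshold."""
    pairs = []
    for u, i in zip(*np.nonzero(R)):
        e1 = abs(pred_pmf[u, i] - R[u, i])   # PMF prediction error e_1
        e2 = abs(pred_mr3[u, i] - R[u, i])   # MR3 prediction error e_2
        if e1 - e2 >= threshold:
            pairs.append((u, i))
    return pairs
```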
With 80% training on the Epinions dataset, we collect a total of 4,382 such pairs, covering 3,627 users and 2,875 items; they represent 2.80%, 7.33%, and 3.88% of total ratings, total users, and total items, respectively. The average number of social relations among these users is 14.18 and the average number of review words among these items is 1129.23. Note that these two measures are much larger than the corresponding overall averages, which are 8.78 and 30.3 respectively (see Table II). These results intuitively confirm the significant contributions of the additional data sources.

7.4.3. Impact of Implicit Feedback from Ratings. We have investigated the contribution of each data source to the proposed model by eliminating the impact of social relations and reviews from MR3 in turn, showing that the extra data sources (item reviews and social relations) are both useful for improving recommendation performance in terms of RMSE. We now investigate the impact of implicit feedback from ratings in the extension model MR3++. The MR3++ model contains two kinds of components: one integrates three data sources (ratings, social relations, and item reviews), and the other mines a single data source deeply (incorporating implicit feedback from ratings). In detail, we investigate the performance of MR3++ by eliminating the impact of data sources and implicit feedback in turn:
MR3++\sources: eliminating the impact of the additional data sources (i.e., removing social relations and item reviews) by setting λ_rev = 0 and λ_rel = 0 in Eq. (13), which is equivalent to the SVD++ model as shown in Eq. (12).

MR3++\implicit: eliminating the impact of implicit feedback from ratings by setting Y_j = 0 in Eq. (13), which is equivalent to the MR3 model as shown in Eq. (9).

MR3++\sources\implicit: eliminating the impact of both the additional data sources and implicit feedback by setting λ_rev = 0, λ_rel = 0, and Y_j = 0 in Eq. (13), which is equivalent to the PMF model as shown in Eq. (1).
The predictive results of MR3++ and its two components on the Epinions and Ciao datasets are shown in Figure 7. The histograms show that the performance degrades somewhat when implicit feedback is eliminated on both datasets. For example, MR3++\implicit reduces the relative RMSE performance by 0.23% and 0.60% on the Epinions and Ciao datasets with 20% and 80% of the data as the training set, respectively.

7.4.4. Integrating More Data Sources vs. Mining Limited Data Deeply. Revisiting Table IV and Figure 7, we can see clearly that integrating more data sources improves performance by a bigger margin than mining a single data source deeply in some cases (here, on the Ciao dataset, the margin in average relative RMSE is greater than 5.61%), but not always (here, on the Epinions dataset, the margin in average relative RMSE is less than 0.23%; and when the training percentage is 90, i.e., the starred entry in Table IV, mining the rating source deeply even outperforms integrating the social relation and review data sources).
Consider the quality of social relations and reviews. As mentioned in Section 7.1.1 (see Table II), the number of average words per item on Ciao is 42 times larger than on Epinions, and the social density on Ciao is 12 times higher than on Epinions. So Ciao contains richer and higher-quality information in social relations and reviews. Intuitively and empirically, integrating more data sources may improve performance more than mining a single data source deeply when the extra data sources contain richer information.
The results on the Epinions dataset in Figure 7 are worth further thought. Removing either the implicit feedback component or the auxiliary sources component has little effect on rating prediction performance, but removing both has a big impact. This observation differs from the results in Figure 6 and even from the results on the Ciao dataset in Figure 7. One possible explanation is that the contributions of the two components are not linearly additive; different datasets show diverse interactions between implicit feedback and auxiliary sources.
Sensitivity to Meta Parameters: F, λ_rel, and λ_rev
We analyze the sensitivity of the proposed model and its extension to the three important hyperparameters: one controls the contribution from social relations, one controls the contribution from item reviews, and the third determines the dimensionality of the latent representations.

7.5.1. Sensitivity Analysis of the Proposed Model or MR3. The model MR3 has three important parameters: 1) the number of latent factors F; 2) the hyperparameter λ_rev, which controls the contribution from reviews; and 3) the hyperparameter λ_rel, which controls the contribution from social relations. We investigate the sensitivity of MR3 to these parameters by varying one of them while fixing the other two.
First, we fix F = 10 and study how the review-associated hyperparameter λ_rev and the social-relation-associated one λ_rel affect the overall performance of MR3. As shown in Figure 8, we make two observations: 1) the prediction performance degrades when either λ_rel = 0 or λ_rev = 0 (RMSE is 1.1502 when both are zero); 2) MR3 is relatively stable and not sensitive to λ_rel and λ_rev when they are small (e.g., from 0.0001 to 0.1), so we choose the reasonable values 0.001 and 0.05 for them, respectively.
Next, we fix λ_rel = 0.001 and λ_rev = 0.05 and vary the number of latent factors F ∈ {5, 10, 15, 20, 30, 50, 70, 100} with 20%, 50%, and 80% of the data as the training set. As shown in Figure 9, MR3 is relatively stable and not sensitive to F, so we choose the reasonable value 10 as the default.

7.5.2. Sensitivity Analysis of the Extension Model or MR3++. We repeat the above process to investigate the sensitivity of MR3++ to these three meta parameters.
First, we fix F = 10 and study how the review-associated parameter λ_rev and the social-relation-associated one λ_rel affect the overall performance of MR3++. As shown in Figure 10 (on the right), we make two observations: 1) the prediction performance degrades when either λ_rel = 0 or λ_rev = 0 (RMSE is 1.0139 when both are zero); 2) MR3++ is relatively stable and not sensitive to λ_rel and λ_rev when they are small (e.g., from 0.0001 to 0.01), so we choose the reasonable values 0.001 and 0.005 for them, respectively. Note that the length of the RMSE range interval is within 0.002.
Next, we fix λ_rel = 0.001 and λ_rev = 0.005 and vary the number of latent factors F ∈ {5, 10, 15, 20, 30, 50} with 20%, 50%, and 80% of the data as the training set. As shown in Figure 10 (on the left), MR3++ is relatively stable and not sensitive to F, so we choose the reasonable value 10 as the default.
Comparing Different Methods with Recall Metric
In this paper, we choose RMSE as the main metric for evaluation because we focus on rating prediction. However, recall is a more practical metric for real top-N recommender systems, as it cares more about positive preferences.
Inspired by top-N recommendation [Kabbur et al. 2013], we define the set of all 5-star ratings of a specific user u as positive instances, denoted G_u for the ground truth, with |G_u| the number of such ratings. For each user, we select the top K_u = |G_u| predicted ratings, denoted \hat{G}_u, for evaluation. The recall, or average hit rate, metric is then defined as

recall = \sum_u |G_u \cap \hat{G}_u| / \sum_u |G_u|.   (24)
Hit rate measures the performance of recalling the test items in the size-|G_u| recommendation list of user u. Conversely, if we take the ratings of 1 as negative instances, we can see the performance of predicting users' dislikes. The results of the comparison using the recall metric are summarized in Table V. The setting of the meta parameters is the same as in Section 7.3.1 and Section 7.3.2. MR3++ achieves a small improvement over HFT and LOCABAL on the two datasets, hitting on either rating 5 or rating 1. Overall, the MR3 and MR3++ models obtain competitive results. We consider it reasonable that our proposed methods do not achieve a significant improvement under the recall metric: HFT, LOCABAL, and our methods focus on producing closer predicted ratings rather than the best ranking of items. In other words, we define the problem as a regression task for rating prediction, which leads us naturally to the standard RMSE metric. The recall metric cares more about the exact match of top-N recommendations, where models are designed to pick out users' favorite items; such models put more weight on higher ratings in the training set, aiming at recalling the top preferred items. MR3 and MR3++, like HFT, are not designed for this; the emphasis is instead on close and precise predictions over the whole rating range from 1 to 5.
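Eq. (24) can be sketched as follows, assuming per-user score vectors; hit_recall and the toy data are our own illustrative assumptions.

```python
import numpy as np

def hit_recall(ground_truth, scores):
    """Eq. (24): take the top-|G_u| items by predicted score for each user u
    and count overlaps with the ground-truth set G_u."""
    hits, total = 0, 0
    for u, G_u in ground_truth.items():
        top = np.argsort(scores[u])[::-1][:len(G_u)]   # top-K_u predicted items
        hits += len(set(top) & set(G_u))
        total += len(G_u)
    return hits / total

gt = {0: [1, 3], 1: [0]}
scores = {0: np.array([0.1, 0.9, 0.2, 0.8]), 1: np.array([0.3, 0.7, 0.1, 0.0])}
print(hit_recall(gt, scores))   # user 0 hits {1, 3}, user 1 misses: 2/3
```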
Running Time Analysis
Practically, the running time of models is a great concern in real-world recommender systems. We conduct several rounds of experiments with the same settings: the number of latent factors F = 10, training percentage 80%, testing on the remaining 20%. We set learning rate = 0.0001 for all models and nEpoch = 5 in Algorithm 1, meaning we sample and update z_{d,n} every 5 epochs of updating Θ, Φ, κ, which is not required in the LOCABAL model.
We compare the training time and the predicting time, as shown in Table VI, and explain the results as follows. We measure the time cost on a PC with an Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz, 8G RAM, and GCC 5.3.0 installed. The time cost of SVD++ is not listed because we adopt the source code from the LibRec.net Java Recommender System Library, whose implementation is quite different from those of HFT, LOCABAL, MR3, and MR3++.
- As for the time of one training epoch, MR3 is close to HFT and LOCABAL while combining more sources of information. MR3++ involves large-scale implicit feedback, so it takes much longer to train through one epoch, especially on the Epinions dataset, which has more users and a more complicated social network. We also compare the total training time of the different methods. We adopt early stopping on the best validation result to avoid overfitting and count that time as the total. The total time cost of MR3++ is about ten times that of MR3, because of the roughly tenfold cost of each epoch. The total time of LOCABAL is extremely long, as it converges after many more iterations than the other methods, especially on the larger Epinions dataset.
- As for the predicting time, MR3 is almost the same as HFT and LOCABAL. MR3++ costs a little more for predicting because of the extra implicit feedback involved.
In short, we can conclude from Table VI that the empirical running time of our proposed models is acceptable compared with the other methods. Figure 11 shows the training process in terms of iterations and RMSE on the validation set. In most cases, these methods converge within about 150 iterations, while the validation RMSE of LOCABAL can still decrease after that. More specifically, about 500 iterations are required on Ciao and even more on Epinions with the conservatively small learning rate. From Figures 11(a) and 11(c), MR3 and MR3++ take only a few more iterations until convergence yet yield better performance. Interestingly, by Figures 11(b) and 11(d), MR3++ converges to a lower validation RMSE than MR3, since MR3++ takes a large amount of implicit feedback into account.
CONCLUSIONS AND FUTURE WORK
Heterogeneous recommendation information sources beyond explicit ratings, including social relations and item reviews, present both opportunities and challenges for conventional recommender systems. We investigated how to fuse these three kinds of information tightly and effectively for recommendation by aligning the topic and social latent factors. Furthermore, we mined the limited data source more deeply by incorporating implicit feedback from ratings.
We first proposed a novel model, MR3, which jointly models ratings, social relations, and item reviews by aligning latent factors and hidden topics, performing social matrix factorization and topic matrix factorization simultaneously for effective rating prediction. Moreover, an extended Social MF method, eSMF, was obtained by capturing the graph structure of neighbors via trust values to exploit the ratings and social relations more tightly. We then enhanced the proposed model by incorporating implicit feedback from ratings, resulting in the extension MR3++, to demonstrate its capability and flexibility. The core idea of mining ratings deeply is to learn an extra implicit feature matrix that accounts for the influence of rated items.
Empirical results on two real-world datasets demonstrated that our proposed models lead to improved predictive performance compared with various kinds of recommendation approaches. Furthermore, we designed experiments to understand the contribution of each data source and the impact of the implicit feedback from ratings; we also compared the contribution of integrating more data sources with the impact of mining the limited data source deeply from the perspective of the richness of the auxiliary information. We finally analyzed the sensitivity of our models to the three meta parameters. We focused on the rating prediction task in this paper and hence evaluated the performance of recommender systems under the RMSE metric; more metrics should be explored in future work [McNee et al. 2006].
Fig. 1. Three types of recommendation information sources. On the Ciao and Epinions datasets, there are three kinds of data sources, i.e., rating scores, item reviews, and social relations.
Fig. 2. Illustrations of the dependencies among data matrices and parameter matrices in three different recommendation approaches. (a) Latent factors CF exploits ratings; (b) Social MF integrates ratings with social relations; and (c) Topic MF integrates ratings with reviews text. Shaded nodes are data and others are parameters.
Fig. 3. Relationship among matrices of parameters and data of the proposed model (MR3). Shaded nodes are data (R: rating matrix, S: social similarity matrix, and D, embodied here by words w: doc-term matrix of reviews). Others are parameters (P: matrix of latent user factors, Q: matrix of latent item factors, H: social correlation matrix, θ: doc-topic distributions, and φ: topic-word distributions). The double connections between P and S are indicated by the term (S − P^T H P) in Eq. (3). The dotted line between Q and θ indicates their coupling through the fixed transformation shown in Eq. (6).
Besides the notations given in Table I, more notations are required [Griffiths and Steyvers 2004] to learn the models.

- For each item i (i.e., doc i, generated from all reviews commenting on this item): 1) M_i is an F-dimensional count vector in which each component is the number of times each topic occurs for it; 2) m_i is the number of words in it; and 3) z_i = \sum_f \exp(\kappa Q_{i,f}) is a normalizer.
- For each word w (in the prescribed word vocabulary): 1) N_w is an F-dimensional count vector in which each component is the number of times it has been assigned to each topic; 2) n_f is the number of times topic f occurs for it; and 3) z_f = \sum_w \exp(\psi_{f,w}) is a normalizer.

ALGORITHM 1: Learning process of the proposed models
Input: ratings R, reviews w, and relations T; number of latent factors F; maximum number of iterations maxIter; number of epochs nEpoch.
Output: user features P, item features Q, social relation matrix H, implicit features Y, topic proportions θ, and topic distributions φ.
  Pre-compute W, S, and C by Eq. (4.2), Eq. (4), and Eq. (10);
  initialize P, Q, H, Y randomly from N(0, 0.01);
  iter = 1;
  repeat
      for epoch = 1, ..., nEpoch do
          update Θ^{new}, Φ^{new}, κ^{new} = arg min_{Θ, Φ, κ} L(Θ, Φ, z^{old}, κ) by SGD using the gradients in Eqs. (16)-(22);
      end
      sample z^{new}_{d,n} with probability p(z^{new}_{d,n} = f) = φ^{new}_{f, w_{d,n}};
      iter = iter + 1;
  until convergence (e.g., iter > maxIter or |L_{iter} − L_{iter+1}| / |L_{iter}| < 10^{-6});
7.3.1. Comparing the Proposed Model or MR3 with Different Recommender Systems. We first compare the proposed model MR3 introduced in Section 4.2 (see Eq. (9) or Figure 3) with the following different types of recommendation approaches to show the benefit of modeling three data sources simultaneously:
Fig. 6. Predictive performance of MR3 compared with its three components. Left: Epinions; Right: Ciao. The figures are copied from [Hu et al. 2015].
Fig. 7. Predictive performance of MR3++ compared with its two components. Left: Epinions; Right: Ciao. The figures are copied from [Hu et al. 2015].
Fig. 9. Predictive performance of MR3 by varying the number of latent factors F. Fixing λ_rel = 0.001 and λ_rev = 0.05. Left: Epinions; Right: Ciao. The figures are copied from [Hu et al. 2015].
Fig. 10. Predictive performance of MR3++ by varying hyperparameters. Left: varying λ_rel and λ_rev with F = 10 fixed; RMSE is 1.0139 when both are zero; percent of training is 80%. Right: varying F with λ_rel = 0.001 and λ_rev = 0.005 fixed. Dataset: Ciao.
Fig. 11. Training process of all mentioned methods by iterations. Figures (a) and (b) are on the Epinions dataset, (c) and (d) on the Ciao dataset. (a) and (c) compare HFT, LOCABAL, and MR3 on the two datasets, while (b) and (d) compare MR3 with MR3++, respectively.
Table I. Notations

Symbol | Meaning | Form
M, N | the number of users, and of items | scalar
L | the size of the word vocabulary | scalar
P, Q | the set of users, and of items | set
N_d | the set of words in doc d | set
N_u | the set of items rated by user u | set
Table II. Statistics of the Datasets

Statistics | Epinions | Ciao | Total
# of Users | 49,454 | 7,340 | 56,794
# of Items | 74,154 | 22,472 | 96,626
# of Ratings/Reviews | 790,940 | 183,974 | 974,914
# of Social Relations | 434,680 | 112,942 | 547,622
# of Words | 2,246,837 | 28,874,000 | 31,120,837
Rating Density | 0.00022 | 0.0011 | -
Social Density | 0.00018 | 0.0021 | -
Average Words Per Item | 30.3 | 1284.9 | -
Average Relations Per User | 8.78 | 15.38 | -
Fig. 5. Comparisons of eSMF with a Social MF method on two datasets. Left: Epinions; Right: Ciao. The figures are copied from [Hu et al. 2015].
Table III. RMSE Comparisons of the Proposed Model MR3 with Different Methods (F = 10)

Dataset | Training | Mean | PMF | HFT | LOCABAL | MR3 | MR3 vs. PMF | MR3 vs. HFT | MR3 vs. LOCABAL
Epinions | 20% | 1.2265 | 1.2001 | 1.1857 | 1.1222 | 1.1051 | 8.60% | 7.29% | 1.55%
Epinions | 50% | 1.2239 | 1.1604 | 1.1323 | 1.1055 | 1.0809 | 7.35% | 4.76% | 2.28%
Epinions | 80% | 1.2225 | 1.1502 | 1.0960 | 1.0892 | 1.0648 | 8.02% | 2.93% | 2.29%
Epinions | 90% | 1.2187 | 1.1484 | 1.0867 | 1.0840 | 1.0634 | 7.99% | 2.19% | 1.94%
Ciao | 20% | 1.1095 | 1.0877 | 1.0439 | 1.0287 | 1.0142 | 7.25% | 2.93% | 1.43%
Ciao | 50% | 1.0964 | 1.0536 | 1.0379 | 0.9930 | 0.9740 | 8.17% | 6.56% | 1.95%
Ciao | 80% | 1.0899 | 1.0418 | 0.9958 | 0.9709 | 0.9521 | 9.42% | 4.59% | 1.97%
Ciao | 90% | 1.0841 | 1.0391 | 0.9644 | 0.9587 | 0.9451 | 9.95% | 2.04% | 1.44%
Average | | | | | | | 8.34% | 4.16% | 1.86%

This table is copied from [Hu et al. 2015].
Table IV. RMSE Comparisons of the Extended Model MR3++ with Different Methods (F = 10)

Dataset | Training | PMF | SVD++ | MR3 | MR3++
Epinions | 20% | 1.2001 | 1.1159 | 1.1051 | 1.1026
Epinions | 50% | 1.1604 | 1.0816 | 1.0809 | 1.0785
Epinions | 80% | 1.1502 | 1.0655 | 1.0648 | 1.0641
Epinions | 90% | 1.1484 | 1.0601* | 1.0634 | 1.0618
Ciao | 20% | 1.0877 | 1.0555 | 1.0142 | 1.0132
Ciao | 50% | 1.0536 | 1.0276 | 0.9740 | 0.9711
Ciao | 80% | 1.0418 | 1.0139 | 0.9521 | 0.9464
Ciao | 90% | 1.0391 | 1.0055 | 0.9451 | 0.9425

The columns of PMF and MR3 are copied from Table III to show the contributions of auxiliary sources and the impact of implicit feedback more clearly. Refer to Section 7.4.4 for the explanation of the starred entry.
Fig. 8. Predictive performance of MR3 by varying λ_rel and λ_rev. Both vary in {0, 0.001, 0.005, 0.01, 0.05, 0.1}. RMSE is 1.1502 when both are zero. Fixing F = 10. Percent of training set = 80. Dataset: Epinions. The figure is copied from [Hu et al. 2015].
Table V. Recall Comparisons of the Models MR3/MR3++ with Different Methods

Dataset | Ratings | HFT | LOCABAL | MR3 | MR3++
Epinions | 5 | 74.7127% | 73.9329% | 74.6411% | 74.6455%
Epinions | 1 | 59.8015% | 59.2960% | 60.1507% | 60.1232%
Ciao | 5 | 73.4361% | 73.9042% | 73.8051% | 74.0804%
Ciao | 1 | 52.3657% | 53.1330% | 53.1330% | 53.6445%
Table VI. Empirical Running Time (seconds) Comparison of the Proposed MR3/MR3++ with Different Methods

Dataset | Steps | HFT | LOCABAL | MR3 | MR3++
Epinions | Average Epoch | 1.4168 | 1.1251 | 1.549368 | 18.714360
Epinions | Total (best valid) | 77.8941 | 1746.1273 | 212.9282 | 2298.6840
Epinions | Predicting | 0.2483 | 0.2500 | 0.274319 | 2.117205
Ciao | Average Epoch | 0.3490 | 0.2656 | 0.416025 | 1.622635
Ciao | Total (best valid) | 431.0403 | 152.2076 | 601.7270 | 913.3129
Ciao | Predicting | 0.0469 | 0.0531 | 0.062847 | 0.098469
We organize these reviews as a document-term matrix w ∈ N^{N×L}, where the entry w_{d,n} is the occurrence of token n in doc d. For convenience, we also refer to the review as d_{u,i}.
We omit the term about the ratings, which is the same as that in Eq. (1), to show clearly how to exploit the social relations. 3 We aggregate all reviews of a particular item as a 'doc', so the indices of docs correspond to those of items.
The global social context W_{u,i} is omitted here for clarity, and given in Eq. (9) instead.
5 http://www.public.asu.edu/~jtang20/
6 http://www.ranks.nl/stopwords
8 http://www.cs.toronto.edu/~rsalakhu/
9 http://cseweb.ucsd.edu/~jmcauley/
10 The standard deviations are all less than 10^{-5}. Although the results in Table III and Table IV could be merged, we split them into the present two tables to show the contributions of auxiliary sources (i.e., the MR3 model) and the impact of implicit feedback (i.e., the MR3++ model) more clearly.
ACKNOWLEDGMENTS
This work is supported by the NSFC (61472183, 61333014) and the 863 program (2015AA015406). The authors thank Jiliang Tang for providing the datasets. The authors would also like to thank the reviewers for their profound comments and valuable suggestions to improve the quality of this paper.
REFERENCES
Gediminas Adomavicius and Alexander Tuzhilin. 2005. Toward the next generation of recommender systems: A survey of the state-of-the-art and possible extensions. IEEE Transactions on Knowledge and Data Engineering 17, 6 (2005), 734-749.
Y Bao, H Fang, and J Zhang. 2014a. Leveraging decomposed trust in probabilistic matrix factorization for effective recommendation. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI). 350.
Yang Bao, Hui Fang, and Jie Zhang. 2014b. TopicMF: Simultaneously exploiting ratings and reviews for recommendation. In AAAI. 2-8.
James Bennett and Stan Lanning. 2007. The Netflix Prize. In Proceedings of KDD Cup and Workshop. 35.
David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3 (2003), 993-1022.
John S Breese, David Heckerman, and Carl Kadie. 1998. Empirical analysis of predictive algorithms for collaborative filtering. In Proceedings of the Fourteenth Conference on Uncertainty in Artificial Intelligence. Morgan Kaufmann Publishers Inc., 43-52.
Allison JB Chaney, David M Blei, and Tina Eliassi-Rad. 2015. A probabilistic model for using social networks in personalized item recommendation. In Proceedings of the 9th ACM Conference on Recommender Systems. ACM, 43-50.
Chaochao Chen, Xiaolin Zheng, Yan Wang, Fuxing Hong, and Zhen Lin. 2014. Context-aware collaborative topic regression with social matrix factorization for recommender systems. In Twenty-Eighth AAAI Conference on Artificial Intelligence.
Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 193-202.
Xuetao Ding, Xiaoming Jin, Yujia Li, and Lianghao Li. 2013. Celebrity recommendation with collaborative social topic regression. In IJCAI.
Michael D Ekstrand, John T Riedl, and Joseph A Konstan. 2011. Collaborative filtering recommender systems. Foundations and Trends in Human-Computer Interaction 4, 2 (2011), 81-173.
Yi Fang and Luo Si. 2011. Matrix co-factorization for recommendation with rich side information and implicit feedback. In Proceedings of the 2nd International Workshop on Information Heterogeneity and Fusion in Recommender Systems. ACM, 65-69.
Gayatree Ganu, Noemie Elhadad, and Amélie Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In WebDB, Vol. 9. Citeseer, 1-6.
Prem K Gopalan, Laurent Charlin, and David Blei. 2014. Content-based recommendations with Poisson factorization. In Advances in Neural Information Processing Systems. 3176-3184.
Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences 101, suppl 1 (2004), 5228-5235.
Quanquan Gu, Jie Zhou, and Chris HQ Ding. 2010. Collaborative filtering: Weighted nonnegative matrix factorization incorporating user and item graphs. In SDM. SIAM, 199-210.
Guibing Guo, Jie Zhang, and Neil Yorke-Smith. 2015. TrustSVD: Collaborative filtering with both the explicit and implicit influence of user trust and of item ratings. In AAAI. 123-129.
Jonathan L Herlocker, Joseph A Konstan, Al Borchers, and John Riedl. 1999. An algorithmic framework for performing collaborative filtering. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 230-237.
Jonathan L Herlocker, Joseph A Konstan, Loren G Terveen, and John T Riedl. 2004. Evaluating collaborative filtering recommender systems. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 5-53.
Thomas Hofmann. 2004. Latent semantic models for collaborative filtering. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 89-115.
Guang-Neng Hu, Xin-Yu Dai, Yunya Song, Shu-Jian Huang, and Jia-Jun Chen. 2015. A synthetic approach for recommendation: Combining ratings, social relations, and reviews. In Twenty-Fourth International Joint Conference on Artificial Intelligence.
Yifan Hu, Yehuda Koren, and Chris Volinsky. 2008. Collaborative filtering for implicit feedback datasets. In Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on. IEEE, 263-272.
Niklas Jakob, Stefan Hagen Weber, Mark Christoph Müller, and Iryna Gurevych. 2009. Beyond the stars: Exploiting free-text user reviews to improve the accuracy of movie recommendations. In Proceedings of the 1st International CIKM Workshop on Topic-Sentiment Analysis for Mass Opinion. ACM, 57-64.
Mohsen Jamali and Martin Ester. 2011. A transitivity aware matrix factorization model for recommendation in social networks. In Twenty-Second International Joint Conference on Artificial Intelligence.
Santosh Kabbur, Xia Ning, and George Karypis. 2013. FISM: Factored item similarity models for top-N recommender systems. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 659-667.
Yehuda Koren. 2008. Factorization meets the neighborhood: A multifaceted collaborative filtering model. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 426-434.
Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 8 (2009), 30-37.
Yehuda Koren and Joe Sill. 2011. OrdRec: An ordinal model for predicting personalized item rating distributions. In Proceedings of the Fifth ACM Conference on Recommender Systems. ACM, 117-124.
Lei Li, Wei Peng, Saurabh Kataria, Tong Sun, and Tao Li. 2015. Recommending users and communities in social media. ACM Transactions on Knowledge Discovery from Data (TKDD) 10, 2 (2015), 17.
Greg Linden, Brent Smith, and Jeremy York. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing 7, 1 (2003), 76-80.
Guang Ling, Michael R Lyu, and Irwin King. 2014. Ratings meet reviews, a combined approach to recommend. In Proceedings of the 8th ACM Conference on Recommender Systems. ACM, 105-112.
Nathan N Liu, Evan W Xiang, Min Zhao, and Qiang Yang. 2010. Unifying explicit and implicit feedback for collaborative filtering. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management. ACM, 1445-1448.
Hao Ma, Irwin King, and Michael R Lyu. 2009. Learning to recommend with social trust ensemble. In Proceedings of the 32nd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 203-210.
Hao Ma, Haixuan Yang, Michael R Lyu, and Irwin King. 2008. SoRec: Social recommendation using probabilistic matrix factorization. In Proceedings of the 17th ACM Conference on Information and Knowledge Management. ACM, 931-940.
Hao Ma, Dengyong Zhou, Chao Liu, Michael R Lyu, and Irwin King. 2011. Recommender systems with social regularization. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining. ACM, 287-296.
Paolo Massa and Paolo Avesani. 2007. Trust-aware recommender systems. In Proceedings of the 2007 ACM Conference on Recommender Systems. ACM, 17-24.
Julian McAuley and Jure Leskovec. 2013. Hidden factors and hidden topics: Understanding rating dimensions with review text. In Proceedings of the 7th ACM Conference on Recommender Systems. ACM, 165-172.
Sean M McNee, John Riedl, and Joseph A Konstan. 2006. Being accurate is not enough: How accuracy metrics have hurt recommender systems. In CHI'06 Extended Abstracts on Human Factors in Computing Systems. ACM, 1097-1101.
Andriy Mnih and Ruslan Salakhutdinov. 2007. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems. 1257-1264.
Rong Pan, Yunhong Zhou, Bin Cao, Nathan N Liu, Rajan Lukose, Martin Scholz, and Qiang Yang. 2008. One-class collaborative filtering. In Data Mining, 2008. ICDM'08. Eighth IEEE International Conference on. IEEE, 502-511.
Improving regularized singular value decomposition for collaborative filtering. Arkadiusz Paterek, Proceedings of KDD cup and workshop. KDD cup and workshopArkadiusz Paterek. 2007. Improving regularized singular value decomposition for collaborative filtering. In Proceedings of KDD cup and workshop, Vol. 2007. 5-8.
Collaborative topic regression with social matrix factorization for recommendation systems. Sanjay Purushotham, Yan Liu, C-C Jay Kuo, Proceedings of the 29th International Conference on Machine Learning. the 29th International Conference on Machine LearningICML-12Sanjay Purushotham, Yan Liu, and C-C Jay Kuo. 2012. Collaborative topic regression with social matrix factorization for recommendation systems. In Proceedings of the 29th International Conference on Machine Learning (ICML-12). 759- 766.
Factorization machines. Steffen Rendle, IEEE 10th International Conference on. IEEE. Data Mining (ICDM)Steffen Rendle. 2010. Factorization machines. In Data Mining (ICDM), 2010 IEEE 10th International Conference on. IEEE, 995-1000.
Fast maximum margin matrix factorization for collaborative prediction. D M Jasson, Nathan Rennie, Srebro, Proceedings of the 22nd international conference on Machine learning. the 22nd international conference on Machine learningACMJasson DM Rennie and Nathan Srebro. 2005. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the 22nd international conference on Machine learning. ACM, 713-719.
Item-based collaborative filtering recommendation algorithms. Badrul Sarwar, George Karypis, Joseph Konstan, John Riedl, Proceedings of the 10th international conference on World Wide Web. the 10th international conference on World Wide WebACMBadrul Sarwar, George Karypis, Joseph Konstan, and John Riedl. 2001. Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web. ACM, 285-295.
Relational learning via collective matrix factorization. P Ajit, Geoffrey J Singh, Gordon, Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. the 14th ACM SIGKDD international conference on Knowledge discovery and data miningACMAjit P Singh and Geoffrey J Gordon. 2008. Relational learning via collective matrix factorization. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 650-658.
Exploiting homophily effect for trust prediction. Jiliang Tang, Huiji Gao, Xia Hu, Huan Liu, Proceedings of the sixth ACM international conference on Web search and data mining. the sixth ACM international conference on Web search and data miningACMJiliang Tang, Huiji Gao, Xia Hu, and Huan Liu. 2013a. Exploiting homophily effect for trust prediction. In Proceedings of the sixth ACM international conference on Web search and data mining. ACM, 53-62.
Exploiting Local and Global Social Context for Recommendation. Jiliang Tang, Xia Hu, Huiji Gao, Huan Liu, Jiliang Tang, Xia Hu, Huiji Gao, and Huan Liu. 2013b. Exploiting Local and Global Social Context for Recommendation..
IJCAI. In IJCAI. 264-269.
Collaborative topic modeling for recommending scientific articles. Chong Wang, M David, Blei, Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. the 17th ACM SIGKDD international conference on Knowledge discovery and data miningACMChong Wang and David M Blei. 2011. Collaborative topic modeling for recommending scientific articles. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 448-456.
Collaborative Topic Regression with Social Regularization for Tag Recommendation. Hao Wang, Binyi Chen, Wu-Jun Li, IJCAI. Hao Wang, Binyi Chen, and Wu-Jun Li. 2013. Collaborative Topic Regression with Social Regularization for Tag Recom- mendation.. In IJCAI.
Unifying user-based and item-based collaborative filtering approaches by similarity fusion. Jun Wang, Arjen P De Vries, Reinders, Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. the 29th annual international ACM SIGIR conference on Research and development in information retrievalACMJun Wang, Arjen P De Vries, and Marcel JT Reinders. 2006. Unifying user-based and item-based collaborative filtering approaches by similarity fusion. In Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 501-508.
Collaborative filtering incorporating review text and co-clusters of hidden user communities and item groups. Yinqing Xu, Wai Lam, Tianyi Lin, Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. the 23rd ACM International Conference on Conference on Information and Knowledge ManagementACMYinqing Xu, Wai Lam, and Tianyi Lin. 2014. Collaborative filtering incorporating review text and co-clusters of hidden user communities and item groups. In Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management. ACM, 251-260.
. Bo Yang, Yu Lei, Dayou Liu, Jiming Liu, Social Collaborative Filtering by Trust. Bo Yang, Yu Lei, Dayou Liu, and Jiming Liu. 2013. Social Collaborative Filtering by Trust. (2013), 2747-2753.
Social Influence Locality for Modeling Retweeting Behaviors. Jing Zhang, Biao Liu, Jie Tang, Ting Chen, Juanzi Li, IJCAI. 13Jing Zhang, Biao Liu, Jie Tang, Ting Chen, and Juanzi Li. 2013. Social Influence Locality for Modeling Retweeting Behav- iors.. In IJCAI, Vol. 13. 2761-2767.
Combining content and link for classification using matrix factorization. Shenghuo Zhu, Kai Yu, Yun Chi, Yihong Gong, Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. the 30th annual international ACM SIGIR conference on Research and development in information retrievalACMShenghuo Zhu, Kai Yu, Yun Chi, and Yihong Gong. 2007. Combining content and link for classification using matrix factorization. In Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval. ACM, 487-494.
| [] |
OCHADAI at SemEval-2022 Task 2: Adversarial Training for Multilingual Idiomaticity Detection

Lis Kanashiro Pereira (kanashiro.pereira@ocha.ac.jp) and Ichiro Kobayashi
Ochanomizu University

Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), July 14-15, 2022
DOI: 10.18653/v1/2022.semeval-1.27
https://www.aclanthology.org/2022.semeval-1.27.pdf
Abstract

We propose a multilingual adversarial training model for determining whether a sentence contains an idiomatic expression. Given that a key challenge with this task is the limited size of annotated data, our model relies on pre-trained contextual representations from different multilingual state-of-the-art transformer-based language models (i.e., multilingual BERT and XLM-RoBERTa) and on adversarial training, a training method for further enhancing model generalization and robustness. Without relying on any human-crafted features, knowledge bases, or additional datasets other than the target datasets, our model achieved competitive results and ranked 6th in the SubTask A (zero-shot) setting and 15th in the SubTask A (one-shot) setting.
Introduction
Large-scale pre-trained language models such as BERT (Devlin et al., 2019) have achieved great success in a wide range of natural language processing (NLP) tasks. However, more recent studies show that even such contextual models have a limited ability to capture idiomaticity (Garcia et al., 2021). Idiomatic expressions denote groups of words that behave, to some extent, as single words. Their linguistic behavior cannot be inferred from the characteristics of their components, and they still pose a challenge to NLP systems. This paper describes the system developed by the OCHADAI team for SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding (Tayyar Madabushi et al., 2022). Given that a key challenge in this task is the limited size of annotated data, we follow best practices from recent work on enhancing model generalization and robustness and propose a model ensemble that leverages multilingual pre-trained representations and adversarial training. Our model ranked 6th on SubTask A (zero-shot) and 15th on SubTask A (one-shot).
Task Description
SemEval-2022 Task 2 SubTask A is a binary classification task that requires classifying sentences containing a target multiword expression (MWE) as either "idiomatic" or "literal", across English, Portuguese and Galician (Tayyar Madabushi et al., 2021). It is further subdivided into two settings designed to test the models' ability to generalize: zero-shot and one-shot. In the "zero-shot" setting, the multiword expressions (potentially idiomatic phrases) in the training set are completely disjoint from those in the test and development sets. In the "one-shot" setting, one positive and one negative training example are included for each MWE in the test and development sets. Note that in both settings the actual examples in the training data are different from those in the test and development sets. Only the datasets provided by the organizers may be used to train the models. Participants can use only the data provided for the zero-shot setting to train the zero-shot model, but were allowed to use the data provided for both settings to train models in the one-shot setting. The statistics of the corpus are presented in Table 1. Our team submitted results for both settings; the next section gives an overview of our model.
Table 2: Example sentences and labels for SubTask A. Note that "idiomatic" is assigned the label 0 in the dataset and "non-idiomatic" (including proper nouns) is assigned the label 1.

Setting: zero-shot | Language: English | Target MWE: love song | Label: 1
Sentence: This song is about unconditionally supporting someone you love. This is a love song. Let's be there for each other.

Setting: one-shot | Language: Portuguese | Target MWE: ovelha negra ('black sheep') | Label: 0
Sentence: Estamos honrando o teto e construindo as paredes", afirmou. Em outro momento, voltando ao tema do impasse do Orçamento, ele reforçou que a busca é por uma solução, que a briga tem "mérito" e que "sempre, em grande rebanho, tem uma ovelha negra". "Mas não é o Congresso nem o grosso do ministério", completou.

Setting: one-shot | Language: Galician | Target MWE: xente nova ('young people') | Label: 0
Sentence: Non podemos abandonalos á súa sorte, porque sen mozos e mozas no campo e sen a súa actividade agraria suporía, entre outras cousas, a deslocalización da produción e a dependencia alimentaria Máis alá das políticas comunitarias, ¿cres que poden desenvolverse alternativas para a xente nova a partir de medidas municipais, autonómicas ou estatais? Estase a facer?
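A minimal sketch of how rows of this kind can be represented when preparing the data for training; the field names and the file name are hypothetical, not the official release format:

import csv
from dataclasses import dataclass

@dataclass
class Example:
    setting: str    # "zero_shot" or "one_shot"
    language: str   # "EN", "PT" or "GL"
    sentence: str   # the full context sentence(s)
    mwe: str        # the target multiword expression
    label: int      # 0 = idiomatic, 1 = non-idiomatic

def load_examples(path="subtask_a_train.csv"):  # hypothetical file name
    with open(path, newline="", encoding="utf-8") as f:
        return [Example(r["setting"], r["language"], r["sentence"],
                        r["mwe"], int(r["label"]))
                for r in csv.DictReader(f)]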
System Overview
We focus on exploring different training techniques using BERT and RoBERTa, given their superior performance on a wide range of NLP tasks. Each text encoder and training method used in our model is detailed below.

Text Encoders

M-BERT (Devlin et al., 2019): We use the M-BERT BASE model released by the authors. It is pre-trained on the 104 languages with the largest Wikipedias using a masked language modeling (MLM) objective. This model is case-sensitive: it makes a difference between english and English.

XLM-R (Conneau et al., 2019): XLM-RoBERTa (XLM-R) is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. XLM-R has been shown to perform particularly well on low-resource languages, such as Swahili and Urdu. We use the XLM-R LARGE model released by the authors.

Training Procedures

Standard fine-tuning: This is the standard fine-tuning procedure, in which we fine-tune BERT and RoBERTa on the training data of each setting.

Adversarial training (ADV): Adversarial training has proven effective in improving model generalization and robustness in computer vision (Madry et al., 2017; Goodfellow et al., 2014) and more recently in NLP (Zhu et al., 2019; Liu et al., 2020a; Pereira et al., 2020). It works by augmenting the input with a small perturbation that maximizes the adversarial loss:

min_θ E_(x,y)∼D [ max_δ ℓ(f(x + δ; θ), y) ]    (1)

where the inner maximization can be solved by projected gradient descent (Madry et al., 2017). In our experiments, we use SMART (Jiang et al., 2019), which instead regularizes the standard training objective using virtual adversarial training (Miyato et al., 2018):

min_θ E_(x,y)∼D [ ℓ(f(x; θ), y) + α max_δ ℓ(f(x + δ; θ), f(x; θ)) ]    (2)

Effectively, the adversarial term encourages smoothness in the input neighborhood, and α is a hyperparameter that controls the trade-off between the standard error and the adversarial error.
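A minimal PyTorch sketch of one virtual adversarial step in the spirit of Equation (2); it assumes a HuggingFace-style classifier that accepts inputs_embeds, and it simplifies the actual SMART/MT-DNN implementation:

import torch
import torch.nn.functional as F

def virtual_adv_loss(model, embeds, attn_mask,
                     eps=1e-5, step_size=1e-3, noise_var=1e-5):
    # Predictions on the clean input, used as the reference distribution.
    with torch.no_grad():
        clean = model(inputs_embeds=embeds, attention_mask=attn_mask).logits
    # Random initial perturbation of the input embeddings.
    delta = torch.randn_like(embeds) * noise_var
    delta.requires_grad_()
    adv = model(inputs_embeds=embeds + delta, attention_mask=attn_mask).logits
    loss = F.kl_div(F.log_softmax(adv, dim=-1),
                    F.softmax(clean, dim=-1), reduction="batchmean")
    # One projected-gradient step: move delta towards maximizing the divergence.
    (grad,) = torch.autograd.grad(loss, delta)
    delta = (delta + step_size * grad / (grad.norm() + 1e-12)).clamp(-eps, eps).detach()
    adv = model(inputs_embeds=embeds + delta, attention_mask=attn_mask).logits
    # This regularizer is added to the task loss, weighted by alpha (here alpha = 1).
    return F.kl_div(F.log_softmax(adv, dim=-1),
                    F.softmax(clean, dim=-1), reduction="batchmean")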
Experiments
Implementation Details
Our model implementation is based on the MT-DNN framework (Liu et al., 2019a, 2020b). We use BERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2019) as the text encoders. We used ADAM (Kingma and Ba, 2015) as our optimizer, with a learning rate ∈ {8 × 10^-6, 9 × 10^-6, 1 × 10^-5} and a batch size ∈ {8, 16, 32}. The maximum number of epochs was set to 10. A linear learning rate decay schedule with warmup over 0.1 was used, unless stated otherwise. To avoid gradient explosion, we clipped the gradient norm to 1. All texts were tokenized into wordpieces and truncated to spans no longer than 512 tokens. During adversarial training, we follow SMART (Jiang et al., 2019) and set the perturbation size to 1 × 10^-5, the step size to 1 × 10^-3, and the variance for initializing the perturbation to 1 × 10^-5. The number of projected gradient steps and the α parameter (Equation 2) were both set to 1.
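The same search space can be summarized as a small configuration sketch; the values are taken from the text, while the dictionary structure itself is purely illustrative:

SEARCH_SPACE = {
    "learning_rate": [8e-6, 9e-6, 1e-5],
    "batch_size": [8, 16, 32],
}
FIXED = {
    "optimizer": "adam",
    "max_epochs": 10,
    "warmup_proportion": 0.1,
    "grad_clip_norm": 1.0,
    "max_seq_length": 512,
}
ADV = {
    "perturbation_size": 1e-5,  # epsilon
    "step_size": 1e-3,
    "init_noise_variance": 1e-5,
    "pgd_steps": 1,
    "alpha": 1.0,               # weight of the adversarial term in Eq. (2)
}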
Following Devlin et al. (2019) and Liu et al. (2019b), the first token of the input is the [CLS] token for BERT and the <s> token for RoBERTa. We separate the input sentence and the target expression with the special token [SEP] for BERT and </s> for RoBERTa, e.g.:

[CLS] Ben Salmon is a committed night owl with an undying devotion to discovering new music. He lives in the great state of Oregon, where he hosts a killer radio show and obsesses about Kentucky basketball from afar. [SEP] night owl [SEP]
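This pair encoding can be reproduced with the HuggingFace tokenizers; a small sketch (the checkpoint name is the standard public one, not necessarily the exact checkpoint used by the authors):

from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
enc = tok("Ben Salmon is a committed night owl ...",  # context sentence
          "night owl",                                # target MWE
          truncation=True, max_length=512)
# The encoded sequence starts with [CLS]; the sentence and the MWE
# are separated (and terminated) by [SEP] tokens.
print(tok.convert_ids_to_tokens(enc["input_ids"]))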
For both settings (zero-shot and one-shot), we used the development set released by the organizers to tune the model's hyperparameters.
Main Results
Submitted systems were evaluated in terms of F1-score and ranked from highest to lowest F1-score. We built several models that use different text encoders and different training methods, as described in Section 3; see Table 3 for the results.
First, we observe that the models that use adversarial training obtained better performance overall, without using any additional knowledge source or any dataset other than the target task datasets. These results suggest that adversarial training leads to a more robust model that generalizes better to unseen data. For the zero-shot setting, the model that uses XLM-R as the text encoder together with adversarial training performed better than M-BERT on the development set, so we submitted this model's results on the test set. It obtained a test set F1-score of 0.7457 and ranked 6th among all participating systems. In the one-shot setting, on the other hand, M-BERT performed better than XLM-R on the development set; again, M-BERT with adversarial training performed better than vanilla fine-tuning. This model obtained an F1-score of 0.6573 on the test set and ranked 15th among all participating systems.
Conclusion
We proposed a simple and efficient model for multilingual idiomaticity detection. Our experiments demonstrated that it achieves competitive results in both the zero-shot and one-shot settings, without relying on any additional resource other than the target task dataset. Although in this paper we focused on the multilingual idiomaticity detection task, our model can be generalized to solve other downstream tasks as well, and we will explore this direction as future work.
Table 1: Summary of the SemEval-2022 Task 2 SubTask A dataset. Note that the dev, eval and test sets are used in both settings.
Table 3: Comparison of standard fine-tuning and adversarial training with the M-BERT and XLM-R encoders on SubTask A (zero-shot and one-shot settings).
Acknowledgements

We thank the reviewers for their helpful feedback. This work has been supported by the project KAKENHI ID: 21K17802.
References

Yong Cheng, Lu Jiang, and Wolfgang Macherey. 2019. Robust neural machine translation with doubly adversarial inputs.

Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. arXiv preprint arXiv:1911.02116.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding.

Marcos Garcia, Tiago Kramer Vieira, Carolina Scarton, Marco Idiart, and Aline Villavicencio. 2021. Probing for idiomaticity in vector space models. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics (ACL).

Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.

Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Tuo Zhao. 2019. SMART: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization. arXiv preprint arXiv:1911.03437.

Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. ICLR (Poster) 2015.

Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, and Jianfeng Gao. 2020a. Adversarial training for large neural language models. arXiv preprint arXiv:2004.08994.

Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. 2019a. Multi-task deep neural networks for natural language understanding. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4487-4496.

Xiaodong Liu, Yu Wang, Jianshu Ji, Hao Cheng, Xueyun Zhu, Emmanuel Awa, Pengcheng He, Weizhu Chen, Hoifung Poon, Guihong Cao, and Jianfeng Gao. 2020b. The Microsoft toolkit of multi-task deep neural networks for natural language understanding. arXiv preprint arXiv:2002.07972.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019b. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.

Takeru Miyato, Shin-ichi Maeda, Masanori Koyama, and Shin Ishii. 2018. Virtual adversarial training: a regularization method for supervised and semi-supervised learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 41(8):1979-1993.

Lis Pereira, Xiaodong Liu, Fei Cheng, Masayuki Asahara, and Ichiro Kobayashi. 2020. Adversarial training for commonsense inference. arXiv preprint arXiv:2005.08156.

Harish Tayyar Madabushi, Edward Gow-Smith, Marcos Garcia, Carolina Scarton, Marco Idiart, and Aline Villavicencio. 2022. SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022). Association for Computational Linguistics.

Harish Tayyar Madabushi, Edward Gow-Smith, Carolina Scarton, and Aline Villavicencio. 2021. AStitchInLanguageModels: Dataset and methods for the exploration of idiomaticity in pre-trained language models. In Findings of the Association for Computational Linguistics: EMNLP 2021, Punta Cana, Dominican Republic. Association for Computational Linguistics.

Chen Zhu, Yu Cheng, Zhe Gan, Siqi Sun, Thomas Goldstein, and Jingjing Liu. 2019. FreeLB: Enhanced adversarial training for language understanding. arXiv preprint arXiv:1909.11764.
Langage et cognition spatiale [Language and Spatial Cognition]

Michel Aurnague, Laure Vieu, Andrée Borillo
Paris: Masson, 1997
https://arxiv.org/pdf/1003.4894v1.pdf
Introduction
In this chapter, we hypothesize that a systematic study of the semantics of the spatial markers of language makes it possible to bring out certain fundamental properties and concepts characterizing conceptual representations of space. We propose a formal system accounting for the properties revealed by the linguistic analyses, and we use these tools to represent the semantic content of several spatial relations of French. The choice of a formal representation system, in this case an axiomatic theory within predicate logic, serves two purposes. On the one hand, we consider that a semantics must take into account the deductive dimension of natural language communication and, from this point of view, logic proves particularly well suited to formalizing reasoning. On the other hand, owing to its explicit character, a formal representation provides an adequate theoretical basis for building a system for the automatic understanding of spatial expressions in language.
Using language as an empirical basis for accessing certain types of conceptual representations, and taking deductive and inferential aspects into account, clearly place this work within a cognitive perspective.
The first part of the chapter sets out the main properties revealed by the semantic analysis of the expression of space in French. These properties are so many constraints that the formal representations we develop will have to satisfy. We then present the various components of the formal representation system: the theoretical foundations of a cognitive or commonsense geometry are first sketched (second part), and then various functional and pragmatic concepts needed for handling linguistic space are introduced (third and fourth parts). We endeavor to show how these concepts make it possible to represent the semantic content of certain French prepositions (sur, dans, devant...) and to illustrate the inferential adequacy of these representations.

1 Some properties of linguistic space

1.1 How does localization operate?
Natural languages are generally very rich in lexical items specifying the localization of physical entities in space, a richness not only in number but also in the diversity of the lexical categories involved. In French, for example, all lexical categories take part in the expression of spatial relations:
• Certain nouns and adjectives characterize spatial dimensions. They take their value within a triaxial system (vertical, frontal, lateral) grounded in physical features of the world: gravity and verticality, ground level and horizontality... (longueur, largeur, hauteur, profond, élevé, étroit). Others integrate perceptual and orientational data supplied by the discourse situation: the canonical position of the speaker, the definition of reference points, the choice of an egocentric evaluation (droite, gauche, avant, arrière, loin, proche). A particular category of nouns and adjectives (so-called internal localization nouns) serves to refine the spatial reference of physical entities [11]. The portion of space occupied by a given entity can be divided into differentiated zones on the basis of the spatial relations that attach them to the whole, and hence link them to one another within that whole: le haut, le bas; le bord, le centre; l'intérieur, l'extérieur; (zone) supérieure, (partie) centrale... [4]. These nouns and adjectives, based on features of various kinds (dimensional, morphological, functional), make it possible to describe the spatial properties of entities with greater precision:
Le haut de l'armoire est décoré ('the top of the wardrobe is decorated'); la caisse a été endommagée dans sa partie supérieure ('the crate was damaged in its upper part').
• Other lexical categories, namely prepositions, verbs and adverbs, are more clearly specialized in the expression of spatial relations, which lie at the very basis of the principle of localization. These relations are static in character when the spatial referents, whether motionless or in motion, are considered punctually, in the position they occupy at a given instant:
La voiture est devant le camion ('the car is in front of the truck'); le deuxième cheval est loin derrière le premier ('the second horse is far behind the first'). They are dynamic in character if one or both of the referents move, the displacement inducing changes in their spatial relation:
La voiture passe devant le camion ('the car passes in front of the truck'); le deuxième cheval rattrape le premier ('the second horse catches up with the first'). In French, very few locative prepositions are specialized for a static or a dynamic use [12]. Most can be used both with a stative verb (être sur, se trouver à, se situer dans, être posé contre...) and with a verb of motion (aller à, monter sur, passer dans, buter contre...), so that the distinction can only be made in the presence of the verb with which the preposition is construed:
Le chat est couché / se précipite dans le jardin ('the cat is lying / rushes into the garden'); les spectateurs sont assis / déambulent autour de la scène ('the spectators are seated / stroll around the stage'). For verbs, by contrast, a rather clear-cut division holds between those denoting static spatial relations, namely stative verbs combined with locative prepositions (se trouver à, être placé dans, être posé sur, être appuyé contre...), and dynamic ones, which are constructed with or without a locative preposition (traverser, contourner, longer, entrer dans, passer sous, se glisser derrière...). However, this distinction between static and dynamic, which one would expect to rest on definite and stable properties, can sometimes only be drawn by looking at the utterance in which the verb occurs. One and the same verb receives a dynamic or, on the contrary, a static reading depending on the nature and properties of the sentence subject with which it is construed, the static reading being generally tied to factors of various kinds: the attribution of a metaphorical value, the action of perceptual factors, the effect of an egocentric treatment of space... Thus one can say: La souris / la glycine court le long du mur ('the mouse / the wisteria runs along the wall'); les promeneurs / les rochers descendent jusqu'à la mer ('the walkers / the rocks go down to the sea'); le train / le sentier s'enfonce dans la forêt ('the train / the path plunges into the forest'). The localization to which most prepositions and verbs contribute involves two referents (physical entities, places, more or less delimited portions of space) that are defined with respect to each other, a binary relation in which the target (cible) and the landmark (site) each find their function; but there are also a few cases of relational markers involving more than two referents:
• a ternary relation, expressed by prepositions such as entre, à l'intersection de, au confluent de, à équidistance de..., and by verbs such as séparer, relier, s'intercaler, s'interposer...: le radiateur est entre la porte et l'armoire ('the radiator is between the door and the wardrobe'); un mur sépare le jardin de la rue ('a wall separates the garden from the street');
• a relation affecting more than three entities, also expressed by entre but likewise by parmi and au milieu de (+ plural): prendre place parmi les invités ('to take a seat among the guests'); avancer au milieu des voitures ('to move forward amid the cars'). Clearly, beyond two referents, the spatial relation that a verb and/or a locative preposition can express loses much of its precision; it is at most a vague indication of the localization of one referent, the target, with respect to landmarks whose spatial properties are in no way distinctive (it is impossible to tell whether they are points, surfaces, or volumes).
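To attain the precision that certain binary relations can provide, it is necessary to accumulate information and thus to resort to more extensive and more complex utterances: La voiture est entre le camion et l'autobus ('the car is between the truck and the bus') can mean "in front of the truck and behind the bus" or "to the right of the truck and to the left of the bus"...

The imprecision of natural language: hindrance or advantage?

This imprecision, quite flagrant for relations other than binary ones, already raises a problem for many relations involving only two spatial referents. Of course, one can, if one wishes, be very precise and resort to stable coordinate systems and to numerical values fixing points on these coordinates (data obtained by measuring latitudes and longitudes, or degrees on the axes of terrestrial orientation...), but most of the time such very precise readings prove truly necessary only for scientific or technical purposes (aeronautics, navigation, meteorology...). For more everyday purposes, the expressions we can build from the lexical resources of the language make it possible to formulate utterances that are relatively clear and informative, and that most of the time meet the needs and goals of the communication situation as speaker and addressee conceive of it.

A predicate such as être à côté de ('to be next to'), être derrière ('to be behind') or être devant ('to be in front of'), taken in isolation, certainly yields a very vague and imprecise meaning. An utterance such as X est devant Y ('X is in front of Y') says neither at what distance X is from Y, nor how far it is offset from Y's frontal axis. However, if the physical entities involved are known, the distance can already settle on a certain scale of measurement. If one says le verre est devant la bouteille ('the glass is in front of the bottle'), one may imagine a distance of a few dozen centimeters, whereas for la voiture est devant l'immeuble ('the car is in front of the building') it will no doubt be evaluated in meters or tens of meters. For this last example, things can be made more precise still if, instead of the very general verb être, one uses a verb such as être garé or être rangé ('to be parked'): la voiture est rangée devant l'immeuble. In this case, our knowledge of the world helps us refine our appreciation of devant in terms of position (computed from the frontal dimensions of the landmark) and of distance (which we will here estimate at a few meters at most).

The same strategy of contextual relativization is at work with most locative prepositions of the relational type: La mouche tourne au-dessus de l'assiette ('the fly is circling above the plate') / l'avion tourne au-dessus de la ville ('the plane is circling above the city'); la cuillère est à côté du couteau ('the spoon is next to the knife') / Muret est à côté de Toulouse ('Muret is next to Toulouse'). While in cases such as these the imprecision can be reduced thanks to pragmatic data that we know how to integrate and bring to bear on interpretation, the fact remains that linguistic markers are rarely able to supply the information needed to determine without error the exact position of the referent to be localized. Even when one wants to report clear-cut positions and tries to pin them down linguistically with adverbs such as exactement, (tout) juste or carrément, it is not certain that the utterance is entirely free of ambiguity for its recipient.

If one says l'arrêt (de l'autobus) est juste devant la gare ('the (bus) stop is right in front of the station'), does one really mean that the stop lies exactly on the frontal axis drawn from the point standing for the center of the station? Nothing is less certain, and the addressee of the message would no doubt be wrong to take it literally. But linguistic communication, in its most ordinary and most plainly functional uses, does not require such rigorous precision. In most cases, the approximate indications we produce through the very numerous linguistic expressions at our disposal suffice to guide comprehension and to help construct a representation of the overall situation. Indeed, overly refined and precise indications could prove inefficient, for they would needlessly burden the message and might slow down and hinder the processes of production and interpretation. Among the Gricean principles of cooperation, one recalls the maxim of quantity: do not say more or less than is necessary for the message to be effective. It can be counterproductive to provide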
Le vase est posé sur l'étagère, dans l'angle de droite, à 10 cm du bord du côté droit, à 15 cm du bord avant et à 5 cm du mur qui est derrière ? Cependant, si le besoin d'une information plus stricte est ressentie par le locuteur ou par le destinataire, des indications complémentaires peuvent s'ajouter pour spécifier certains traits de la configuration situationnelle. A. Le titre en haut de la page B. Où ? à quel niveau ? A. A gauche, décalé de 1 cm par rapport à la marge et avec un espace de trois interlignes par rapport à la première ligne du texte. On notera que ces indications sont nécessairement d'ordre relationnel, quel que soit le degré de précision que l'on tente d'introduire : à 3 cm du haut de la page, à 3 cm du bord supérieur de la page et à 5 cm du bord latéral.
Defining spatial properties and the notion of point of view: the choice of a variable granularity
It seems natural to want to define spatial referents in terms of their basic dimensional properties, that is, to define them as points, lines, surfaces or volumes. However, the way we view these referents, and the kind of interest we take in them when we talk about them, means that we can consider them from different angles and hence temporarily ascribe different dimensional properties to them. Of one and the same object we can say cette boîte est carrée / ronde ('this box is square / round'), if we focus on a salient aspect of its shape, or cette boîte contient 2 kg de miel ('this box contains 2 kg of honey') if we are interested in its capacity. Likewise, we can speak of the sea or the sun as a surface: la mer est plate aujourd'hui ('the sea is flat today'), le disque solaire touche l'horizon ('the solar disk touches the horizon'); or we can describe a hill or a mountain by the shape of its contour line: des collines douces ('gentle hills'), un massif aux arêtes vives ('a massif with sharp ridges'), une chaîne très découpée ('a very jagged range').
In the limiting case, one and the same entity can be viewed successively as a point, a surface or a volume depending on the distance from which it is perceived, the situation in which it figures, and the function one is interested in. A city can be seen as a point, and represented as such on a map or in our mental representation when we imagine an overall view at a very small scale, but we can also talk about it as a surface: la ville s'étend sur un rayon de 10 km ('the city extends over a radius of 10 km'); la ville couvre une superficie de 100 km² ('the city covers an area of 100 km²').
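This idea of a dimensional point of view selected by context can be rendered schematically; a toy Python sketch in which the class names and the scale thresholds are purely illustrative:

from enum import Enum

class Dim(Enum):
    POINT = 0
    LINE = 1
    SURFACE = 2
    VOLUME = 3

def dimensional_view(entity_extent_m: float, view_scale_m: float) -> Dim:
    # The same city is a POINT on a country-wide map but a SURFACE
    # (or a VOLUME) when the viewing scale approaches its own extent.
    ratio = entity_extent_m / view_scale_m
    if ratio < 0.01:
        return Dim.POINT
    return Dim.SURFACE if ratio < 1 else Dim.VOLUME

print(dimensional_view(10_000, 5_000_000))  # city on a national map: Dim.POINT
print(dimensional_view(10_000, 20_000))     # city seen at its own scale: Dim.SURFACE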
Beyond geometry
It already appears from this analysis of the expression of spatial localization in French, as well as from a number of earlier works on the semantics of the linguistic markers of space (in particular [32, 8]), that the geometric notions present in language do not by themselves suffice to represent the semantics of space. The context of utterance is, as we have seen, a factor that cannot be neglected.
In addition, many phenomena linked to the so-called "functional" properties of entities must be taken into account. Indeed, describing the dimensions and shapes of certain entities and the geometric relations between these shapes, even in a contextualized way, is not enough to determine the applicability of a spatial expression. If, for example, the semantics of the preposition sur ('on') were represented on the basis of the contact relation alone, it would not be possible to distinguish the spatial configurations described by the following sentences:
L'affiche est sur le mur ('the poster is on the wall') (*contre le mur)
La planche est contre le mur ('the plank is against the wall') (*sur le mur)
In the same way, if inclusion of the target in the convex closure of the landmark completely described the preposition dans ('in'), one could not explain why the following sentence cannot be used to describe the situation corresponding to Figure 1 (an example inspired by [19]):
L'abeille est dans le vase ('the bee is in the vase')
It is because they leave aside functional concepts as fundamental as support and containment that the geometric definitions mentioned above fail to account for the semantics of the prepositions sur and dans. More generally, the semantics of the linguistic markers of space appeals extensively to the functional characteristics of entities and of the relations between them. These functional properties may belong to the field of "naive physics" (as defined in [18]), as in the case of support and containment; they may also fall within the domain of orientation, the functional properties linked to the use of entities playing a decisive role in the interpretation of expressions such as le haut de la bouteille ('the top of the bottle'), le devant de l'armoire ('the front of the wardrobe') or l'aile gauche de la voiture ('the left wing of the car'). Finally, there are a number of functional properties bearing on the internal structure of entities and specifying their continuous or discrete character, or again their structuring into parts, i.e., their composition. These last notions rest on an ontological classification of entities which is itself essentially functional in nature.
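The geometric side of the vase example can be checked with a small computational sketch using the shapely library: a point can lie inside the convex closure of a concave region without lying in the region itself, so convex-hull inclusion alone comes apart from what dans requires. The coordinates are invented, and whether a given position counts as "dans le vase" further depends on functional containment, which is precisely the point of this section:

from shapely.geometry import Point, Polygon

# A U-shaped "vase" seen in cross-section (invented coordinates).
vase = Polygon([(0, 0), (4, 0), (4, 4), (3, 4), (3, 1), (1, 1), (1, 4), (0, 4)])
bee = Point(2, 3)   # in the cavity, level with the rim

print(vase.contains(bee))              # False: not in the vase's material
print(vase.convex_hull.contains(bee))  # True: inside the convex closure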
These remarks, along with other observations of the same kind, have led us to adopt a three-level approach to the analysis and representation of the meaning of spatial expressions. We first consider a geometric level representing the objective space described by the analyzed text; this level constitutes the basis of the system. A functional level then takes into consideration the properties of the entities introduced by the text and the non-geometric relations between entities. Finally, at a pragmatic level, we introduce the conventions and principles underlying "good" communication [17], which rest mainly on information external to the text itself, such as context or world knowledge. Far from being independent, these three levels form a hierarchical structure: the second level introduces functional information grounded in geometric data (thus containment implies inclusion in the interior) and thereby makes it possible to represent the "raw" semantics of spatial expressions. The pragmatic level, for its part, adjusts the results obtained at the second level so as to adapt this semantics to the "actual" situation.
The elements manipulated at the different levels of our system are of distinct natures. Pragmatic and functional phenomena take entities into account with all their characteristics, properties and functions, whereas geometric relations are independent of color, substance, use... It is thus possible to distinguish several entities describing the same portion of space-time. For example, the water in the glass and the interior of the glass are quite different entities, yet they stand in the same geometric relations to other entities, so that they cannot be distinguished at the geometric level. The functional and pragmatic levels therefore deal directly with the entities introduced by the text, while the geometric level deals with the spatial referents of these entities, that is, the portions of space-time they determine. The elements of the geometric level will be called "individuals" or "bodies", and from Section 3 onwards they will be denoted by terms of the form sref(x), where x is an entity of the functional level, introduced by the text.
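The entity/referent distinction can be mirrored directly in a data model; a minimal Python sketch in which the class and variable names are ours, not the chapter's:

from dataclasses import dataclass

@dataclass(frozen=True)
class Entity:
    description: str              # functional-level object, with all its properties

class Region:                     # geometric-level individual ("body")
    pass

sref: dict[Entity, Region] = {}   # the mapping entity -> spatial referent

water = Entity("the water in the glass")
interior = Entity("the interior of the glass")
r = Region()                      # one and the same portion of space-time
sref[water] = r
sref[interior] = r

# Two distinct entities, geometrically indistinguishable:
assert water != interior and sref[water] is sref[interior]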
Sketch of a cognitive geometry
In the preceding section, we saw that language describes space in a relational way. A formal representation of the same kind therefore seems more natural than a Cartesian-style geometry in which individuals are located by coordinates. But it is not only more natural, it is also more effective. Indeed, the two other characteristics inherent to spatial expressions that we have described, their imprecision and the variability of "granularity", make the direct use of a space of points located by coordinates impossible. One would have to introduce sophisticated mechanisms for managing imprecise and incomplete information, as well as for interpreting dimensional points of view. A directly relational representation between extended individuals is not only closer to linguistic expression; it satisfies the above constraints practically from the outset, precision and granularity adjusting themselves to the richness of the description in terms of relations and individuals.
In this section we show how such a formal representation can be built on the basis of a new theory of space, since classical geometry has not developed this path. Even though Euclidean geometry is relational, it is founded on abstract elements, namely points, lines and planes, and cannot deal directly with extended individuals. Following the order of complexity attested by psychological studies on children's acquisition of spatial concepts [25], we first introduce the relations belonging to mereology and topology, such as inclusion and contact, then those pertaining to the domain of distance, and finally those concerning principles of orientation.
Mereology and topology
A small number of theories formalizing the concepts of topology over extended individuals have been proposed by logicians [34, 30, 15], with a current revival of interest in the field of qualitative spatial reasoning [28]. They are all based on, or inspired by, mereology, the theory of the inclusion relation free of the set-theoretic notion of membership [20, 29].
Notre théorie est une adaptation et extension de celle présentée dans [15]. Ce système méréo-topologique est construit à partir d'une primitive unique de "connexion", notée C. Deux individus sont connectés s'ils ont une partie en commun ou s'ils sont joints par une partie de leur surface. Le sens précis de cette relation est donné par les axiomes suivants :
La connexion est réflexive et symétrique :
Α1 ∀x C(x,x) A2 ∀x ∀y (C(x,y) → C(y,x))
Deux individus sont (spatialement) égaux lorsqu'ils sont connectés aux mêmes individus (extensionnalité) : A3 ∀x ∀y (∀z (C(z,x) ↔ C(z,y)) → x= s y) Plusieurs relations méréologiques peuvent alors être définies : D1 P(x,y) ≡ def ∀z (C(z,x) → C(z,y)) "x est une partie de (est inclus dans) y" D2 PP(x,y) ≡ def P(x,y) ∧ ¬P(y,x) "x est une partie propre de y" D3 O(x,y) ≡ def ∃z (P(z,x) ∧ P(z,y)) "x recouvre y"
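To make these definitions concrete, here is a minimal finite model (our own illustration, not part of the original theory): individuals are non-empty sets of unit cells of a grid, and C holds when two regions share a cell or sit edge to edge. In such a model, the quantified definition D1 collapses to cell containment.

```python
# A toy grid model of the connection-based mereology (hypothetical sketch).
NB4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def fringe(region):
    """Cells edge-adjacent to the region."""
    return frozenset((x + dx, y + dy) for (x, y) in region for (dx, dy) in NB4)

def C(x, y):
    """A1/A2: connection -- the regions share a cell or sit edge to edge."""
    return bool(x & y) or bool(fringe(x) & y)

def P(x, y):
    """D1, specialised to this model: inclusion reduces to cell containment."""
    return x <= y

def PP(x, y):   # D2: proper part
    return P(x, y) and not P(y, x)

def O(x, y):    # D3: overlap -- a common part exists
    return bool(x & y)

a = frozenset({(0, 0), (0, 1)})
b = frozenset({(0, 0), (0, 1), (1, 0)})
c = frozenset({(2, 0)})             # edge-adjacent to b, no shared cell
assert PP(a, b) and O(a, b)
assert C(b, c) and not O(b, c)      # externally connected (cf. D4 below)
```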
The relation of joining, also called "external connection", and the relations of tangential part and non-tangential part do not belong to mereology. Their definition, like that of the topological notions derived from them, was made possible by the choice of connection as the primitive, instead of the inclusion relation P often chosen in classical mereology:
D4 EC(x,y) ≡def C(x,y) ∧ ¬O(x,y) "x is externally connected to y"
D5 TP(x,y) ≡def P(x,y) ∧ ∃z (EC(z,x) ∧ EC(z,y)) "x is a tangential part of y"
D6 NTP(x,y) ≡def P(x,y) ∧ ¬∃z (EC(z,x) ∧ EC(z,y)) "x is a non-tangential part of y"
These relations can be depicted schematically (figure not reproduced here). Classical mereology, like Clarke's theory, uses a general fusion operator to form the "sum" of an arbitrary collection of individuals, mainly in order to define the Boolean operators of union, intersection and complement. It is in fact possible to introduce the Boolean operators directly by axioms, which has the important advantage of keeping the theory first-order:
A4 ∀x ∀y ∃z ∀u (C(u,z) ↔ (C(u,x) ∨ C(u,y)))
A4 and A3 imply, for all x and y, the existence and uniqueness of their sum, written x+y.
A5 ∃x ∀u C(u,x)
A5 and A3 imply the existence and uniqueness of a universal individual, written a*.
A6 ∀x (∃y ¬C(y,x) → ∃z ∀u (C(u,z) ↔ ∃v (¬C(v,x) ∧ C(v,u))))
A6 and A3 imply, for every x ≠ a*, the existence and uniqueness of its complement, written -x.
A7 ∀x ∀y (O(x,y) → ∃z ∀u (C(u,z) ↔ ∃v (P(v,x) ∧ P(v,y) ∧ C(v,u))))
A7 and A3 imply, for all overlapping x and y, the existence and uniqueness of their (non-empty) intersection, written x•y.
The purely topological concepts of interior, open individual and closure are defined by the axioms below:
A8 ∀x ∃y ∀u (C(u,y) ↔ ∃v (NTP(v,x) ∧ C(v,u)))
A8 and A3 imply, for every x, the existence and uniqueness of its interior, written ix.
D7 cx =def -i(-x)
So defined, the closure cx of x exists only if x ≠ a* (because of the presence of the complement operator). The operator c becomes a total function by adding A9:
A9 c(a*) = a*
D8 OP(x) ≡def x =s ix "x is open"
D9 CL(x) ≡def x =s cx "x is closed"
Other topological concepts, such as the connectedness of an individual, can also be introduced:
D10 Sp(x,y) ≡def ¬C(cx,cy) "x and y are separated"
D11 Con(x) ≡def ¬∃y ∃z (x =s y+z ∧ Sp(y,z)) "x is self-connected"
Finally, the following axiom is needed to finish specifying the notion of open individual:
A10 (OP(x) ∧ OP(y) ∧ O(x,y)) → OP(x•y)
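Continuing the toy grid model, interior and closure have direct discrete renderings (an illustrative approximation, not the authors' definitions; note that in this model no finite region is closed in the sense of D9, a limitation of the discretisation):

```python
def i_(region):
    """Interior: cells all four of whose edge-neighbours are in the region."""
    return frozenset(c for c in region
                     if all((c[0] + dx, c[1] + dy) in region for (dx, dy) in NB4))

def cl(region):
    """Closure: the region together with its edge-adjacent fringe."""
    return region | fringe(region)

def OP(x):   # D8: x is open (x =s ix)
    return x == i_(x)

def CL(x):   # D9: x is closed (x =s cx)
    return x == cl(x)
```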
It seems important to us to be able to establish a formal comparison between known mathematical theories and the one we propose following Clarke's work. We showed in [3] that the models of this theory are based on a "classical" topological space, and in particular that the concepts of open and closed do correspond to those of topology.
Even if the cognitive importance of these concepts is perhaps not clear at first sight, it seems safe to say that the individuals determined by material objects and the "portions of space" surrounding them (see section 3.1 below) are of different natures, in particular as regards their surfaces, and this can precisely be analysed in terms of closed and open individuals. Moreover, these concepts make it possible to define the notion of contact in all its variety. External connection can be regarded as a kind of "strong" contact which holds, for instance, between adjacent parts of one and the same connected entity: la main / le poignet ('the hand / the wrist'). One can further define "intermediate" contact, a contact without connection which is nevertheless total since nothing can be slipped between the two individuals, as between a glass and its inside, as well as "weak" contact, a contact between two non-connected material objects, as between a book and the table it lies on, probably the prototype of contact.
D12 ICont(x,y) ≡def ¬C(x,y) ∧ C(cx,cy) "x and y are in intermediate contact"
D13 WCont(x,y) ≡def ¬C(cx,cy) ∧ ∀z ((P(x,z) ∧ OP(z)) → C(cz,y)) "x and y are in weak contact, x and y touch each other"
This last type of contact, although the most common, turns out to be the most complex, since it depends on the level of granularity of the description. A modal approach to this notion of granularity is developed in [3].
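In the toy model the three kinds of contact can be told apart as follows (a rough sketch; the universally quantified clause of D13 is not finitely checkable, so we substitute an explicit granularity threshold, which is our own simplification):

```python
def EC(x, y):      # D4: external connection -- "strong" contact
    return C(x, y) and not O(x, y)

def ICont(x, y):   # D12: intermediate contact (a glass and its inside)
    return (not C(x, y)) and C(cl(x), cl(y))

def WCont(x, y, g=4):
    # Crude stand-in for D13: the closures are not connected, yet the
    # Manhattan gap between x and y stays below the granularity threshold g.
    gap = min(abs(ax - bx) + abs(ay - by) for (ax, ay) in x for (bx, by) in y)
    return (not C(cl(x), cl(y))) and gap <= g

def Cont(x, y):    # D14: any of the three kinds of contact
    return EC(x, y) or ICont(x, y) or WCont(x, y)
```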
The predicate Cont groups the three types of contact together:
D14 Cont(x,y) ≡def EC(x,y) ∨ ICont(x,y) ∨ WCont(x,y) "x and y are in contact"
The last part of the topological component of our geometry concerns the notion of boundary or limit, as expressed through the use of internal localisation nouns such as surface, dessus ('top'), bord ('edge') or pointe ('tip'). It is easy to see that the portions of space determined by these limits are indeed bodies, that is, extended individuals of the same type as the others, and not elements of lower dimension. The following examples show that the surfaces of objects are conceived of as having a certain thickness, and tips a certain surface:
la surface de la table est éraflée ('the surface of the table is scratched')
la pointe de ce crayon est émoussée ('the tip of this pencil is blunt')
This observation does not mean that there is no difference between these individuals. Each type of limit brings out the minimality of the individual according to a different criterion.
We first define in D16 the envelope x of an individual y as its minimal tangential part such that every individual externally connected to y is also externally connected to x, and conversely. This envelope x is in fact the maximal surface of the individual y.
D15 Env'(x,y) ≡def TP(x,y) ∧ ∀z (EC(y,z) ↔ EC(x,z))
D16 Env(x,y) ≡def Env'(x,y) ∧ ∀w (Env'(w,y) → P(x,w)) "x is the envelope of y"
The envelope of an envelope is that same envelope, which avoids the delicate problem of having an additional primitive deciding a priori which individuals are "normal", or non-limits. Note also that the minimality condition in D16 implies that this definition is only operative (that is, has a non-empty extension) in an atomic domain, for example a finite one. This constraint seems realistic to us in our context of text interpretation.
From the definition of the envelope, one can then characterise the limits of the first kind of an entity as the tangential parts of its envelope:
D17 Lim1(x,y) ≡def ∃z (Env(z,y) ∧ TP(x,z)) "x is a limit 1 of y"
The process can be iterated. Limits of the second kind are parts of a contour, itself introduced as the maximal "frontier" of a part of the envelope with respect to the rest of the envelope. Finally, limits of the third kind are the ends of a limit 2.
D18 Contour'(x,y) ≡def ∃w,w' (Env(w,w') ∧ TP(y,w) ∧ ∀z (P(z,w) → (EC(z,y) ↔ EC(z,x))))
D19 Contour(x,y) ≡def Contour'(x,y) ∧ ∀w (Contour'(w,y) → P(x,w)) "x is the contour of y"
D20 Lim2(x,y) ≡def ∃w,w' (Contour(w,w') ∧ Lim1(w',y) ∧ TP(x,w)) "x is a limit 2 of y"
D21 Ends'(x,y) ≡def ∃w,w' (Contour(w,w') ∧ TP(y,w) ∧ ∀z (P(z,w) → (EC(z,y) ↔ EC(z,x))))
D22 Ends(x,y) ≡def Ends'(x,y) ∧ ∀w (Ends'(w,y) → P(x,w)) "x is the fusion of the ends of y"
D23 Lim3(x,y) ≡def ∃w,w' (Ends(w,w') ∧ Lim2(w',y) ∧ TP(x,w)) "x is a limit 3 of y"
The boundaries described by expressions such as la surface de la table ('the surface of the table'), le bord de la table ('the edge of the table') or le coin de la table ('the corner of the table') in fact denote individuals that have additional properties compared with Lim1, Lim2 and Lim3:
D24 Surface(x,y) ≡def Con(x) ∧ Lim1(x,y) ∧ ¬Lim2(x,y) "x is a surface of y"
D25 Line(x,y) ≡def Con(x) ∧ Lim2(x,y) ∧ ¬Lim3(x,y) "x is a line of y"
D26 Point(x,y) ≡def Con(x) ∧ Lim3(x,y) "x is a point of y"
Distance
In mathematics, distance is defined by a function which associates a non-negative real number with two points or, by extension, with two sets of points. This notion therefore seems numeric in essence. However, as for many other concepts that have been modelled by arithmetic, the essential underlying notion is in fact an order, which can be modelled symbolically. This notion makes it possible to deal directly with the various distance comparisons that appear in language in expressions such as plus près ('nearer'), plus loin ('farther'), plus grand ('bigger'), plus petit ('smaller'), or even in shape adjectives such as carré ('square') or rond ('round'). Following the same approach as before, we therefore now introduce a new primitive ternary relation between individuals, Closer(x,y,z), read "x is closer to y than to z". A similar relation was introduced in [31], between triples of points. As we shall see, a relation between individuals is more complex because it interacts with mereo-topology.
Closer(x,y,z) implicitly establishes an order between the pairs of individuals (x,y) and (x,z). This order is strict (A11), which makes it possible to define the equidistance relation by D27.
A11 Closer(x,y,z) → ¬Closer(x,z,y)
D27 Equidist(x,y,z) ≡def ¬Closer(x,y,z) ∧ ¬Closer(x,z,y) "x is equidistant from y and z"
The implicit order is a total order (A12), which is of course transitive (A13 and A14).
A12 Closer(x,y,z) → (Closer(x,y,t) ∨ Closer(x,t,z))
A13 (Closer(x,y,z) ∧ ¬Closer(z,y,x)) → Closer(y,x,z)
A14 (Closer(x,y,z) ∧ ¬Closer(x,t,z)) → Closer(x,y,t)
Topology induces additional constraints on the notion of minimal distance:
A15 C(x,y) → ¬Closer(x,z,y)
A16 (C(x,y) ∧ ¬C(x,z)) → Closer(x,y,z)
A17 (WCont(x,y) ∧ ¬C(x,z)) → ¬Closer(x,z,y)
A18 (WCont(x,y) ∧ ¬WCont(x,z) ∧ ¬C(x,z)) → Closer(x,y,z)
Finally, the distance order is linked to that of inclusion:
A19 P(x,y) → ¬Closer(z,x,y)
A number of desirable properties can be proved. For example, the transitivity of the equidistance relation in both of its forms:
(Equidist(x,y,z) ∧ Equidist(x,z,t)) → Equidist(x,y,t)
(Equidist(x,y,z) ∧ Equidist(z,x,y)) → Equidist(y,x,z)
Or again, the fact that nothing is closer to oneself than oneself:
¬C(x,y) → Closer(x,x,y)
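A hypothetical numeric reading of the primitive, in the spirit of the toy grid model above: Closer compares minimal distances between extended individuals. A11 to A14 then hold by properties of the order on the reals, and A15 holds because connected regions are at distance zero.

```python
import math

def dist(x, y):
    """Minimal Euclidean distance between two regions (0 if connected)."""
    if C(x, y):
        return 0.0
    return min(math.dist(p, q) for p in x for q in y)

def Closer(x, y, z):     # "x is closer to y than to z"
    return dist(x, y) < dist(x, z)

def Equidist(x, y, z):   # D27
    return not Closer(x, y, z) and not Closer(x, z, y)
```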
On the other hand, in order to express the triangle inequality, a well-known property of distance, we lack the notion of alignment. The latter is introduced in the following subsection, which deals with orientation.
Orientation and projective geometry
To be able to deal with orientation relations, we complete our ontology by introducing the notion of direction. These new entities of our formal language will be written Di (we thus use a typed first-order language). The notion of direction can be grasped intuitively by supposing that these variables denote oriented vector lines.
We introduce a primitive relation between directions, Kd(D1,D2,D3), indicating that "D1 is closer to D2 than to D3" (in terms of angular values). This relation is again similar to the primitive K denoting relative distance between points axiomatised in [31], which is why we write it Kd. It is irreflexive and transitive
(and therefore asymmetric):
A20 ¬Kd(D1,D2,D2)
A21 (Kd(D1,D2,D3) ∧ Kd(D1,D3,D4)) → Kd(D1,D2,D4)
As in the case of the Closer relation, a second kind of transitivity must be laid down:
A22 (Kd(D1,D2,D3) ∧ Kd(D3,D1,D2)) → Kd(D2,D1,D3)
The primitive Kd makes it possible to characterise the notions of opposite directions and orthogonal directions. The opposite of a direction is the direction farthest away from it, whereas a direction orthogonal to a given direction lies at equal distance from that direction and its opposite:
D28 -(D1,D2) ≡def ∀D3 (D3≠D2 → Kd(D1,D3,D2))
D29 Ortho(D1) =def {D2 : D3 = -D1 ∧ ¬Kd(D2,D1,D3) ∧ ¬Kd(D2,D3,D1)}
An additional axiom guarantees the existence of the opposite of a direction:
A23 ∀D1 ∃D2 ∀D3 (D3≠D2 → Kd(D1,D3,D2))
As for axioms A4-A8, axiom A23 together with the uniqueness of the opposite of a direction (which can be proved using A23 and the asymmetry of Kd) allow us to introduce a new operator - on directions, -D denoting the direction opposite to direction D.
One can then define the median of two directions, as well as an operation of sum, or composition, of directions. The sum of two directions is the subset of the set of medians consisting of the directions closest to the two directions considered (for non-opposite directions this set is a singleton, whereas for opposite directions it has two elements in two dimensions and defines a plane in three dimensions). We introduce below the definitions characterising medians and sums, together with a linearity axiom:
D30 Med(D1,D2) =def {D3 : (D1=D2 ∧ D3=D1) ∨ (D1≠D2 ∧ ¬Kd(D3,D1,D2) ∧ ¬Kd(D3,D2,D1))}
D31 D3∈Sum(D1,D2) ↔ (D3∈Med(D1,D2) ∧ ∀D4 (D4∈Med(D1,D2) → ¬Kd(D1,D4,D3)))
A24 (D1≠D2 ∧ D1≠D3 ∧ D2≠D3) → (Kd(D1,D2,D3) ∨ Kd(D1,D3,D2) ∨ D1∈Med(D2,D3))
Two axioms express the circular, or reflexive, character of directions:
A25 Kd(D1,D2,D3) ↔ Kd(D1,-D3,-D2)
A26 Kd(D1,D2,D3) ↔ Kd(-D1,-D2,-D3)
Finally, an axiom establishing transitivity between medians is introduced, and the relation between a direction D and two directions D2 and D3 is expressed on the basis of the sum of these directions:
A27 (D∈Med(D1,D2) ∧ D∈Med(D2,D3) ∧ D1≠D3) → D∈Med(D1,D3)
A28 (Kd(D,D2,D3) ∧ D1∈Sum(D2,D3)) → (Kd(D3,D1,D) ∧ Kd(-D2,-D1,D))
The theory based on the primitive Kd contains further definitions and axioms (coplanar directions, extensionality, and so on) and allows many theorems to be established [5].
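One natural model of this theory (our own sketch, stated only for two dimensions) takes directions to be angles on the unit circle, with Kd comparing angular separations; A20 to A22 then hold by arithmetic, and opposite and median get direct numeric readings.

```python
import math

TWO_PI = 2 * math.pi

def ang(d1, d2):
    """Angular separation of two directions, in [0, pi]."""
    a = abs(d1 - d2) % TWO_PI
    return min(a, TWO_PI - a)

def Kd(d1, d2, d3):
    """'d1 is closer to d2 than to d3' (angular reading of the primitive)."""
    return ang(d1, d2) < ang(d1, d3)

def opp(d):
    """D28/A23: the unique direction farthest from d."""
    return (d + math.pi) % TWO_PI

def Med(d1, d2):
    """D30 in two dimensions: the bisector(s) of d1 and d2."""
    if math.isclose(ang(d1, d2), math.pi):        # opposite directions:
        return {(d1 + math.pi / 2) % TWO_PI,      # two medians in 2D
                (d1 - math.pi / 2) % TWO_PI}
    m = math.atan2(math.sin(d1) + math.sin(d2),   # bisector of the
                   math.cos(d1) + math.cos(d2))   # shorter angle
    return {m % TWO_PI}
```

For non-opposite directions, Sum(D1,D2) of D31 coincides with this singleton median.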
The formalisation of orientational phenomena in language also requires, at the geometric level, a set of thirteen relations that extend Allen's relations. We then posit that y constitutes an extremity of x in direction D if y is a limit of x and if, moreover, every individual included in x (and not included in y) precedes or meets y in that direction D:
D32 Ext(y,x,D) ≡def Lim1(y,x) ∧ ∀v ((P(v,x) ∧ ¬P(v,y)) → <m(v,y,D))
Let us stress that, in some situations, several directions may satisfy this relation for two given individuals x and y. This generally happens when a tangent to the surface cannot be defined at the point considered (for example, in the presence of a vertex y of a triangle x).
If we want a single direction to be selected, additional constraints must be introduced into the definition. This leads us to define a relation "Exts" indicating that y constitutes an extremity of x in direction D and z an extremity (of a part u of x) in the opposite direction:
D33 Exts(y,z,x,D) ≡def Ext(y,x,D) ∧ ∃u (P(u,x) ∧ P(y,u) ∧ Ext(z,u,-D) ∧ Salient(z,x) ∧ (¬∃v Point(z,v) ∨ ¬∃v Point(y,v)))
In this definition, the predicate "Salient" accounts for the visual and cognitive processes that lead to selecting a geometrically salient individual z within the individual x. The rest of the definition guarantees that the individual z constitutes an extremity in the direction -D and that one of the extremities considered is not punctual.
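In the toy grid model, the "precedes or meets along D" relation <m can be approximated by comparing scalar projections onto the direction (a rough sketch; the Lim1 conjunct of D32 is weakened here to plain parthood, and eps absorbs the "meets" case):

```python
import math

def proj(cell, D):
    """Scalar projection of a cell onto the oriented direction D (an angle)."""
    return cell[0] * math.cos(D) + cell[1] * math.sin(D)

def Ext(y, x, D, eps=0.5):
    """D32, approximated: every cell of x outside y precedes or meets y
    along D, i.e. its projection does not go beyond y's rear edge."""
    if not P(y, x):
        return False
    front = min(proj(cell, D) for cell in y)
    return all(proj(cell, D) <= front + eps for cell in x - y)
```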
Towards a geometry based exclusively on individuals
The formal approach developed here captures topological notions and qualitative distance on the basis of a single category of primitive elements, the individuals. These correspond to the (three-dimensional, or extended) portions of space determined by the entities of our world, among which are objects. This choice seems to us not only justified from a cognitive point of view but also reasonable from an ontological point of view. Indeed, contrary to Aristotle or Kant, and in agreement with Leibniz, we believe that linguistic and cognitive space is not an abstract structure given a priori, but that it is constructed relationally from the entities surrounding us. It is therefore natural that these entities form the substrate of our theory of space.
However, the orientational part of this theory appeals to a second kind of primitive element, namely directions. In order to preserve the minimality of the ontology, we plan to study to what extent directions could be defined from individuals. It seems possible, for example, to introduce an alignment relation between individuals, a direction then corresponding to a triple of aligned individuals. Another possibility would be to define directions as essentially dynamic, resulting from the motion of individuals. Such a choice, however, requires taking into account the observations and theories put forward by cognitive psychology and psycholinguistics. It would therefore be important to determine which point(s) of view underlie our mental representation of orientation in space, if indeed any of them is primitive: "abstract" (primitive vector directions, such as gravity), "static material" (a direction given by the alignment of individuals, for example established by seeing objects occlude others), or finally "dynamic material" (linearity of motion perceived thanks to retinal persistence).
These considerations aside, this work is nevertheless only one step in the enterprise of building a cognitive geometry. In particular, it requires that inferential links be established between its three parts, namely topology, distance and orientation. These links, most certainly additional axioms, will allow the expression of properties such as the triangle inequality mentioned above.
Functional concepts and spatial relations
As was brought out in section 1.2, the workings of spatial expressions appeal extensively to the functional properties of entities and of the relations between these entities. In what follows we analyse a number of these functional concepts: the internal structure of entities, the notions of orientation, and the notions of support and containment.
Entity structure and part-whole relations
A number of spatial expressions, such as internal localisation nouns, refer to parts of entities, and hence to an internal structure of these entities. One also observes that the so-called topological prepositions such as dans ('in') and sur ('on') are sometimes used to describe part-whole relations, as in les pépins sont dans la pomme ('the pips are in the apple') or les touches sont sur le clavier ('the keys are on the keyboard'). More generally, taking into account functional properties, such as those giving rise to an intrinsic orientation, rests on a differentiated analysis of the role of the parts within the whole. The inclusion relation P links parts and wholes spatially but not functionally. It does not allow one to distinguish between different types of part-whole relations (for example, between the relation between a page and a book and the relation between the preface and the book). Yet language makes use of a variety of structural relations which differ notably in their inferential behaviour [35].
This section proposes an analysis of the part-whole relations underlying the notion of internal structure of entities, without which the study of the functional roles involved in spatial expressions could not be complete. We also make precise the ontology of the entities appearing in the non-metaphorical spatial expressions considered. Indeed, structural relations are often manifested by a linguistic choice that corresponds to a particular ontological point of view on the entities. Thus the mass term du riz ('rice'), or un tas de riz ('a heap of rice'), brings out a continuous internal structure, whereas the plural term des grains de riz ('grains of rice') introduces a collection structure. A first task is therefore to model the notion of structure present at the level of the noun phrase.
Plural structure
Plural noun phrases (Jean et Marie, les arbres 'the trees') refer to collections, as do many singular noun phrases (le couple Dupont 'the Dupont couple', la forêt 'the forest'). In-depth studies of plurals and the notion of collection have been carried out in formal semantics. We take up here the lattice structure introduced in [21], which we modify in order, among other things, to take into account the remarks of [10].
In this so-called "plural" structure, the atoms represent the entities denoted by a singular noun phrase, whether they are collections or not, and the non-atomic constituents represent the entities denoted by a plural phrase. The order relation of the lattice therefore links plural collections to their members (atomic lower constituents) and to their sub-collections (non-atomic lower constituents). By contrast, a singular collection such as la forêt is not directly linked to its members in the lattice, since it is an atom. It is, however, linked to them indirectly, for there always exists a corresponding plural collection (in this example, les arbres), linked to its atoms in the lattice.
The primitive relation used is a non-strict partial order, written ≤. The following axioms and definitions are needed to characterise this order:
A30 (x≤y ∧ y≤z) → x≤z
A31 (x≤y ∧ y≤x) ↔ x=y
D34 At(x) ≡def ∀y (y≤x → y=x) "x is atomic"
A32 ∀x ∀y ∃z ∀u (u≤z ↔ ∃v (v≤u → ∃w (w≤v ∧ (w≤x ∨ w≤y)))) "z, written x∪y, is the sum of x and y"
A33 ∀x ∀y (∃v (v≤x ∧ v≤y) → ∃z ∀u (u≤z ↔ (u≤x ∧ u≤y))) "z, written x∩y, is the intersection of x and y"
A34 x≤y → P(sref(x),sref(y))
Modelling two sorts of collections (plural and singular) may seem needlessly complicated. In fact, it allows the distinction between several collections segmenting the same material, a distinction that appears in language, for example, between les cartes ('the cards') and les jeux de cartes ('the decks of cards'), entities which do not have the same members even when they describe the same physical reality and therefore have the same spatial referent (example taken from [21]). However, this remark also points to the fact that this spatial link between a singular collection and a plural collection is not univocal: in this same example, the singular collection le paquet de cartes ('the pack of cards') has the same spatial referent as the two plural collections les cartes and les jeux de cartes. Conversely, some atomic entities are spatially linked to plural collections although they are not singular collections. For example, the entities described by the mass terms du riz ('rice') or le bol de riz ('the bowl of rice') are not singular collections but have the same spatial referent as les grains de riz ('the grains of rice'). Since the spatial link is therefore not sufficient to establish the right correspondences between singular and plural entities, we introduce a new primitive relation written Is-coll(x,y), read "x is the collection of the y". It satisfies the following axioms:
A35 Is-coll(x,y) → (At(x) ∧ ¬At(y))
A36 Is-coll(x,y) → sref(x) =s sref(y)
A37 (Is-coll(x,y) ∧ Is-coll(x,z)) → y=z
It is useful to add the following definition for collections:
D35 Coll(x) ≡def ¬At(x) ∨ ∃y Is-coll(x,y)
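A set-theoretic illustration of the plural structure (our own, with hypothetical identifiers): plural entities are frozensets of atoms, ≤ is the subset order once atoms are lifted to singletons, and Is-coll links a singular collection (an atom) to its plural counterpart, exactly as in the forest/trees example.

```python
# atoms are strings; plural entities are frozensets of atoms
trees = frozenset({"oak#1", "elm#1", "ash#1"})   # "les arbres"
forest = "forest#1"                              # "la forêt", an atom
is_coll = {forest: trees}                        # the Is-coll primitive (A35-A37)

def At(x):                                       # D34
    return not isinstance(x, frozenset)

def leq(x, y):                                   # the lattice order <=
    xs = x if isinstance(x, frozenset) else frozenset({x})
    ys = y if isinstance(y, frozenset) else frozenset({y})
    return xs <= ys

def Coll(x):                                     # D35
    return (not At(x)) or x in is_coll

assert leq(frozenset({"oak#1"}), trees)          # sub-collection of the plural
assert not leq(forest, trees)                    # the singular atom is no member
assert Coll(forest) and Coll(trees)
```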
Mass structure
As far as noun phrases that are mass terms are concerned, our formalisation draws mainly, within the abundant literature, on the work of Parsons [23,24].
If, in dealing with plurals, we implicitly separated entities into two classes, collections and non-collections (or simple entities), we must here introduce ontological distinctions among the simple entities. Mass terms are formed of a partitive or measure determiner (du, de la, un peu de, un verre de...) and a substance noun (eau 'water', neige 'snow', sable 'sand', mobilier 'furniture'...). In order to explain correctly both the linguistic behaviour of mass terms and that of generic nominal uses of substances (as in l'eau est un liquide 'water is a liquid', l'oignon a un goût prononcé 'onion has a strong taste'), it is necessary to regard substances as always being particular simple entities. Likewise, among the simple entities one must distinguish the quantities of substance, or pieces of matter, that mass terms allow one to designate. The latter have the particularity of not being countable: water plus water is still water. This property is often described under the name of "cumulative reference" [27].
We therefore introduce three new predicates: Subst(x), which characterises substances, Mat(x), which characterises pieces of matter, and Q(x,y), a relation introduced in [23], which can be paraphrased as "x is a quantity of y". The following axioms make precise their relations to one another and to the spatial structure:
A38 Q(x,y) → (Mat(x) ∧ Subst(y) ∧ P(sref(x),sref(y)))
A39 Mat(x) → ∃y (Q(x,y) ∧ ∀z (Q(x,z) → y=z))
A40 (Q(x,y) ∧ Q(z,y)) → ∃t (Q(t,y) ∧ sref(t) =s sref(x)+sref(z))
A41 (¬Coll(x) ∧ ¬Coll(y) ∧ sref(x) =s sref(y) ∧ ∃z (Q(x,z) ∧ Q(y,z))) → x=y
Classification of entities
We have just seen that the plural and mass structures introduce notions that rest on the distinction of different classes of entities. We now consider as a whole the classification we use here, in other words, the ontology of our formal system.
This classification has two dimensions: the first divides entities according to their number and the second according to their essence. As regards number, we have seen that entities can be simple or collective, and that collective entities divide into singular collections and plural collections. As regards the essential nature of entities, we have already met at least two classes, substances and pieces of matter. To analyse the semantics of spatial prepositions (as we shall see in particular for dans), it is in fact necessary to consider five classes in all: objects (Marie, une forêt 'a forest', le bord de la table 'the edge of the table'), pieces of matter (un verre d'eau 'a glass of water', du mobilier 'furniture', le bois de la chaise 'the wood of the chair'), substances (la neige 'snow'), locations (Toulouse, mon jardin 'my garden') and portions of space (l'intérieur d'une boîte 'the inside of a box', un trou dans le gruyère 'a hole in the gruyère', une grotte 'a cave'). Objects (Obj) are countable, material, non-generic and generally mobile entities. Locations (Loc) are entities that are fixed relative to one another, and for which we shall assume here, although this is a simplification (cf. [6]), that they are co-extensional with a portion of the Earth's surface. Portions of space (Sp-port) are the only immaterial entities. They are, however, always dependent, often functionally, on one or more other entities which are material [14].
A42 Sp-port(x) → ∃y ((Obj(y) ∨ Mat(y) ∨ Loc(y)) ∧ Depend(x,y))
While the spatial referents of objects, pieces of matter, substances and locations are determined directly by their material extension, the spatial referents of portions of space are determined indirectly by geometric functions over the spatial referents of the entities they depend on.
We have noted that the classification does not depend on an objective reality of the world, but on the way we describe this reality in language. The classification is not, however, carried out at the level of the lexicon, since a given lexeme may, depending on its use, designate entities of different natures. Pomme can designate an object (une pomme 'an apple'), a piece of matter (de la pomme 'some apple') or a substance (la pomme in la pomme et le hareng s'accordent bien 'apple and herring go well together'). Likewise, it is the context that will determine whether la forêt designates the object that is a collection of trees or the place where that collection grows.
Even though the two dimensions of the classification are orthogonal, we think a constraint between the two is necessary. We hypothesise that the five "essential" classes are disjoint and do exhaust the simple entities of our domain of study. This amounts to assuming that there is no lexeme describing heterogeneous collections. We therefore add the following axiom, where ⊕ denotes exclusive or:
A43 At(x) → (Obj(x) ⊕ Mat(x) ⊕ Subst(x) ⊕ Loc(x) ⊕ Sp-port(x))
Meronomies
Thanks to the formal tools just introduced, it is possible to give a definition for the various part-whole relations, also called meronomies, that language allows one to express. The classification of meronomies introduced here is inspired by [35] and is amply motivated in [33].
We have already implicitly mentioned the two meronomies "member / collection" (un arbre de la forêt 'a tree of the forest') and "sub-collection / collection" (le conseil de sécurité de l'ONU 'the UN Security Council'), which the plural structure formalises immediately.
The mass structure likewise formalises fairly directly two other meronomies: "portion / whole" (ceci est une part de gâteau 'this is a piece of cake') and "substance / whole" (il y a du sucre dans ce gâteau 'there is sugar in this cake'). For the first relation, the part and the whole are two quantities of the same substance, whereas in the case of the "substance / whole" meronomy, two substances, one for the part and one for the whole, are put in relation.
Two further meronomic relations are used in French, but this time the plural and mass structures do not help with their formalisation. These meronomies link simple entities that are neither pieces of matter nor substances. The first, "component / assembly", is perhaps the one that can be regarded as the prototype of part-whole relations (le pied de la chaise 'the leg of the chair', le moteur de la voiture 'the engine of the car', la main de mon bras droit 'the hand of my right arm'...). It appeals above all to the fact that the part fulfils a function with respect to the whole [16], this function being evoked by the terms used to designate the part and the whole. We will not analyse the functionality relation here. Its complexity, due to the fact that all kinds of functions (support, energy production, grasping...) may be involved, is obvious.
The last of our meronomies, "piece / whole", contrasts with "component / assembly" precisely as regards the absence of any evoked function. While a component generally has a shape and position determined by its function, a piece is a part cut arbitrarily out of the whole. Such a part is therefore often designated by describing its shape and position, with an internal localisation noun for example (le haut de l'armoire 'the top of the wardrobe', la pointe du couteau 'the tip of the knife', le sud-ouest de la France 'the south-west of France'). The part is nevertheless geometrically constrained by the requirement that it be self-connected, which is not required in the other meronomies.
A more thorough analysis of the expression of part-whole relations in French and Basque, together with a complete formalisation of their inferential properties, notably of transitivity, which applies only for certain combinations, can be found in [9]. The predicate Part groups the six meronomies together without differentiation:
D36 Part(x,y) ≡def Member(x,y) ∨ Subcoll(x,y) ∨ Portion(x,y) ∨ Subst-Wh(x,y) ∨ Component(x,y) ∨ Piece(x,y) "x is a part of y"
Orientation
The formalisation of orientational processes relies on the tools set up at the geometric level for handling orientation concepts, and also takes into consideration functional properties directly linked to the entities. We present below the formal definitions proposed to account for the intrinsic vertical and frontal orientations of entities. We then show how this modelling of orientation concepts comes into play in the specification of the semantic content of the external spatial prepositions devant/derrière ('in front of'/'behind').
Intrinsic orientations
We must first bring out the fact that, in many cases, associating an intrinsic orientation with an entity amounts to saying that, for functional reasons, a particular portion of that entity constitutes an extremity in the direction considered (for example, the neck of a bottle delimits the bottle upwards). On the basis of this remark, we introduce a new partial function mapping an extremity y of an entity x (and an extremity z of a portion of x) to the corresponding direction D (the relation Exts used in this axiom was defined at the geometric level):
A44 dir-ext(y,z,x)=D ↔ (Part(y,x) ∧ Part(z,x) ∧ Exts(sref(y),sref(z),sref(x),D))
In what follows we will say that such a direction is generated by the extremities y and z of x. A given direction can be regarded as constituting the intrinsic upward direction of an entity if, in a canonical situation, this direction coincides with the upward direction induced by gravity:
D37 Orient-haut(D,x) ≡def ∃y,z (dir-ext(y,z,x)=D ∧ Can-Use(x) ∧ (In-Use(x) > dir-ext(y,z,x)=haut-grav))
In this definition, the predicate "Can-Use" indicates that the entity x has a canonical use. The predicate "In-Use", combined with a non-monotonic implication mechanism (> denoting an implicature), allows us to restrict the coincidence between directions to the situations in which the entity x is put to a canonical use.
A similar formula characterises what an intrinsic downward orientation is, the relation between this notion and the previously introduced notion of upward orientation being also specified:
D38 Orient-bas(D,x) ≡def ∃y,z (dir-ext(y,z,x)=D ∧ Can-Use(x) ∧ (In-Use(x) > dir-ext(y,z,x)=bas-grav))
D39 bas-grav =def -(haut-grav)
The workings of frontal orientation appeal to more complex mechanisms. In fact, we distinguish three cases of intrinsic frontal orientation, which are not, however, mutually exclusive.
The first case (human beings, animals, arrows, cars, vehicles in general...) covers the situations in which the frontal orientation of an entity x derives from what Vandeloise calls the "general orientation" of x [32], which depends on several factors, among them the frontal direction, the direction of motion and the arrangement of the perceptual organs:
D40 Orient-avant1(D,x) ≡def ∃y,z (dir-ext(y,z,x)=D) ∧ Orient-gen(x,D)
A second type of frontal orientation (termed tandem orientation) covers all the entities whose frontal direction coincides, during canonical use, with the frontal direction of the user (chairs, cars, clothes...). Through this second rule we thus state that a specific direction of an entity x constitutes a frontal direction of type 2 if the frontal direction of any entity using x in a canonical way coincides with that direction of x. The formalisation of intrinsic lateral orientation, whose details we do not give here, appeals to representations more complex than those introduced to model frontal orientation (the latter being themselves more complex than those associated with vertical orientation). This property of our formal tools reflects well the observations made by psycholinguists about the acquisition and handling of orientation notions [26].
Orientation and the semantics of external spatial prepositions
We illustrate below how the formal tools developed to account for orientational processes can be put to work to represent the semantic content of certain spatial markers. To this end we examine the formal definitions associated with the external spatial prepositions devant/derrière.
An entity y is described as being located (intrinsically) in front of an entity x if y is included in the portion of space situated in front of x (that is, in the portion of space delimited by means of x and its intrinsic frontal direction). To capture such a notion, we introduce the predicate In-sp(y,x,D), which indicates that an entity y is included in the space delimited by means of the entity x and the direction D. From a formal point of view, this is expressed by requiring that a relation mi or > (written mi> below) hold between the spatial referents of y and x in direction D. The fact that the speaker is placed in front of the site to which he gives a frontal orientation means that we are considering here a mirror configuration (between the orienting speaker and the site). This is expressed by the negative sign attached to the direction appearing in the predicate "Orient-avant". In fact, mirror-type interactions are very frequent in French, as opposed to tandem orientations, which seem to be used less often.
The formal definitions associated with the preposition derrière are very similar to those proposed for devant, the main differences concerning the nature of the underlying orientations. Let us add that this modelling of orientation concepts has also made it possible to account for the semantics of a number of lexemes used to designate the various portions of an entity, called internal localisation nouns (e.g. haut, bas, avant, arrière, dessus, dessous, devant, derrière). A formalisation of the semantics of these lexical items can be found in [4] and [5].
By focusing on the static uses of orientational prepositions, we have deliberately set aside an element of the context likely to play a major role in their interpretation, namely motion. It is therefore worth noting that, in parallel with this formal description of static orientation in language, an analysis of the dynamic interpretations of orientational prepositions has been undertaken [7,22]. It should eventually lead to the definition of a unified theoretical framework for representing orientation notions in language.
Support and the preposition sur
The position of entities on the vertical axis is an essential criterion for differentiating the various spatial configurations that the preposition sur can be used to refer to. If the target is located higher than the site (a), we speak of sur1. The situation in which the target is placed at the same level as the site (b) will be called sur2. Finally, sur3 applies when the target is located lower than the site (c).
(a) Le livre est sur la table ('the book is on the table')
(b) L'affiche est sur le mur ('the poster is on the wall')
(c) La mouche est sur le plafond ('the fly is on the ceiling')
From the geometric point of view, these spatial configurations give rise to three types of contact between individuals (written Cont1, Cont2 and Cont3 respectively). Thus Cont1 corresponds to the situations in which a zone z1 of the surface of x is in contact with a zone z2 of the surface of y, z1 being located higher than z2 (the predicate Zonecont(z1,x,y) characterises the contact zone z1 between x and y, that is, the maximal portion of the envelope of x in weak contact with y):
D47 Cont1(x,y) ≡def Cont(x,y) ∧ ∃z1,z2 (Zonecont(z1,x,y) ∧ Zonecont(z2,y,x) ∧ Plus_haut(z1,z2))
[Footnote fragment from the original: "... a certain number of entities. It was shown in particular that several interesting inferential properties characterising the initial version of the "In-sp" predicate no longer held."]
Comparing the relative positions of the two contact zones between the spatial referents of the entities concerned (rather than the relative positions of the spatial referents of the entities themselves, so as to handle correctly the case of a person sitting on a chair) thus makes it possible to classify into one of the three cases mentioned above the various configurations described by the preposition sur.
Apart from these geometric characteristics (relative positions of the contact zones), the semantics of the preposition sur also appeals to two important functional concepts, namely the notion of "comparable categories" and that of "stabilisation".
Two entities x and y belong to comparable categories if they have similar dimensions, which we capture with the relation Catcomp(x,y). This property is computed by comparing the extension of x and y along the various axes or dimensions associated with these entities. Depending on the configuration considered, the relative extension of the entities in one particular dimension may matter more than their extension in the other dimensions. For example, in the case of a sur1 (d), the respective sizes of the target and the site along the vertical axis are rather weakly constrained, whereas for uses of type sur3 it is much more difficult to accept an extension of the target along this dimension (e). Consequently, we use three different Catcomp predicates (Catcomp1, Catcomp2, Catcomp3) corresponding to the three configurations of sur. A complete specification of the notion of comparable categories must finally take into account a number of properties linked to the nature and function of the entities.
(d) Le vase est sur la nappe ('the vase is on the tablecloth')
(e) * Le lustre est sur le plafond ('the chandelier is on the ceiling')
Support, or stabilisation, is another functional concept playing an important role in the semantics of the preposition sur. In our system, the predicate Stabilise(x,y) indicates that an entity x stabilises an entity y, and the following postulate states that, unlike the contact relation, stabilisation is transitive:
A46 (Stabilise(x,y) ∧ Stabilise(y,z)) → Stabilise(x,z)
An entity that is stable by nature (e.g. the ground) is called an intrinsic stabiliser; any entity not belonging to this category must be stabilised by another entity in contact with it:
A47 ¬Stabilisateur_Intrinseque(x) → ∃y (¬(y=x) ∧ Stabilise(y,x) ∧ Cont(sref(y),sref(x)))
An axiom must also be introduced to account for the interaction between part-whole relations and stabilisation processes. If a part z of an entity y stabilises an entity x, then y stabilises x:
A48 (Part(z,y) ∧ ¬Part(x,y) ∧ Stabilise(z,x)) → Stabilise(y,x)
The concept of total stabilisation is defined by stating that an entity y totally stabilises an entity x if y not only stabilises x but, in addition, every entity z disjoint from y that directly stabilises x is itself totally stabilised by y:
D48 Stab_tot(y,x) ≡def Stabilise(y,x) ∧ ∀z ((Cont(sref(z),sref(x)) ∧ Stabilise(z,x) ∧ ¬O(sref(z),sref(y))) → Stab_tot(y,z))
Together, these geometric and functional tools allow us to introduce the following definition for configurations of type sur1:
D49 Sur1(x,y) ≡def Catcomp1(x,y) ∧ Cont1(sref(x),sref(y)) ∧ Stabilise(y,x)
This definition stipulates that if y and x belong to comparable categories, if the contact zone of x (with y) is located higher than the contact zone of y (with x), and if, moreover, y stabilises x, then we may conclude that x is on y.
Configurations of type sur2, in which the target is located at the same level as the site, are characterised by a similar definition, the main differences concerning the type of contact (Cont2), the comparable categories (Catcomp2) and the nature of the support which, in this case, must be total:
D50 Sur2(x,y) ≡def Catcomp2(x,y) ∧ Cont2(sref(x),sref(y)) ∧ Stab_tot(y,x)
The notion of total stabilisation introduced here makes it possible, for example, to distinguish the situation in which a television set rests on a shelf itself fixed to a wall (la télévision est sur le mur 'the television is on the wall') from the one in which the television rests on a table placed against the wall (#la télévision est sur le mur).
The definition associated with the case sur3 differs from that associated with sur2 by the type of contact between the entities concerned and by the predicate Catcomp3 for comparable categories (this predicate is the most restrictive of the various "Catcomp" predicates, in particular as regards the vertical axis):
D51 Sur3(x,y) ≡def Catcomp3(x,y) ∧ Cont3(sref(x),sref(y)) ∧ Stab_tot(y,x)
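The case split of D49 to D51 can be read as a small decision procedure. Here is a sketch of that reading (ours, with the helper predicates Catcomp1-3, Cont1-3, Stabilise and Stab_tot assumed to be supplied as precomputed truth values):

```python
def classify_sur(facts):
    """facts: dict mapping the predicate names of D49-D51 to booleans."""
    if facts["Catcomp1"] and facts["Cont1"] and facts["Stabilise"]:
        return "sur1"      # D49: le livre est sur la table
    if facts["Catcomp2"] and facts["Cont2"] and facts["Stab_tot"]:
        return "sur2"      # D50: l'affiche est sur le mur
    if facts["Catcomp3"] and facts["Cont3"] and facts["Stab_tot"]:
        return "sur3"      # D51: la mouche est sur le plafond
    return None            # no "sur" reading applies
```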
Containment and the preposition dans
Concerning dans, note first that the geometric inclusion relation, which is often taken to formalise it, does not in general link the spatial referents of the entities to each other. When the book is in the wardrobe, the book and the wardrobe share no portion of matter, so their spatial referents do not overlap. The spatial referent of the book is included in the spatial referent of the portion of space that is the inside of the wardrobe. Note that the geometric function of convex hull, also frequently used, defines the interior only imperfectly, since an arbitrary concavity does not necessarily correspond to an inside, as can be seen in figure 1. In the case of objects whose function is to contain (vase, box), a concavity must itself be "containing" in order to constitute an inside [19].
The property of containment can be described as the restriction of the potential motion of the contained [32]. It rests in particular on opposition to gravity, but differs from the notion of support through additional restrictions on lateral motion (hence the difference between the expressions sur un tabouret 'on a stool' and dans un fauteuil 'in an armchair'). The notion of containment thus turns out to be particularly important for the semantics of dans, even though the expression x est dans y does not necessarily imply that y contains x (l'oiseau est dans le ciel 'the bird is in the sky' presupposes no containment). We do not give a formalisation of the notion of containment here, but we axiomatise the notion of inside.
We have just seen that the inside of a containing entity corresponds to the set of its containing concavities. The insides of non-containing entities are defined solely by their shape. They are of three types. First, there is the case of scattered objects (collections, as in le chien est dans la foule 'the dog is in the crowd') or of objects determining a volume without filling or bounding it (l'oiseau est dans l'arbre 'the bird is in the tree'). Their insides are then defined using the contour, or "outline", function introduced in [19] (see also [33]). Second, there are the cases of non-solid objects or pieces of matter completely surrounding their inside, which is often temporary and created by embedding the target in the site (le poisson est dans la mer/l'eau 'the fish is in the sea/the water'). The spatial referent of these insides is then a connected component of the complement of the spatial referent of the entity. Finally, the insides of locations are defined somewhat more arbitrarily by a portion of space bounded laterally by vertical lines through the boundaries of the location, and vertically by the location itself and by a horizontal plane located "sufficiently" high above it (this being rather complicated to represent geometrically, this constraint is not included in axiom A49). Portions of space obviously have no inside; they themselves directly play that role. We assume that substances do not define an inside either.
The function "int" satisfies the following axioms:
A49 y=int(x) → ((Obj(x) ∨ Mat(x) ∨ Loc(x)) ∧ Sp-port(y) ∧ Depend(y,x) ∧ (t=int(x) → y=t) ∧ ICont(sref(x),sref(y)) ∧ (¬Loc(x) → (P(i(sref(y)),preint(sref(x))) ∧ (Container(x) ∧ ◊∃z Contain(y,z)) ∨ sref(y)=outline(sref(x)) ∨ Con-Comp(sref(y),sref(x)))))
A50 (t=int(x) ∧ u=int(y) ∧ Part(x,y) ∧ Rest(y,x,r)) → P(sref(t),sref(u)+sref(r))
A51 (t=int(x) ∧ u=int(y) ∧ P(i(sref(x)),sref(u))) → P(sref(t),sref(u)+sref(y))
The semantics of dans can be described by distinguishing three types of spatial configurations. All the examples just discussed illustrate the first two cases, namely the situations in which the spatial referent of the target is included in the spatial referent of the inside of the site (of the site itself in the case of a portion of space), or else overlaps it. In the first case, the prototypical situation, this inclusion is total (le livre est dans l'armoire 'the book is in the wardrobe'). We will call this case "dans-total". In the second case there is only overlap, or "partial inclusion" (la cuillère est dans la tasse 'the spoon is in the cup'). This case will be called "dans-partiel".
The third case seems, at first sight, rather different: it describes a part-whole relation between the two entities, and so the spatial referent of the target is directly included in the spatial referent of the site. L'escalier est dans la maison ('the staircase is in the house') and l'homme est dans la foule ('the man is in the crowd') are examples of it. We will call this third case "dans/partie-de". Not every meronomy, however, can be described by means of the preposition dans. For example, while le cerveau est dans la tête ('the brain is in the head') is acceptable, *le nez est dans la tête ('the nose is in the head') is not [32]. This phenomenon can be accounted for through a principle we call the "contrast" principle. A spatial expression describing a meronomy highlights the position of the part relative to the whole: we therefore take it to relate the part to the whole from which, by contrast, that part has been removed. The last example considered is unacceptable because the nose is not included in any concavity of "the head minus the nose". The contrast principle also explains the use of sentences locating the whole in the part, which might therefore seem paradoxical at first sight (la noix/l'escargot est dans sa coquille 'the walnut/the snail is in its shell'). However, it is required only when the meronomy is a case of "component / assembly" or "piece / whole", and only when the target and the site are objects or pieces of matter. In the other cases, any instance of meronomy can be described using the preposition dans (Le Cotentin est dans le département de la Manche, Paul est dans le jury 'Paul is on the jury').
In this first analysis we have not looked closely enough at the particular uses of dans where the target is not an object or a piece of matter, but a portion of space or a location. A portion of space located in an object or a piece of matter (il y a un trou dans ce morceau de fromage 'there is a hole in this piece of cheese') in fact describes a part-whole relation (of type "piece / whole") between the target and the inside of the site. One can, at a pinch, locate one portion of space in another (l'intérieur de la boîte est dans l'intérieur de l'armoire 'the inside of the box is in the inside of the wardrobe'), but this describes nothing more than simple inclusion. A location can only be located in another location. If it is not a meronomy, it is then the description of an enclave situation (l'île est dans la mer 'the island is in the sea', Saint-Marin est en Italie 'San Marino is in Italy'), for which it should be noted that contact between target and site is necessary. We will regard all these cases as occurrences of "dans-total". The definition obtained for "dans-total" is therefore:
D52 TDs(x,y) ≡def [(Obj(x) ∨ Mat(x)) ∧ (Obj(y) ∨ Mat(y) ∨ Loc(y)) ∧ P(i(sref(x)),sref(int(y)))] ∨ [(Obj(x) ∨ Sp-port(x)) ∧ Sp-port(y) ∧ P(i(sref(x)),sref(y))] ∨ [Sp-port(x) ∧ (Obj(y) ∨ Mat(y)) ∧ (Piece(x,int(y)) ∨ x=int(y))] ∨ [Loc(x) ∧ Loc(y) ∧ ∀z (EC(sref(z),c(sref(ground)•(-sref(x)))) → EC(sref(z),sref(y)))]
For "dans-partiel", we have:
D53 PDs(x,y) ≡def [(Obj(x) ∨ Mat(x)) ∧ (Obj(y) ∨ Mat(y) ∨ Loc(y)) ∧ O(i(sref(x)),sref(int(y)))] ∨ [(Obj(x) ∨ Mat(x)) ∧ Sp-port(y) ∧ O(i(sref(x)),sref(y))]
And the definition of "dans/partie-de" is:
D54 DPt(x,y) ≡def [Part(x,y) ∧ (((Component(x,y) ∨ Piece(x,y)) ∧ (Obj(x) ∨ Mat(x)) ∧ (Obj(y) ∨ Mat(y))) → ∃z (Rest(x,y,z) ∧ TDs(x,z)))] ∨ [(Component(y,x) ∨ Piece(y,x)) ∧ (Obj(x) ∨ Mat(x)) ∧ (Obj(y) ∨ Mat(y)) ∧ ∃z (Rest(y,x,z) ∧ TDs(y,z))]
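The prototypical object-in-object branch of D52 reads P(i(sref(x)), sref(int(y))) and can be checked directly in the toy grid model (our own sketch; interior_of_site stands in for the referent of int(y), e.g. the cavity cells of a wardrobe):

```python
def dans_total_obj(x_cells, interior_of_site):
    """First disjunct of D52: the target's interior is part of the
    site's inside (le livre est dans l'armoire)."""
    return P(i_(x_cells), interior_of_site)

wardrobe_inside = frozenset((i, j) for i in range(5) for j in range(5))
book = frozenset((i, j) for i in range(1, 4) for j in range(1, 4))
book_outside = frozenset((i + 10, j) for (i, j) in book)
assert dans_total_obj(book, wardrobe_inside)
assert not dans_total_obj(book_outside, wardrobe_inside)
```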
Inferences and pragmatics
As already noted, the analysis proposed here is not limited to the representation of semantic content alone, but also integrates the inferential dimension of interpretive processes. Below we detail some of the inferences obtained within the formal theory introduced. We then show that the result of these inferences is not always satisfactory, and that a better fit with human reasoning requires taking into account pragmatic data linked mainly to context and world knowledge.
Inferences
External spatial prepositions
We study here utterances made up of two sentences each containing the external spatial preposition devant, and we examine the transitive deductions obtained from their formal representations. Several cases must be distinguished depending on the deictic or intrinsic interpretation of the spatial relations involved. A detailed description of these various cases (intrinsic/intrinsic, deictic/deictic, intrinsic/deictic) is given in [5]. In what follows we consider an utterance combining two occurrences of devant interpreted intrinsically:
Le tabouret est devant le fauteuil ('the stool is in front of the armchair')
Le fauteuil est devant Max ('the armchair is in front of Max')
From the definitions introduced for the preposition devant, this utterance can be assigned the formal representation below, in which the constants t, f and m identify the stool, the armchair and Max respectively:
Etre-devant-i(t,f,d1)
Etre-devant-i(f,m,d2)
The predicate "In-sp" occurring in the definition of "Etre-devant" allows us to deduce the following Allen relations between the spatial referents of t, f and m:
mi>(sref(t),sref(f),d1)
mi>(sref(f),sref(m),d2)
It is important to note that the deductive process is conditioned by a fundamental parameter, namely the identity of the directions d1 and d2 associated with the two "Etre-devant" relations. If these directions coincide (formally expressed by d1=d2), then, on the basis of the axioms associated with Allen's relations (we use here the theorem ∀x,y,z (mi>(x,y,D) ∧ mi>(y,z,D) → >(x,z,D))), we can deduce the relation >(sref(t),sref(m),d2), which, combined with the definition of "In-sp", allows us to infer In-sp(t,m,d2). Combining this fact with the formula Orient-avant(d2,m), contained in the definition of Etre-devant-i(f,m,d2) and denoting the intrinsic frontal orientation of m, we finally obtain:
Etre-devant-i(t,m,d2) ↔ Orient-avant(d2,m) ∧ In-sp(t,m,d2)
From the two sentences quoted above and the additional constraint concerning the coincidence of the intrinsic frontal directions of the armchair and of Max, we therefore manage to establish that the stool is in front of Max.
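The deduction can be replayed mechanically with a single forward-chaining step over the theorem just cited (a toy sketch of ours, assuming the two frontal directions coincide, d1 = d2 = "d"):

```python
facts = {("mi>", "t", "f", "d"),
         ("mi>", "f", "m", "d"),
         ("Orient-avant", "d", "m")}

# theorem: mi>(x,y,D) & mi>(y,z,D)  ->  >(x,z,D)
for (_, x, y, d) in [f for f in facts if f[0] == "mi>"]:
    for (_, y2, z, d2) in [f for f in facts if f[0] == "mi>"]:
        if y == y2 and d == d2:
            facts.add((">", x, z, d))

# In-sp(t,m,d) holds when mi> or > relates the referents along d
in_sp = (">", "t", "m", "d") in facts or ("mi>", "t", "m", "d") in facts
# Etre-devant-i(t,m,d) <-> Orient-avant(d,m) & In-sp(t,m,d)
assert in_sp and ("Orient-avant", "d", "m") in facts   # the stool is in front of Max
```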
Let us point out that the deictic/deictic and intrinsic/deictic cases give rise to similar inferential processes, the coincidence of the directions associated with each spatial relation being each time an indispensable condition for the application of transitivity.
The case of an internal spatial preposition: dans
The detailed study of the transitivity cases of dans shows how necessary the complex analysis we have proposed was, combining several senses of inside, three spatial situations and several classes for the target and the site. Indeed, dans is far from being transitive in all cases, whereas transitivity must be accepted if this preposition is simply modelled by inclusion. We present only a few cases here. The complete study, including the proofs of the various theorems, can be found in [33].
"Dans-total" between two objects and a location is transitive:
Paul est dans la maison ('Paul is in the house'), La maison est dans l'île ('the house is on the island'), hence Paul est dans l'île ('Paul is on the island').
Pragmatique
Diverses lois et conventions pragmatiques agissent sur les représentations et inférences obtenues au niveau sémantique. Au delà des connaissances purement fonctionnelles, elles se basent sur la connaissance du monde (en particulier la connaissance des situations typiques) et sur les informations issues du contexte. Les lois que nous envisageons à ce niveau peuvent être considérées comme des adaptations au domaine de l'espace de lois plus générales (telles que les principes de coopérativité de Grice). Tout d'abord, les lois pragmatiques peuvent nous conduire à déduire (souvent par "implicature") plus d'informations qu'il n'y en a effectivement dans le texte et donc plus que n'en donnent les deux premiers niveaux du système. Par exemple, la phrase Marie est dans la voiture est généralement interprétée comme Marie est dans l'habitacle, écartant par là-même la solution alternative décrite par Marie est dans le coffre.
Conversely, these rules may lead the system to set aside certain expressions (for instance, expressions inferred at the previous levels) which, although correct from a purely semantic point of view, are nevertheless not inferred because they contradict certain pragmatic data or knowledge. Thus, if we know that Marie is in the boot of the car, then the sentence Marie est dans la voiture is not false, and yet it will (in general) not be used to answer the question où se trouve Marie ? ('where is Marie?'), since in most contexts we know it would be interpreted as Marie being in the passenger compartment.
A principle of "fixation" underlies the behaviour of the examples cited above. This principle, introduced in [32], states that the typical use of an object fixes some of its characteristics. For instance, the front and back of a car are fixed by the usual direction of its motion, not by its actual or current one. In fact, many cases of intrinsic orientation are determined in this way. However, the weight of this principle can be such (for instance in the case of orientation) that its consequences are never called into question, and it is then justified to take them into account at the functional level. This illustrates the complexity of the relations between semantics and pragmatics, and the illusion of trying to draw a strict boundary between the two domains.
Many other principles fall into the category of restrictive principles. The "maximal target" principle, which is in fact a special case of the maxim of quantity, indicates that a spatial relation locates the whole rather than the part. The symmetric "minimal site" principle states that the more restricted the site, the more precise the localization. The application of such principles must obviously be limited so as to avoid absurdities (one will not locate a diver by saying that he is in his diving suit), and their interactions must be controlled.
A third type of pragmatic factor remains to be mentioned here, one which leads to the relaxation or suppression of conditions introduced in the semantic definitions. The possibility of suppressing certain conditions appearing at the semantic level depends largely on the Gricean principle of relevance, which indicates that if one relation is more relevant than another, the former cannot be neutralized in favour of the latter. It is in fact this important phenomenon that controls the acceptability of the imprecision of a spatial relation according to context.
It is clear that this part of our system requires a more detailed analysis, one that would bring out the full set of pragmatic principles needed to handle spatial expressions and, above all, the articulation between these principles. We therefore cannot propose a formalization of it here.
Conclusion
The systematic analysis of the semantic content and behaviour of spatial markers has brought to light the many properties of linguistic space. The conceptual structures underlying this linguistic space call on a geometry whose characteristics are very often at odds with the very foundations and principles of Cartesian geometry: relational localization, imprecision/incompleteness, variable granularity... We have also seen that geometric data alone are not sufficient to capture the semantic content of spatial markers, and that it is necessary to take into account various notions linked to the function of entities or to pragmatics. These observations have thus led us to lay the foundations of a genuine cognitive geometry, and to develop a three-level theory (geometric, functional and pragmatic, respectively) that makes it possible to represent the meaning of spatial expressions and to produce various deductions. The adequacy between the results of these deductions and those of human reasoning validates, to a certain extent, the theoretical framework proposed for representing spatial concepts in language.
It may be noted that the theoretical elements proposed in this study have also contributed to modelling the contribution of lexical spatial semantics to the analysis of discourse structure, in particular for texts describing trajectories [2].
Finally, several experiments are being developed in collaboration with psycholinguists, with a view to validating the semantic part of this work and enriching its pragmatic side. These experiments aim in particular to check whether the complexity of the formal definitions proposed for the spatial expressions studied is correlated with the complexity of their processing, as reflected in response times. The first results confirm the importance of the functional properties of entities in the behaviour of orientational markers, in canonical as well as non-canonical configurations [13].
Figure 1
Figure 2: These definitions, together with the axioms, give the relations the desired inferential properties: P, PP and NTP are transitive, O and EC are symmetric... A large number of properties are also obtained by combining different relations; for instance: ∀x ∀y ∀z ((NTP(x,y) ∧ EC(x,z)) → O(y,z)), that is, any individual externally connected to a non-tangential part of another individual also overlaps the latter.
Each relation of the type Rel(x,y,D) denotes the configuration in which the maximal intervals defined by the individuals x and y stand in the direction D. Besides the classical axioms associated with Allen's relations, we introduce here a postulate stating that for every pair of connected individuals x and y and every direction D, one of the relations m, o, s, d, f or = (or one of their inverses) holds: A29 C(x,y) → mosdf=mioisidifi(x,y,D). This axiomatics, based on 13 mutually exclusive relations, was proposed in [1] in order to perform computations on temporal intervals: <(x,y) denotes that x (completely) precedes y, m(x,y) that x (precedes and) meets y, o(x,y) that x (precedes and) overlaps y, s(x,y) that x starts y, f(x,y) that x finishes y, and d(x,y) that x is included in y (without starting or finishing y); >, mi, oi, si, fi and di are the inverse relations, and x=y denotes the equality of x and y. On the basis of this postulate, and using the definition of inclusion (P) together with several theorems associated with Allen's relations, one can for instance deduce: P(x,y) → sfd=(x,y,D)
D41 Orient-avant2(D,x) ≡ def ∃y,z (dir-ext(y,z,x)=D ∧ Can-Use(x) ∧ ∀u,D' ((Utilise(x,u) ∧ Orient-avant1(D',u)) > D'=dir-ext(y,z,x))) The third and last rule characterizes those entities whose frontal direction is, in canonical use, opposite to the frontal direction of the user (wardrobes, computers, televisions...): D42 Orient-avant3(D,x) ≡ def ∃y,z (dir-ext(y,z,x)=D ∧ Can-Use(x) ∧ ∀u,D' ((Utilise(x,u) ∧ Orient-avant1(D',u)) > D'=-dir-ext(y,z,x))) Finally, we express by means of the formulas below that every entity possessing an intrinsic frontal orientation obeys one of the three cases distinguished above, and that the front and back directions are opposite directions: D43 Orient-avant(D,x) ≡ def Orient-avant1(D,x) ∨ Orient-avant2(D,x) ∨ Orient-avant3(D,x) A45 Orient-avant(D,x) ↔ Orient-arriere(-D,x)
We can now characterize the fact that an entity y is located intrinsically in front of an entity x by stating that y lies in the space delimited by means of x and the direction D, and that, moreover, this direction is the intrinsic frontal direction of x: D45 Etre-devant-i(y,x,D) ≡ def Orient-avant(D,x) ∧ In-sp(y,x,D) The deictic use of the preposition devant differs from its intrinsic use in that the underlying direction is induced by the speaker describing the scene located in front of him, and not by the site itself: D46 Etre-devant-d(y,x,D) ≡ def ∃s (Orient-avant(-D,s) ∧ s≠x ∧ s≠y ∧ Speaker(s) ∧ In-sp(y,x,D) ∧ Etre-devant-i(x,s,-D))
..., or as a three-dimensional entity: nous pénétrons dans la ville par le sud; un plan est nécessaire pour s'orienter à l'intérieur de la ville; or as both at once: nous pénétrons dans une ville qui a l'étendue d'une véritable capitale. More broadly, it is well known that anything that is a communication route (street, path, road, railway) or a watercourse (river, canal) is sometimes treated as a line: les courbes du chemin; le tracé du canal; la rue est toute droite; le fleuve est sinueux; la route s'étire, and sometimes as a surface or possibly a volume: le chemin est cabossé; la route est asphaltée et bien plane; le canal a un volume d'eau bien faible; s'engager dans une rue; plonger dans la rivière. Not to mention that a single sentence may refer to several of these dimensions: une route asphaltée et bien plane, au tracé rectiligne... All these examples illustrate one and the same phenomenon that language reveals about our treatment of space and spatial referents: the vision and representation we have of them is not established once and for all, on immutable and mutually exclusive properties. Quite the contrary, vision and representation fluctuate and change according to a number of factors that necessarily come into play: perceptual data, of course, but also and above all data linked to the discourse situation (topic, argumentation, purpose...). These different points of view, which we introduce into our representations and which apparently change the properties of the spatial referents we deal with, induce on these referents what is usually called a variable granularity, which we must necessarily integrate into representation and computation.
The mathematical notion of interior has nothing to do with the common-sense notion of functional interior that we introduce in section 3.4.
Natural-language expressions of distance sometimes make use of numerical values, as in Muret est à 20 km de Toulouse. We believe that attempting to handle the numerical aspect without considering the imprecision of these expressions would be mistaken, since this imprecision affects the basic numerical operation involved, namely the addition of distances. As the treatment of this kind of natural-language imprecision goes well beyond the sole problem of distance, we leave this problem for future work. It is not overreaching, however, to think that the qualitative modelling of distance we propose here will then prove useful.
Although they may describe the same physical reality, we consider in this case, as in many similar cases studied below, that these two terms indeed denote two different entities in our language-related conceptualization of the world, which is precisely the object of our modelling.
This axiom formalizes the property of cumulative reference. The property of partitive reference (any entity spatially included in a quantity of a substance is also a quantity of that substance) has often been described with respect to mass terms. We do not retain it here, since counterexamples abound (a chair leg is not furniture, a hydrogen atom is not water) and it would contradict the fact that several entities can be distinguished for a single spatial referent.
In a work with spatio-temporal scope, other classes would be introduced for eventualities (events and states) and for times (Monday, this year...). Recall that we consider no abstract entities here.
This specification of "In-sp" is sufficient for parallelepipedic, spherical and cylindrical entities. Taking into account entities with more complicated shapes (such as amphitheatres or arches) would increase the complexity of the formalization. This last possibility has been tested on a
The geometric function preint is such that preint(x) denotes the convex closure of the individual x, minus x; the geometric predicate Con-Comp(x,y) states that x is a connected component of the individual y; ◊ is the modal operator of possibility; Rest(y,x,r) states that r is the part of y complementary to x in y.
Transitivity does not apply when Part(x,z) holds.
We shall briefly sketch the proof. Since Paul and the house are objects, and the island is a place, the antecedent is interpreted as: TDs(paul,maison) ∧ TDs(maison,île), which gives: P(i(sref(paul)), sref(int(maison))) ∧ P(i(sref(maison)), sref(int(île))). The second clause yields, by axiom A51: P(sref(int(maison)), sref(int(île))+sref(île)). Assuming then that ¬O(sref(int(maison)), sref(île)), by a general postulate on the separation of the spatial referents of places from those of entities of other types, and with the help of the theorem (P(x,y+z) ∧ ¬O(x,y)) → P(ix,z), we obtain: P(i(sref(int(maison))), sref(int(île))). Since (P(x,iy) → P(ix,iy)) and ∀x ∀y (P(ix,y) → P(ix,iy)) are also theorems, by the transitivity of P we finally conclude the following formula, which is indeed the interpretation of the consequent: P(i(sref(paul)), sref(int(île))).
One can show that "dans-total" between three objects x, y and z is also transitive, provided one admits ¬O(sref(int(y)), sref(z)), which is realistic in most contexts.
dans-total" entre un objet et deux lieux n'est pas transitif, ce qui peut encore une fois être démontré : Paul est dans l'île. Mais, L'île est dans la mer #Paul est dans la merMais "dans-total" entre un objet et deux lieux n'est pas transitif, ce qui peut encore une fois être démontré : Paul est dans l'île, L'île est dans la mer #Paul est dans la mer
The non-transitivity is due to the fact that the spatial referents of the interiors of two places in a "dans-total" relation are not included in one another; they do not even overlap. A "dans-total" between places corresponds to a kind of surrounding relation, which therefore does not locate the first place with respect to the interior of the second. Note that had it been a "dans/partie-de" relation between places (Paul est dans le Tarn, le Tarn est dans Midi-Pyrénées), transitivity would be guaranteed. One can also show that the combination of a "dans-total" between a portion of space and an object with a "dans-total" between two objects is not valid, because of the presence, in the first "dans-total", of a part-whole relation that cannot be transmitted:
Il y a un trou dans le drap, Le drap est dans le tiroir
#Il y a un trou dans le tiroir
"Dans-partiel" is never transitive, owing to the non-transitivity of O. The transitivity of "dans/partie-de" varies from case to case, since it rests on the transitivity of meronomies and of "dans-total", and we noted above that these transitivities are not always valid.
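To make the case analysis above easier to survey, here is a small Python sketch, entirely our own encoding, that records which combinations the text declares transitive; the class labels and the lookup scheme are illustrative assumptions, not part of the paper's formal apparatus.

```python
# Illustrative summary of the transitivity cases discussed above for the
# French preposition "dans" (our encoding, not the paper's formalism).
# Keys: (relation kind, class of x, class of y, class of z); values state
# whether x-in-z may be inferred from x-in-y and y-in-z.

TRANSITIVITY = {
    ("dans-total", "object", "object", "place"):  "yes",
    ("dans-total", "object", "object", "object"): "yes, if ¬O(sref(int(y)), sref(z))",
    ("dans-total", "object", "place", "place"):   "no (a 'surrounding' relation between places)",
    ("dans-partiel", "*", "*", "*"):              "never (O is not transitive)",
    ("dans/partie-de", "*", "*", "*"):            "varies (rests on meronomy and dans-total)",
}

def may_chain(kind, cx, cy, cz):
    """Look up the specific case first, then any wildcard rule for the kind."""
    return (TRANSITIVITY.get((kind, cx, cy, cz))
            or TRANSITIVITY.get((kind, "*", "*", "*"), "unknown"))

print(may_chain("dans-total", "object", "object", "place"))  # -> "yes"
print(may_chain("dans-total", "object", "place", "place"))   # -> "no (...)"
```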
References

[1] Allen, J. (1984). Towards a general theory of action and time. Artificial Intelligence 23(2), pp. 123-154.
[2] Asher, N., M. Aurnague, M. Bras & L. Vieu (1995). De l'espace-temps dans l'analyse du discours. Sémiotiques 9.
[3] Asher, N. & L. Vieu (1995). Toward a geometry of common sense: A semantics and a complete axiomatization of mereotopology. In: IJCAI'95. San Mateo, CA: Morgan Kaufmann.
[4] Aurnague, M. (1991). Contribution à l'étude de la sémantique formelle de l'espace et du raisonnement spatial : la localisation interne en français, sémantique et structures inférentielles. Thèse de doctorat, IRIT, Université Paul Sabatier, Toulouse.
[5] Aurnague, M. (1995). Orientation in French spatial expressions: Formal representations and inferences. Journal of Semantics 12(3), pp. 239-267.
[6] Aurnague, M. (1995). Some remarks about the notion of location. Manuscript, ERSS, Université de Toulouse-le-Mirail, Toulouse.
[7] Aurnague, M., J. Jayez & P. Sablayrolles (1994). Les informations spatio-temporelles dans les constats d'accidents : représentation du contenu sémantique et raisonnement. Traitement Automatique des Langues (TAL), numéro spécial "Approches Sémantiques", 35(1), pp. 107-130.
[8] Aurnague, M. & L. Vieu (1993). A three-level approach to the semantics of space. In: C. Zelinsky-Wibbelt (ed.), Semantics of Prepositions in Natural Language Processing. Berlin: Mouton de Gruyter, Natural Language Processing n° 3, pp. 393-439.
[9] Aurnague, M. & L. Vieu (in preparation). Modelling part-whole relations: Insights from Basque and French. Manuscript.
[10] Bach, E. (1986). The algebra of events. Linguistics and Philosophy 9, pp. 5-16.
[11] Borillo, A. (1988). Le lexique de l'espace : les noms et les adjectifs de localisation interne. Cahiers de Grammaire 13, pp. 1-22.
[12] Borillo, A. (1992). Le lexique de l'espace : prépositions et locutions prépositionnelles de lieu en français. In: L. Tasmowski & A. Zribi-Hertz (eds.), Hommage à Nicolas Ruwet. Ghent: Communication et Cognition.
[13] Boulanouar, K., M. Aurnague, J.-L. Nespoulous, L. Vieu, A. Borillo & M. Borillo (1994). Représentations lexico-sémantiques de l'espace et leur traitement par des sujets normaux et pathologiques. In: Actes des Assises Prescot, Toulouse.
[14] Casati, R. & A.C. Varzi (1994). Holes and Other Superficialities. Cambridge, Massachusetts: The MIT Press, A Bradford Book.
[15] Clarke, B.L. (1981). A calculus of individuals based on "connection". Notre Dame Journal of Formal Logic 22(3), pp. 204-218.
[16] Cruse, D. (1986). Lexical Semantics. Cambridge: Cambridge University Press.
[17] Grice, H.P. (1975). Logic and conversation. In: P. Cole & J. Morgan (eds.), Syntax and Semantics 3: Speech Acts. New York: Academic Press, pp. 41-58.
[18] Hayes, P.J. (1985). The second naive physics manifesto. In: J.R. Hobbs & R.C. Moore (eds.), Formal Theories of the Commonsense World. Norwood, NJ: Ablex, pp. 1-36.
[19] Herskovits, A. (1982). Space and the Prepositions in English: Regularities and Irregularities in a Complex Domain. Ph.D. dissertation, Stanford University.
[20] Lesniewski, S. (1927-1931). O podstawach matematyki [On the Foundations of Mathematics]. Przeglad Filosoficzny [Philosophical Review] 30-34. French translation: Sur les fondements de la mathématique, Paris: Hermès, 1989.
[21] Link, G. (1983). The logical analysis of plurals and mass terms: A lattice-theoretical approach. In: R. Bäuerle, C. Schwarze & A. von Stechow (eds.), Meaning, Use and Interpretation of Language. Berlin: de Gruyter, pp. 302-323.
[22] Muller, P. (1995). Sémantique de l'espace : formalisation des prépositions "avant" et "après". Mémoire de DEA, IRIT, Université Paul Sabatier, Toulouse.
[23] Parsons, T. (1970). An analysis of mass terms and amount terms. Foundations of Language 6, pp. 363-388.
[24] Parsons, T. (1975). Afterthoughts on mass terms. Synthese 31, pp. 517-521.
[25] Piaget, J. & B. Inhelder (1948). La représentation de l'espace chez l'enfant. Paris: PUF, Bibliothèque de Philosophie Contemporaine.
[26] Pièrart, B. (1979). Genèse et structuration des marqueurs de relations spatiales entre trois et dix ans. Cahiers de l'Institut de Linguistique de Louvain (CILL) 5(1-2), pp. 41-59.
[27] Quine, W. (1960). Word and Object. Cambridge, MA: MIT Press.
[28] Randell, D., Z. Cui & A. Cohn (1992). A spatial logic based on regions and connection. In: Proceedings of KR'92. San Mateo, CA: Morgan Kaufmann.
[29] Simons, P. (1987). Parts: A Study in Ontology. Oxford: Clarendon Press.
[30] Tarski, A. (1972). Les fondements de la géométrie des corps. In: A. Tarski, Logique, Sémantique, Métamathématique. Paris: Armand Colin, pp. 28-34.
[31] van Benthem, J. (1983). The Logic of Time. Dordrecht: Reidel.
[32] Vandeloise, C. (1986). L'espace en français : sémantique des prépositions spatiales. Paris: Seuil, Travaux en Linguistique.
[33] Vieu, L. (1991). Sémantique des relations spatiales et inférences spatio-temporelles : une contribution à l'étude des structures formelles de l'espace en langage naturel. Thèse de doctorat, IRIT, Université Paul Sabatier, Toulouse.
[34] Whitehead, A.N. (1929). Process and Reality. New York: Macmillan.
[35] Winston, M., R. Chaffin & D. Herrmann (1987). A taxonomy of part-whole relations. Cognitive Science 11, pp. 417-444.
| [] |
[
"TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD with Two-Stage Generic-to Individual-Language Finetuning",
"TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD with Two-Stage Generic-to Individual-Language Finetuning"
] | [
"Tianze Shi tianze@cs.cornell.edu \nCornell University\nCornell University\n\n",
"Lillian Lee llee@cs.cornell.edu \nCornell University\nCornell University\n\n"
] | [
"Cornell University\nCornell University\n",
"Cornell University\nCornell University\n"
] | [
"Proceedings of the 17th International Conference on Parsing Technologies (IWPT 2021)"
] | We present our contribution to the IWPT 2021 shared task on parsing into enhanced Universal Dependencies. Our main system component is a hybrid tree-graph parser that integrates (a) predictions of spanning trees for the enhanced graphs with (b) additional graph edges not present in the spanning trees. We also adopt a finetuning strategy where we first train a language-generic parser on the concatenation of data from all available languages, and then, in a second step, finetune on each individual language separately. Additionally, we develop our own complete set of pre-processing modules relevant to the shared task, including tokenization, sentence segmentation, and multiword token expansion, based on pre-trained XLM-R models and our own pre-training of character-level language models. Our submission reaches a macro-average ELAS of 89.24 on the test set. It ranks top among all teams, with a margin of more than 2 absolute ELAS over the next best-performing submission, and best score on 16 out of 17 languages. | 10.18653/v1/2021.iwpt-1.23 | [
"https://www.aclanthology.org/2021.iwpt-1.23.pdf"
] | 235,899,306 | 2107.06907 | 22cf8060d3bea919f5e70ac110636b42cc9ac282 |
TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD with Two-Stage Generic-to Individual-Language Finetuning
August 6, 2021
Tianze Shi tianze@cs.cornell.edu
Cornell University
Cornell University
Lillian Lee llee@cs.cornell.edu
Cornell University
Cornell University
TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD with Two-Stage Generic-to Individual-Language Finetuning
Proceedings of the 17th International Conference on Parsing Technologies (IWPT 2021)
the 17th International Conference on Parsing Technologies (IWPT 2021), Bangkok, Thailand, August 6, 2021
We present our contribution to the IWPT 2021 shared task on parsing into enhanced Universal Dependencies. Our main system component is a hybrid tree-graph parser that integrates (a) predictions of spanning trees for the enhanced graphs with (b) additional graph edges not present in the spanning trees. We also adopt a finetuning strategy where we first train a language-generic parser on the concatenation of data from all available languages, and then, in a second step, finetune on each individual language separately. Additionally, we develop our own complete set of pre-processing modules relevant to the shared task, including tokenization, sentence segmentation, and multiword token expansion, based on pre-trained XLM-R models and our own pre-training of character-level language models. Our submission reaches a macro-average ELAS of 89.24 on the test set. It ranks top among all teams, with a margin of more than 2 absolute ELAS over the next best-performing submission, and best score on 16 out of 17 languages.
Introduction
The Universal Dependencies (UD; Nivre et al., 2016, 2020) initiative aims to provide cross-linguistically consistent annotations for dependency-based syntactic analysis, and includes a large collection of treebanks (202 for 114 languages in UD 2.8). Progress on the UD parsing problem has been steady (Zeman et al., 2017), but existing approaches mostly focus on parsing into basic UD trees, where bilexical dependency relations among surface words must form single-rooted trees. While these trees indeed contain rich syntactic information, the adherence to tree representations can be insufficient for certain constructions including coordination, gapping, relative clauses, and argument sharing through control and raising (Schuster and Manning, 2016). The IWPT 2020 (Bouma et al., 2020) and 2021 (Bouma et al., 2021) shared tasks focus on parsing into enhanced UD format, where the representation is connected graphs, rather than rooted trees. The extension from trees to graphs allows direct treatment of a wider range of syntactic phenomena, but it also poses a research challenge: how to design parsers suitable for such enhanced UD graphs.
To address this setting, we propose to use a tree-graph hybrid parser leveraging the following key observation: since an enhanced UD graph must be connected, it must contain a spanning tree as a subgraph. These spanning trees may differ from basic UD trees, but still allow us to use existing techniques developed for dependency parsing, including applying algorithms for finding maximum spanning trees to serve as accurate global decoders. Any additional dependency relations in the enhanced graphs not appearing in the spanning trees are then predicted on a per-edge basis. We find that this tree-graph hybrid approach results in more accurate predictions compared to a dependency graph parser that is combined with post-processing steps to fix any graph connectivity issues.
Besides the enhanced graphs, the shared task setting poses two additional challenges. Firstly, the evaluation is on 17 languages from 4 language families, and not all the languages have large collections of annotated data: the lowest-resource language, Tamil, contains merely 400 training sentences, more than two orders of magnitude fewer than what is available for Czech. To facilitate knowledge sharing between high-resource and low-resource languages, we develop a two-stage finetuning strategy: we first train a language-generic model on the concatenation of all available training treebanks from all languages provided by the shared task, and then finetune on each language individually. Secondly, the shared task demands parsing from raw text. This requires accurate text processing pipelines including modules for tokenization, sentence splitting, and multi-word token expansion, in addition to enhanced UD parsing. We build our own models for all these components; notably, we pre-train character-level masked language models on Wikipedia data, leading to improvements on tokenization, the first component in the text processing pipeline. Our multi-word token expanders combine the strengths of pre-trained learning-based models and rule-based approaches, and achieve robust results, especially on low-resource languages.
Our system submission integrates the aforementioned solutions to the three main challenges given by the shared task, and ranks top among all submissions, with a macro-average EULAS of 90.16 and ELAS of 89.24. Our system gives the best evaluation scores on all languages except for Arabic, and has large margins (more than 5 absolute ELAS) over the second-best systems on Tamil and Lithuanian, which are among languages with the smallest training treebanks.
TGIF: Tree-Graph Integrated-Format Parser for Enhanced UD
Tree and Graph Representations for Enhanced UD
The basic syntactic layer in UD is a single-rooted labeled dependency tree for each sentence, whereas the enhanced UD layer only requires that the set of dependency edges for each sentence form a connected graph. In these connected graphs, each word may have multiple parents, there may be multiple roots for a sentence, and the graphs may contain cycles, but there must exist one path from at least one of the roots to each node.1 Accompanying the increase in expressiveness of the enhanced UD representation is the challenge of producing structures that correctly satisfy graph-connectivity constraints during model inference. We summarize the existing solutions proposed for the previous run of the shared task at IWPT 2020 (Bouma et al., 2020) into four main categories:
• Tree-based: since the overlap between the enhanced UD graphs and the basic UD trees is typically significant, and any deviations tend to be localized and tied to certain syntactic constructions (e.g., argument sharing in a control structure), one can repurpose tree-based parsers for producing enhanced UD graphs. This category of approaches includes packing the additional edges from an enhanced graph into the basic tree (Kanerva et al., 2020) and using either rule-based or learning-based approaches to convert a basic UD tree into an enhanced UD graph (Heinecke, 2020; Dehouck et al., 2020; Attardi et al., 2020; Ek and Bernardy, 2020).2
1 Enhanced UD graphs additionally allow insertion of phonologically-empty nodes to recover elided elements in gapping constructions. This is currently beyond the scope of our system and we use pre- and post-processing collapsing steps to handle empty nodes (§5).
• Tree-Graph Integrated: He and Choi (2020) integrate a tree parser and a graph parser, 4 where the tree parser produces the basic UD tree, and the graph parser predicts any additional edges. During inference, all nodes are automatically connected through the tree parser, and the graph parser allows flexibility in producing graph structures. 5 The tree-based approaches are prone to error propagation, since the predictions of the enhanced layer rely heavily on the accuracy of basic UD tree parsing. The graph-based and transition-based approaches natively produce graph structures, but they require post-processing to ensure connectivity. Our system is a tree-graph integrated-format parser that combines the strengths of the available global inference algorithms for tree parsing and the flexibility of a graph parser, without the need to use post-processing to fix connectivity issues. 2 The same idea has also been applied to the task of conjunction propagation prediction (e.g., Grünewald et al., 2021).
3 Barry et al.'s (2020) parsers use basic UD trees as features, but the output space is not restricted by the basic trees. 4 He and Choi (2020) describe their combo as an "ensemble" but we prefer the term "integration" for both their method and ours (which is inspired by theirs), since the two components are not, strictly speaking, targeting same structures. 5 The main difference from the tree-based approaches is that the search space for additional graph edges is unaffected by the predictions of basic UD trees in an integrated approach. Figure 1: An example with basic UD and enhanced UD annotations above and below the text respectively. The extracted spanning tree ( §2.2) is bolded and is different from the basic UD tree.
Spanning Tree Extraction
A connected graph must contain a spanning tree, and conversely, if we first predict a spanning tree over all nodes, and subsequently add additional edges, then the resulting graph remains connected. Indeed, this property is leveraged in some previously-proposed connectivity post-processing steps (e.g., Wang et al., 2020), but extracting a spanning tree based on scores from graph-prediction models creates a mismatch between training and inference. He and Choi (2020) instead train tree parsers and graph parsers separately and combine their prediction during inference, but their tree parsers are trained on basic UD trees whose edges are not always present in the enhanced UD layer.
Our solution refines He and Choi's (2020) approach: we train tree parsers to predict spanning trees extracted from the enhanced UD graphs, instead of basic UD trees, to minimize train-test mismatch. See Figure 1 for an example. Spanning tree extraction is in essence assignment of unique head nodes to all nodes in a graph, subject to tree constraints. For consistent extraction, we apply the following rules:
• If a node has a unique head in the enhanced graph, there is no ambiguity in head assignment.
• If a basic UD edge is present among the set of incoming edges to a given node, include that basic UD edge in the spanning tree.
• Otherwise, there must be multiple incoming edges, none of which are present in the basic UD tree. We pick the parent node that is the "highest", i.e., the closest to the root node, in the basic tree.
The above head assignment steps do not formally guarantee that the extracted structures will be trees, but empirically, we observe that the extraction results are indeed trees for all training sentences.6
6 Dear Reviewer 1: your question here in the submitted paper caused us to uncover a bug! Fixing it rectified the 4 training sentences that weren't originally getting trees.
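The three head-assignment rules lend themselves to a compact implementation. Below is a minimal Python sketch of the extraction step under assumed data structures (parent sets per node, a basic-tree head map, and basic-tree depths); it is our own illustration, not the authors' released code.

```python
# Sketch of spanning-tree extraction from an enhanced UD graph.
# `enhanced[j]` : set of head indices of node j in the enhanced graph
# `basic_head[j]` : head of j in the basic UD tree
# `depth[i]` : distance of node i from the root in the basic tree

def extract_spanning_heads(enhanced, basic_head, depth):
    heads = {}
    for j, parents in enhanced.items():
        if len(parents) == 1:               # rule 1: unique head, no ambiguity
            heads[j] = next(iter(parents))
        elif basic_head[j] in parents:      # rule 2: prefer the basic UD edge
            heads[j] = basic_head[j]
        else:                               # rule 3: pick the "highest" parent,
            heads[j] = min(parents, key=lambda i: depth[i])  # closest to root
    return heads
```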
Parameterization
Our parser architecture is adapted from that of Dozat and Manning (2017, 2018), which forms the basis for the prior graph-based approaches in the IWPT 2020 shared task. We predict unlabeled edges and labels separately, and for the unlabeled edges, we use a combination of a tree parser and a graph-edge prediction module.
Representation The first step is to extract contextual representations. For this purpose, we use the pre-trained XLM-R model (Conneau et al., 2020), which is trained on multilingual CommonCrawl data and supports all 17 languages in the shared task. The XLM-R feature extractor is finetuned along with model training. Given a length-n input sentence x = x 1 , . . . , x n and layer l, we extract
$[x_0^l, x_1^l, \ldots, x_n^l] = \text{XLM-R}^l(\texttt{<s>}, x_1, \ldots, x_n, \texttt{</s>})$,
where the inputs to the XLM-R model are a concatenated sequence of word pieces from each UD word; we denote the layer-$l$ vector corresponding to the last word piece in the word $x_i$ as $x_i^l$, and the dummy root representations $x_0^l$ are taken from the special <s> token at the beginning of the sequence.
Deep Biaffine Function All our parsing components use deep biaffine functions (DBFs), which score the interactions between pairs of words:
$\mathrm{DBF}(i, j) = {v_i^{\mathrm{head}}}^{\top} U\, v_j^{\mathrm{mod}} + b^{\mathrm{head}} \cdot v_i^{\mathrm{head}} + b^{\mathrm{mod}} \cdot v_j^{\mathrm{mod}} + b$,
where $v_i^{\mathrm{head}}$ and $v_j^{\mathrm{mod}}$ are non-linearly transformed vectors obtained from weighted averages of the XLM-R vectors across different layers:
$v_i^{\mathrm{head}} = \mathrm{ReLU}\Big(W^{\mathrm{head}} \sum_l \frac{e^{\alpha_l^{\mathrm{head}}}}{\sum_{l'} e^{\alpha_{l'}^{\mathrm{head}}}}\, x_i^l\Big)$,
and $v_j^{\mathrm{mod}}$ is defined similarly. Each DBF has its own trainable weight matrices $U$, $W^{\mathrm{head}}$, and $W^{\mathrm{mod}}$, vectors $b^{\mathrm{head}}$ and $b^{\mathrm{mod}}$, and scalars $b$, $\{\alpha_l^{\mathrm{head}}\}$ and $\{\alpha_l^{\mathrm{mod}}\}$.
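The equations above translate directly into a small module. The following is a rough PyTorch-style sketch of our reading of the deep biaffine function (not the authors' released code); shapes, initialization, and names are our assumptions.

```python
import torch
import torch.nn as nn

class DeepBiaffine(nn.Module):
    def __init__(self, xlm_dim, hidden_dim, num_layers):
        super().__init__()
        # per-layer mixing weights, softmax-normalized in forward()
        self.alpha_head = nn.Parameter(torch.zeros(num_layers))
        self.alpha_mod = nn.Parameter(torch.zeros(num_layers))
        self.w_head = nn.Linear(xlm_dim, hidden_dim)
        self.w_mod = nn.Linear(xlm_dim, hidden_dim)
        # zeros only for brevity; real training would use proper initialization
        self.U = nn.Parameter(torch.zeros(hidden_dim, hidden_dim))
        self.b_head = nn.Parameter(torch.zeros(hidden_dim))
        self.b_mod = nn.Parameter(torch.zeros(hidden_dim))
        self.b = nn.Parameter(torch.zeros(()))

    def forward(self, layers):
        # layers: (num_layers, n, xlm_dim) stack of XLM-R hidden states
        wh = torch.softmax(self.alpha_head, dim=0)
        wm = torch.softmax(self.alpha_mod, dim=0)
        v_head = torch.relu(self.w_head(torch.einsum("l,lnd->nd", wh, layers)))
        v_mod = torch.relu(self.w_mod(torch.einsum("l,lnd->nd", wm, layers)))
        # scores[i, j] = v_head[i]^T U v_mod[j] + b_head.v_head[i]
        #                + b_mod.v_mod[j] + b
        return (v_head @ self.U @ v_mod.T
                + (v_head @ self.b_head).unsqueeze(1)
                + (v_mod @ self.b_mod).unsqueeze(0)
                + self.b)
```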
Tree Parser To estimate the probabilities of head attachment for each token w j , we define
$P(\mathrm{head}(w_j) = w_i) = \mathrm{softmax}_i(\mathrm{DBF}^{\mathrm{tree}}(i, j))$.
The tree parsing models are trained with cross-entropy loss, and we use a non-projective maximum spanning tree algorithm (Chu and Liu, 1965; Edmonds, 1967) for global inference.
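As one concrete way to realize this global decoding step, the sketch below uses networkx's off-the-shelf implementation of the Chu-Liu/Edmonds algorithm; this pairing of tools is our illustration, not the paper's implementation, and `scores` stands in for the head-attachment log-probabilities from the tree DBF.

```python
# Sketch: non-projective MST decoding via Chu-Liu/Edmonds.
import networkx as nx

def mst_decode(scores, n):
    """scores[i][j]: score of head i -> dependent j; node 0 is the dummy root."""
    g = nx.DiGraph()
    for j in range(1, n + 1):          # the root (node 0) receives no incoming edge
        for i in range(n + 1):
            if i != j:
                g.add_edge(i, j, weight=scores[i][j])
    tree = nx.maximum_spanning_arborescence(g)
    return {j: i for i, j in tree.edges}  # dependent -> head map
```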
Table 1: Dev-set ELAS (%) results, comparing graph parsers with connectivity-fixing postprocessing against tree-graph integrated models (§2) and comparing parsers trained directly on each language, generic-language parsers, and parsers finetuned on individual languages from the generic-language checkpoint (§3).
Graph Parser In addition to the spanning trees, we make independent predictions on the existence of any extra edges in the enhanced UD graphs by
$P(\exists\, \mathrm{edge}\ w_i \rightarrow w_j) = \mathrm{sigmoid}(\mathrm{DBF}^{\mathrm{graph}}(i, j))$.
We train the graph parsing model with a cross-entropy objective, and during inference, any edges with probabilities ≥ 0.5 are included in the outputs.
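Putting the two unlabeled-edge components together, the final graph is simply the union of the spanning tree and the thresholded extra edges; the tiny sketch below (our illustration) makes the integration step explicit.

```python
# Sketch: assembling the final unlabeled enhanced graph.
def assemble_graph(tree_heads, edge_probs, threshold=0.5):
    """tree_heads: dependent -> head map from MST decoding;
    edge_probs: {(i, j): P(edge i -> j)} from the graph parser."""
    edges = {(h, d) for d, h in tree_heads.items()}              # spanning tree
    edges |= {(i, j) for (i, j), p in edge_probs.items() if p >= threshold}
    return edges  # connected by construction: it contains a spanning tree
```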
Relation Labeler For each edge in the unlabeled graph, we predict the relation label via
$P(\mathrm{lbl}(w_i \rightarrow w_j) = r) = \mathrm{softmax}_r(\mathrm{DBF}^{\mathrm{rel}\text{-}r}(i, j))$,
where we have as many deep biaffine functions as the number of candidate relation labels in the data.
To reduce the large number of potential labels due to lexicalization, the relation labeler operates on a de-lexicalized version of the labels, and then a re-lexicalization step expands the predicted labels into their full forms ( §5).
Training The above three components are separately parameterized, and during training, we optimize for the sum of their corresponding cross-entropy loss functions.
Empirical Comparisons
In Table 1, we compare our tree-graph integrated-format parser with a fully graph-based approach.
The graph-based baseline uses the same feature extractor, graph parser, and relation labeler modules, but it omits the tree parser for producing spanning trees, and we apply post-processing steps to ensure connectivity of the output graphs. Our tree-graph integrated-format parser outperforms the graph-based baseline on 12 out of the 17 test languages (binomial test, p = 0.07).
Pre-TGIF: Pre-Training Grants Improvements Full-Stack
Inspired by the recent success of pre-trained language models on a wide range of NLP tasks (Peters et al., 2018; Devlin et al., 2019; Conneau et al., 2020, inter alia), we build our own text processing pipeline based on pre-trained language models. Due to limited time and resources, we only focus on components relevant to the shared task, which include tokenization, sentence splitting, and multiword token (MWT) expansion.
Tokenizers with Character-Level Masked Language Model Pre-Training
We follow state-of-the-art strategies (Qi et al., 2020; Nguyen et al., 2021) for tokenization and model the task as a tagging problem on sequences of characters. But in contrast to prior methods where tokenization and sentence segmentation are bundled into the same prediction stage, we tackle tokenization in isolation, and for each character, we make a binary prediction as to whether a token ends at the current character position or not. An innovation in our tokenization is that we finetune character-based language models trained on Wikipedia data. In contrast, existing approaches typically use randomly-initialized models (Qi et al., 2020) or use pre-trained models on subword units instead of characters (Nguyen et al., 2021).
We follow Devlin et al. (2019) and pre-train our character-level sequence models using a masked language modeling objective: during training, we randomly replace 15% of the characters with a special mask symbol and the models are trained to predict the identity of those characters in the original texts. Due to computational resource constraints, we adopt a small-sized architecture based on simple recurrent units (Lei et al., 2018).7 We pre-train our models on Wikipedia data8 and each model takes roughly 2 days to complete 500k optimization steps on a single GTX 2080Ti GPU.
7 Simple recurrent units are a fast variant of recurrent neural networks. In our preliminary experiments, they result in lower accuracies than long short-term memory networks (LSTMs), but are 2-5 times faster, depending on sequence lengths.
8 We extract Wikipedia texts using WikiExtractor (Attardi, 2015) from Wikipedia dumps dated 2021-04-01.
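The data side of this objective reduces to a simple corruption routine. The following is a minimal, framework-free Python sketch of the 15% character-masking step under our own encoding assumptions (integer character ids and a conventional -100 "ignore" label); the actual model and training loop are omitted.

```python
import random

def mask_characters(chars, mask_id, rate=0.15):
    """Replace ~15% of character ids with `mask_id`; return inputs and targets."""
    inputs, targets = [], []
    for c in chars:
        if random.random() < rate:
            inputs.append(mask_id)
            targets.append(c)       # the model must recover the original char
        else:
            inputs.append(c)
            targets.append(-100)    # conventional "ignore this position" label
    return inputs, targets
```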
Sentence Splitters
We split texts into sentences from sequences of tokens instead of characters (Qi et al., 2020). Our approach resembles that of Nguyen et al. (2021).9 This allows our models to condense information from a wider range of contexts while still reading the same number of input symbols. The sentence splitters are trained to make binary predictions at each token position on whether a sentence ends there. We adopt the same two-stage finetuning strategy as for our parsing modules based on pre-trained XLM-R feature extractors (§3).
9 An important difference is that our sentence splitters are aware of token boundaries and the models are restricted from making token-internal sentence splitting decisions.
Multi-Word Token (MWT) Expanders
The UD annotations distinguish between tokens and words. A word corresponds to a consecutive sequence of characters in the surface raw text and may contain one or more syntactically-functioning words. We break down the MWT expansion task into first deciding whether or not to expand a given token and then performing the actual expansion. For the former, we train models to make a binary prediction on each token, and we use pre-trained XLM-R models as our feature extractors.
For the MWT expansion step once the tokens are identified through our classifiers, we use a combination of lexicon-and rule-based approaches. If the token form is seen in the training data, we adopt the most frequently used way to split it into multiple words. Otherwise, we invoke a set of language-specific handwritten rules developed from and tuned on the training data; a typical rule iteratively splits off an identified prefix or suffix from the remainder of the token.
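A minimal Python sketch of this lexicon-first, rules-as-fallback design is given below; the lexicon contents and the French-style clitic prefixes shown are invented examples for illustration, not the shared-task rule sets.

```python
# Sketch: expand a multi-word token into words, preferring the lexicon.
def expand_mwt(token, lexicon, prefix_rules=("d'", "l'", "qu'")):
    if token.lower() in lexicon:            # seen in training: most frequent split
        return lexicon[token.lower()]
    words, rest = [], token
    changed = True
    while changed:                          # iteratively split off known prefixes
        changed = False
        for p in prefix_rules:
            if rest.lower().startswith(p) and len(rest) > len(p):
                words.append(rest[:len(p)])
                rest = rest[len(p):]
                changed = True
                break
    return words + [rest]

lexicon = {"du": ["de", "le"]}              # e.g. learned from the training data
print(expand_mwt("du", lexicon))            # -> ['de', 'le']
print(expand_mwt("d'avoir", lexicon))       # -> ["d'", 'avoir']
```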
Lemmatizers
While the shared task requires lemmatized forms for constructing the lexicalized enhanced UD labels, we only need to predict lemmas for a small percentage of words. Empirically, these words tend to be function words and have a unique lemma per word type. Thus, we use a full lexicon-based approach to (incomplete) lemmatization. Whenever a lemma is needed during the label re-lexicalization step, we look the word up in a dictionary extracted from the training data.
Evaluation
We compare our text-processing pipeline components with two state-of-the-art toolkits, Stanza (Qi et al., 2020) and Trankit (Nguyen et al., 2021) in Table 2. We train our models per-language instead of per-treebank to accommodate the shared task setting, so our models are at a disadvantage when there are multiple training treebanks for a language that have different tokenization/sentence splitting conventions (e.g., English-EWT and English-GUM handle word contractions differently). Despite this, our models are highly competitive in terms of tokenization and MWT expansion, and we achieve significantly better sentence segmentation results across most treebanks. We hypothesize that a sequence-to-sequence MWT expansion approach, similar to the ones underlying Stanza and Trankit, may provide further gains to morphologically-rich languages that cannot be sufficiently modeled via handwritten rules, notably Arabic.
Other Technical Notes
Hyperparameters We report our hyperparameters in the Appendix.
Empty nodes Enhanced UD graphs may contain empty nodes in addition to the words in the surface form. Our parser does not support empty nodes, so we follow the official evaluation practice and collapse relation paths with empty nodes into composite relations during training and inference.
Multiple relations In some cases, there can be multiple relations between the same pair of words. We follow Wang et al. (2020) and merge all these relations into a composite label, and re-expand them during inference.
De-lexicalization and re-lexicalization Certain types of relation labels include lexicalized information, resulting in a large relation label set. For example, nmod:in contains a lemma "in" that is taken from the modifier with a case relation. To combat this, we follow Grünewald and Friedrich's (2020) strategy and replace the lemmas10 with placeholders consisting of their corresponding relation labels. The previous example would result in a de-lexicalized label of nmod:[case]. During inference, we apply a re-lexicalization step to reconstruct the original full relation labels given our predicted graphs. We discard the lexicalized portions of the relation labels when errors occur either in de-lexicalization (unable to locate the source child labels to match the lemmas) or re-lexicalization (unable to find corresponding placeholder relations).
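The sketch below illustrates the de-lexicalization direction of this step under simplifying assumptions of ours: it treats any part after ":" as a lemma (real enhanced UD labels can also carry non-lexical subtypes), and it implements the "discard on failure" fallback.

```python
# Sketch of de-lexicalization: nmod:in -> nmod:[case].
def delexicalize(label, child_edges):
    """child_edges: (relation, lemma) pairs for the dependent's children."""
    base, _, lemma = label.partition(":")
    if not lemma:
        return label                         # label carries no lexical part
    for rel, child_lemma in child_edges:
        if child_lemma == lemma:
            return f"{base}:[{rel}]"         # lemma replaced by its source relation
    return base                              # no matching child: drop the lexical part

print(delexicalize("nmod:in", [("case", "in")]))  # -> "nmod:[case]"
```

Re-lexicalization would run the same lookup in reverse on the predicted graph, substituting the lemma of the child bearing the placeholder relation back into the label.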
Sequence length limit Pre-trained language models typically have a limit on their input sequence lengths. The XLM-R model has a limit of 512 word pieces. For the small number of sentences longer than that, we discard word-internal word pieces, i.e., keep a prefix and a suffix of the word pieces of the longest words, to fit within the limit.
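One possible realization of this truncation heuristic is sketched below; the per-word piece budget (`keep`) and the greedy longest-word-first order are our assumptions.

```python
# Sketch: shrink over-long inputs by dropping word-internal pieces.
def truncate_word_pieces(words, limit=512, keep=2):
    """words: list of word-piece lists, mutated in place; shrink longest first."""
    while sum(map(len, words)) + 2 > limit:   # +2 for the <s> and </s> tokens
        longest = max(words, key=len)
        if len(longest) <= 2 * keep:
            break                             # nothing left to drop safely
        # keep a prefix and a suffix of pieces, drop the middle
        del longest[keep:len(longest) - keep]
    return words
```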
Multiple Treebanks Per Language Each language in the shared task can have one or more treebanks for training and/or testing. During evaluation, there is no explicit information regarding the source treebank of the piece of input text. Instead of handpicking a training treebank for each language, we simply train and validate on the concatenation of all available data for each language.
Training on a single GPU The XLM-R model has a large number of parameters, which makes it challenging to finetune on a single GPU. We use a batch size of 1 and accumulate gradients across multiple batches to lower the usage of GPU RAM. When this strategy alone is insufficient, e.g., when training the language-generic model, we additionally freeze the initial embedding layer of the model.
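Gradient accumulation with batch size 1 follows a standard PyTorch pattern; the sketch below is our illustration of that pattern, with `model`, `optimizer`, and `loader` as placeholder arguments rather than the authors' training script.

```python
# Sketch: batch size 1 with gradient accumulation.
def train_with_accumulation(model, optimizer, loader, accum_steps=16):
    optimizer.zero_grad()
    for step, batch in enumerate(loader):
        loss = model(batch) / accum_steps   # scale so accumulated grads average
        loss.backward()                     # gradients add up across mini-batches
        if (step + 1) % accum_steps == 0:
            optimizer.step()                # one parameter update per accum_steps
            optimizer.zero_grad()
```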
Official Evaluation
The shared task performs evaluation on UD treebanks that have enhanced UD annotations across 17 languages: Arabic (Hajič et al., 2009), Bulgarian (Simov et al., 2004), Czech (Hladká et al., 2010; Bejček et al., 2013; Jelínek, 2017), Dutch (van der Beek et al., 2002; Bouma and van Noord, 2017), English (Silveira et al., 2014; Zeldes, 2017), Estonian (Muischnek et al., 2014, 2019), Finnish (Haverinen et al., 2014; Pyysalo et al., 2015), French (Candito et al., 2014; Seddah and Candito, 2016), Italian (Bosco et al., 2013), Latvian (Pretkalniņa et al., 2018), Lithuanian (Bielinskienė et al., 2016), Polish (Patejuk and Przepiórkowski, 2018; Wróblewska, 2018), Russian (Droganova et al., 2018), Slovak (Zeman, 2018), Swedish (Nivre and Megyesi, 2007), Tamil (Ramasamy and Žabokrtský, 2012), Ukrainian (Kotsyba et al., 2016), and multilingual parallel treebanks (Zeman et al., 2017).
Table 3 shows the official ELAS evaluation results of all 9 participating systems in the shared task.11 Our system has the top performance on 16 out of 17 languages, and it is also the best in terms of macro-average across all languages. On average, we outperform the second best system by a margin of more than 2 ELAS points in absolute terms, or more than 15% in relative error reduction. Figure 2 visualizes the "delta ELAS" between our submission and the best result other than ours on a per-language basis, plotted against the training data size for each language. Our system sees larger improvements on lower-resource languages, where we have more than 5-point leads on Tamil and Lithuanian, two languages among those with the smallest training treebanks.
11 Reproduced from https://universaldependencies.org/iwpt21/results.html.
Closing Remarks
Our submission to the IWPT 2021 shared task combines three main techniques: (1) tree-graph integrated-format parsing (graph → spanning tree → additional edges), (2) two-stage generic-to-individual-language finetuning, and (3) pre-processing pipelines powered by language model pre-training. Each of the above contributes to our system performance positively,12 and by combining all three techniques, our system achieves the best ELAS results on 16 out of 17 languages, as well as the top macro-average across all languages, among all system submissions. Additionally, our system shows more relative strengths on lower-resource languages.
Due to time and resource constraints, our system adopts the same set of techniques across all languages and we train a single set of models for our primary submission. We leave it to future work to explore language-specific methods and/or model combination and ensemble techniques to further enhance model accuracies.
Figure 2: The per-language delta ELAS between our submission and the best performing system other than ours, as a function of (the log of the) number of training sentences. (For Italian, the difference is quite small but still positive.) Our models achieve larger improvements on lower-resource languages.
Table 2: Test-set F1 scores for tokenization, sentence segmentation, and MWT expansion, comparing Stanza (Qi et al., 2020), Trankit (Nguyen et al., 2021), and our system submission. Our system results are from the shared task official evaluations; Stanza and Trankit results are reported in the Trankit documentation with models trained on UD 2.5. Caveat: the results may not be strictly comparable due to treebank version mismatch.
Table 3: Official ELAS (%) evaluation results. Our submission ranks first on 16 out of the 17 languages.
TGIF: Two-Stage Generic-to-Individual-Language Finetuning
In addition to the tree-graph integration approach, our system submission also features a two-stage finetuning strategy. We first train a language-generic model on the concatenation of all available training treebanks in the shared task data, regardless of their source languages, and then finetune on each individual language in a second step. This two-stage finetuning strategy is designed to encourage knowledge sharing across different languages, especially from high-resource languages to lower-resource ones. In our experiment results, as reported in Table 1, we find that this strategy is indeed beneficial for the majority of languages, especially those with small training corpora (e.g., 2.13 and 1.01 absolute ELAS improvements on Tamil and French respectively), though this comes at the price of slightly decreased accuracies on high-resource languages (e.g., -0.02 on Estonian and -0.03 on Russian). Additionally, we find that the language-generic model achieves reasonably competitive performance when compared with the set of models trained directly on each individual language. This suggests that practitioners may opt to use a single model for parsing all languages if there is a need to lower disk and memory footprints, without much loss in accuracy.
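The schedule itself is simple. The following minimal sketch shows the idea only; `build_model`, `load_treebank`, and `train` are hypothetical placeholders rather than the authors' code, and the language codes are illustrative stand-ins for the 17 shared-task languages.

```python
from copy import deepcopy

# Illustrative placeholders for the 17 shared-task languages.
LANGUAGES = ["ar", "bg", "cs", "en", "et", "fi", "fr", "it", "lt",
             "lv", "nl", "pl", "ru", "sk", "sv", "ta", "uk"]

def two_stage_finetune(build_model, load_treebank, train):
    # Stage 1: one language-generic model on the concatenation of all
    # available training treebanks, regardless of source language.
    generic = build_model()
    pooled = [ex for lang in LANGUAGES for ex in load_treebank(lang)]
    train(generic, pooled)

    # Stage 2: finetune a per-language copy of the generic checkpoint
    # on each individual language's treebank.
    per_language = {}
    for lang in LANGUAGES:
        model = deepcopy(generic)   # start from the generic checkpoint
        train(model, load_treebank(lang))
        per_language[lang] = model
    return generic, per_language
```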
We find that using lemmas instead of word forms significantly improves coverage of the lexicalized labels.
Comparing the three components, multilingual pretraining has a greater effect than the tree-graph parsing design. Sentence segmentation performance does not necessarily translate to ELAS, so the large relative improvement of our sentence segmenter does not imply that sentence segmentation is the biggest contributor to our system.
Acknowledgements
We thank the anonymous reviewers for their constructive and detailed comments, and the task organizers for their flexibility regarding page limits. This work was supported in part by a Bloomberg Data Science Ph.D. Fellowship to Tianze Shi and a gift from Bloomberg to Lillian Lee.
References
Giuseppe Attardi. 2015. WikiExtractor. https://github.com/attardi/wikiextractor.
Giuseppe Attardi, Daniele Sartiano, and Maria Simi. 2020. Linear neural parsing and hybrid enhancement for enhanced Universal Dependencies. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 206-214, Online. Association for Computational Linguistics.
James Barry, Joachim Wagner, and Jennifer Foster. 2020. The ADAPT enhanced dependency parser at the IWPT 2020 shared task. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 227-235, Online. Association for Computational Linguistics.
Eduard Bejček, Eva Hajičová, Jan Hajič, Pavlína Jínová, Václava Kettnerová, Veronika Kolářová, Marie Mikulová, Jiří Mírovský, Anna Nedoluzhko, Jarmila Panevová, Lucie Poláková, Magda Ševčíková, Jan Štěpánek, and Šárka Zikánová. 2013. Prague dependency treebank 3.0.
Agnė Bielinskienė, Loïc Boizou, Jolanta Kovalevskaitė, and Erika Rimkutė. 2016. Lithuanian dependency treebank ALKSNIS. Human Language Technologies - The Baltic Perspective, pages 107-114.
Cristina Bosco, Simonetta Montemagni, and Maria Simi. 2013. Converting Italian treebanks: Towards an Italian Stanford dependency treebank. In Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse, pages 61-69, Sofia, Bulgaria. Association for Computational Linguistics.
Gosse Bouma, Djamé Seddah, and Daniel Zeman. 2020. Overview of the IWPT 2020 shared task on parsing into enhanced Universal Dependencies. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 151-161, Online. Association for Computational Linguistics.
Gosse Bouma, Djamé Seddah, and Daniel Zeman. 2021. From raw text to enhanced Universal Dependencies: The parsing shared task at IWPT 2021. In Proceedings of the 17th International Conference on Parsing Technologies (IWPT 2021), pages 146-157, Online. Association for Computational Linguistics.
Gosse Bouma and Gertjan van Noord. 2017. Increasing return on annotation investment: The automatic construction of a Universal Dependency treebank for Dutch. In Proceedings of the NoDaLiDa 2017 Workshop on Universal Dependencies (UDW 2017), pages 19-26, Gothenburg, Sweden. Association for Computational Linguistics.
Marie Candito, Guy Perrier, Bruno Guillaume, Corentin Ribeyre, Karën Fort, Djamé Seddah, and Éric de la Clergerie. 2014. Deep syntax annotation of the Sequoia French treebank. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 2298-2305, Reykjavik, Iceland. European Language Resources Association (ELRA).
Yoeng-Jin Chu and Tseng-Hong Liu. 1965. On the shortest arborescence of a directed graph. Science Sinica, 14:1396-1400.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Mathieu Dehouck, Mark Anderson, and Carlos Gómez-Rodríguez. 2020. Efficient EUD parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 192-205, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France. OpenReview.net.
Timothy Dozat and Christopher D. Manning. 2018. Simpler but more accurate semantic dependency parsing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 484-490, Melbourne, Australia. Association for Computational Linguistics.
Kira Droganova, Olga Lyashevskaya, and Daniel Zeman. 2018. Data conversion and consistency of monolingual corpora: Russian UD treebanks. In Proceedings of the 17th International Workshop on Treebanks and Linguistic Theories (TLT 2018), Oslo, Norway. Linköping University Electronic Press.
Jack Edmonds. 1967. Optimum branchings. Journal of Research of the National Bureau of Standards, 71B(4):233-240.
Adam Ek and Jean-Philippe Bernardy. 2020. How much of enhanced UD is contained in UD? In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 221-226, Online. Association for Computational Linguistics.
Stefan Grünewald and Annemarie Friedrich. 2020. RobertNLP at the IWPT 2020 shared task: Surprisingly simple enhanced UD parsing for English. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 245-252, Online. Association for Computational Linguistics.
Stefan Grünewald, Prisca Piccirilli, and Annemarie Friedrich. 2021. Coordinate constructions in English enhanced Universal Dependencies: Analysis and computational modeling. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 795-809, Online. Association for Computational Linguistics.
Jan Hajič, Otakar Smrž, Petr Zemánek, Petr Pajas, Jan Šnaidauf, Emanuel Beška, Jakub Kracmar, and Kamila Hassanová. 2009. Prague Arabic dependency treebank 1.0.
Katri Haverinen, Jenna Nyblom, Timo Viljanen, Veronika Laippala, Samuel Kohonen, Anna Missilä, Stina Ojala, Tapio Salakoski, and Filip Ginter. 2014. Building the essential resources for Finnish: The Turku Dependency Treebank. Language Resources and Evaluation, 48(3):493-531.
Han He and Jinho D. Choi. 2020. Adaptation of multilingual transformer encoder for robust enhanced universal dependency parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 181-191, Online. Association for Computational Linguistics.
Johannes Heinecke. 2020. Hybrid enhanced Universal Dependencies parsing. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 174-180, Online. Association for Computational Linguistics.
Daniel Hershcovich, Miryam de Lhoneux, Artur Kulmizev, Elham Pejhan, and Joakim Nivre. 2020. Køpsala: Transition-based graph parsing via efficient training and effective encoding. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 236-244, Online. Association for Computational Linguistics.
Barbora Vidová Hladká, Jan Hajič, Jiří Hana, Jaroslava Hlaváčová, Jiří Mírovský, and Jan Raab. 2010. The Czech academic corpus 2.0 guide. The Prague Bulletin of Mathematical Linguistics, 89(2008):41-96.
Tomáš Jelínek. 2017. FicTree: A manually annotated treebank of Czech fiction. In Proceedings of the 17th Conference on Information Technologies - Applications and Theory, pages 181-185, Martinské Hole, Slovakia.
Jenna Kanerva, Filip Ginter, and Sampo Pyysalo. 2020. Turku enhanced parser pipeline: From raw text to enhanced graphs in the IWPT 2020 shared task. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 162-173, Online. Association for Computational Linguistics.
Natalia Kotsyba, Bohdan Moskalevskyi, and Mykhailo Romanenko. 2016. Gold standard Universal Dependencies corpus for Ukrainian. https://github.com/UniversalDependencies/UD_Ukrainian-IU.
Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, and Yoav Artzi. 2018. Simple recurrent units for highly parallelizable recurrence. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4470-4481, Brussels, Belgium. Association for Computational Linguistics.
Liyuan Liu, Haoming Jiang, Pengcheng He, Weizhu Chen, Xiaodong Liu, Jianfeng Gao, and Jiawei Han. 2020. On the variance of the adaptive learning rate and beyond. In Proceedings of the Eighth International Conference on Learning Representations, Online. OpenReview.net.
Kadri Muischnek, Kaili Müürisep, Tiina Puolakainen, Eleri Aedmaa, Riin Kirt, and Dage Särg. 2014. Estonian dependency treebank and its annotation scheme. In Proceedings of the 13th Workshop on Treebanks and Linguistic Theories (TLT13), pages 285-291, Tübingen, Germany.
Kadri Muischnek, Kaili Müürisep, and Dage Särg. 2019. CG roots of UD treebank of Estonian web language. In Proceedings of the NoDaLiDa 2019 Workshop on Constraint Grammar - Methods, Tools and Applications, pages 23-26, Turku, Finland.
Minh Van Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, and Thien Huu Nguyen. 2021. Trankit: A light-weight Transformer-based toolkit for multilingual natural language processing. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations, pages 80-90, Online. Association for Computational Linguistics.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659-1666, Portorož, Slovenia. European Language Resources Association.
Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Jan Hajič, Christopher D. Manning, Sampo Pyysalo, Sebastian Schuster, Francis Tyers, and Daniel Zeman. 2020. Universal Dependencies v2: An evergrowing multilingual treebank collection. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 4034-4043, Marseille, France. European Language Resources Association.
Joakim Nivre and Beata Megyesi. 2007. Bootstrapping a Swedish treebank using cross-corpus harmonization and annotation projection. In Proceedings of the 6th International Workshop on Treebanks and Linguistic Theories, pages 97-102, Bergen, Norway.
Agnieszka Patejuk and Adam Przepiórkowski. 2018. From Lexical Functional Grammar to Enhanced Universal Dependencies: Linguistically Informed Treebanks of Polish. Institute of Computer Science, Polish Academy of Sciences, Warsaw.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Lauma Pretkalniņa, Laura Rituma, and Baiba Saulīte. 2018. Deriving enhanced universal dependencies from a hybrid dependency-constituency treebank. In Proceedings of the 21st International Conference on Text, Speech, and Dialogue, pages 95-105, Brno, Czech Republic. Springer.
Sampo Pyysalo, Jenna Kanerva, Anna Missilä, Veronika Laippala, and Filip Ginter. 2015. Universal Dependencies for Finnish. In Proceedings of the 20th Nordic Conference of Computational Linguistics (NODALIDA 2015), pages 163-172, Vilnius, Lithuania. Linköping University Electronic Press, Sweden.
Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 101-108, Online. Association for Computational Linguistics.
Loganathan Ramasamy and Zdeněk Žabokrtský. 2012. Prague dependency style treebank for Tamil. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 1888-1894, Istanbul, Turkey. European Language Resources Association.
Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English Universal Dependencies: An improved representation for natural language understanding tasks. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), pages 2371-2378, Portorož, Slovenia. European Language Resources Association.
Djamé Seddah and Marie Candito. 2016. Hard time parsing questions: Building a QuestionBank for French. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2366-2370, Portorož, Slovenia. European Language Resources Association (ELRA).
Natalia Silveira, Timothy Dozat, Marie-Catherine de Marneffe, Samuel Bowman, Miriam Connor, John Bauer, and Chris Manning. 2014. A gold standard dependency corpus for English. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC-2014), pages 2897-2904, Reykjavik, Iceland. European Languages Resources Association (ELRA).
Kiril Simov, Petya Osenova, Alexander Simov, and Milen Kouylekov. 2004. Design and implementation of the Bulgarian HPSG-based treebank. Research on Language and Computation, 2(4):495-522.
Leonoor van der Beek, Gosse Bouma, Rob Malouf, and Gertjan van Noord. 2002. The Alpino dependency treebank. In Proceedings of Computational Linguistics in the Netherlands, Twente, Netherlands.
Xinyu Wang, Yong Jiang, and Kewei Tu. 2020. Enhanced universal dependency parsing with second-order inference and mixture of training data. In Proceedings of the 16th International Conference on Parsing Technologies and the IWPT 2020 Shared Task on Parsing into Enhanced Universal Dependencies, pages 215-220, Online. Association for Computational Linguistics.
Alina Wróblewska. 2018. Extended and enhanced Polish dependency bank in Universal Dependencies format. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 173-182, Brussels, Belgium. Association for Computational Linguistics.
Amir Zeldes. 2017. The GUM corpus: Creating multilayer resources in the classroom. Language Resources and Evaluation, 51(3):581-612.
Daniel Zeman. 2018. Slovak dependency treebank in Universal Dependencies. Jazykovedný časopis/Journal of Linguistics, 68(2):385-395.
Daniel Zeman, Jan Hajič, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-21, Brussels, Belgium. Association for Computational Linguistics.
Daniel Zeman, Martin Popel, Milan Straka, Jan Hajič, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinková, Jan Hajič jr., Jaroslava Hlaváčová, Václava Kettnerová, Zdeňka Urešová, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de Paiva, Kira Droganova, Héctor Martínez Alonso, Çagrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung Kwak, Gustavo Mendonça, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. CoNLL 2017 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19, Vancouver, Canada. Association for Computational Linguistics.
| [] |
[
"Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking",
"Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking"
] | [
"Nikita Moghe \nSchool of Informatics\nUniversity of Edinburgh\n\n",
"Mark Steedman steedman@inf.ed.ac.uk \nSchool of Informatics\nUniversity of Edinburgh\n\n",
"Alexandra Birch a.birch@ed.ac.uk \nSchool of Informatics\nUniversity of Edinburgh\n\n"
] | [
"School of Informatics\nUniversity of Edinburgh\n",
"School of Informatics\nUniversity of Edinburgh\n",
"School of Informatics\nUniversity of Edinburgh\n"
] | [
"Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing"
] | Recent progress in task-oriented neural dialogue systems is largely focused on a handful of languages, as annotation of training data is tedious and expensive. Machine translation has been used to make systems multilingual, but this can introduce a pipeline of errors. Another promising solution is using cross-lingual transfer learning through pretrained multilingual models. Existing methods train multilingual models with additional code-mixed task data or refine the cross-lingual representations through parallel ontologies. In this work, we enhance the transfer learning process by intermediate fine-tuning of pretrained multilingual models, where the multilingual models are fine-tuned with different but related data and/or tasks. Specifically, we use parallel and conversational movie subtitles datasets to design cross-lingual intermediate tasks suitable for downstream dialogue tasks. We use only 200K lines of parallel data for intermediate fine-tuning, which is already available for 1782 language pairs. We test our approach on the cross-lingual dialogue state tracking task for the parallel MultiWoZ (English→Chinese, Chinese→English) and Multilingual WoZ (English→German, English→Italian) datasets. We achieve impressive improvements (> 20% on joint goal accuracy) on the parallel MultiWoZ dataset and the Multilingual WoZ dataset over the vanilla baseline with only 10% of the target language task data and zero-shot setup respectively. | 10.18653/v1/2021.emnlp-main.87 | [
"https://www.aclanthology.org/2021.emnlp-main.87.pdf"
] | 238,198,409 | 2109.13620 | 10be978adaef50ecaa715c6efcd556c7aa9b628d |
Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking
Association for Computational Linguistics. November 7-11, 2021.
Nikita Moghe
School of Informatics
University of Edinburgh
Mark Steedman steedman@inf.ed.ac.uk
School of Informatics
University of Edinburgh
Alexandra Birch a.birch@ed.ac.uk
School of Informatics
University of Edinburgh
Cross-lingual Intermediate Fine-tuning improves Dialogue State Tracking
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1137-1150, November 7-11, 2021. Association for Computational Linguistics.
Recent progress in task-oriented neural dialogue systems is largely focused on a handful of languages, as annotation of training data is tedious and expensive. Machine translation has been used to make systems multilingual, but this can introduce a pipeline of errors. Another promising solution is using cross-lingual transfer learning through pretrained multilingual models. Existing methods train multilingual models with additional code-mixed task data or refine the cross-lingual representations through parallel ontologies. In this work, we enhance the transfer learning process by intermediate fine-tuning of pretrained multilingual models, where the multilingual models are fine-tuned with different but related data and/or tasks. Specifically, we use parallel and conversational movie subtitles datasets to design cross-lingual intermediate tasks suitable for downstream dialogue tasks. We use only 200K lines of parallel data for intermediate fine-tuning, which is already available for 1782 language pairs. We test our approach on the cross-lingual dialogue state tracking task for the parallel MultiWoZ (English→Chinese, Chinese→English) and Multilingual WoZ (English→German, English→Italian) datasets. We achieve impressive improvements (> 20% on joint goal accuracy) on the parallel MultiWoZ dataset and the Multilingual WoZ dataset over the vanilla baseline with only 10% of the target language task data and zero-shot setup respectively.
Introduction
In recent years, task-oriented dialogue systems have achieved remarkable success by leveraging huge amounts of labelled data. This technology is thus limited to a handful of languages, as collecting and annotating training dialogue data for different languages is expensive and requires supervision from native speakers.
To avoid having to create large annotated datasets for every new language, recent works focus on transfer learning methods which use neural machine translation systems (Schuster et al., 2019), code-mixed data augmentation (Qin et al., 2020), or large multilingual models (Lin and Chen, 2021). Neural machine translation models incur the additional overhead of training on millions of parallel sentences that may not be available for all language pairs. Code-mixed data augmentation methods involve replacing individual words from the source language with the target language by using parallel word pairs found in a dictionary. However, simple synonym replacement may not be sufficient as the tasks become complicated. In this paper, we focus on transfer learning via large multilingual models, which allows us to extend models to languages with limited labelled training data.
In techniques that use multilingual models, a task-specific architecture uses the pretrained model as one of its components and is then trained with task data from a high-resource language (see Fig. 1). It is then evaluated directly, or with some labelled examples, in a different language. Intermediate fine-tuning, which fine-tunes a large language model on a different but related dataset and/or task before fine-tuning it for the target task, has shown considerable improvements for both monolingual and cross-lingual natural language understanding tasks (Gururangan et al., 2020). However, it is relatively under-explored for multilingual dialogue systems.
In this work, we demonstrate the effectiveness of cross-lingual intermediate fine-tuning of multilingual pretrained models to facilitate the development of multilingual conversation systems. Specifically, we look at cross-lingual dialogue state tracking tasks, as they are an indispensable part of task-oriented dialogue systems. In this task, a model needs to map the user's goals and intents in a given conversation to a set of slots and values, known as a "dialogue state", based on a pre-defined ontology.

Figure 1: Pipeline of our work. A pretrained language model is fine-tuned with the task of predicting masked words on parallel movie subtitles data. A dialogue state tracker is then trained with this new multilingual model and evaluated for cross-lingual dialogue state tracking.

Our intermediate tasks are based on the interaction between the source and target languages and the interaction between the dialogue history and the response. These tasks involve the prediction of missing words in different conversational settings: monolingual conversations, concatenated parallel bilingual conversations, and cross-lingual conversations. Further, we also introduce a task that acts as a proxy for generating a response in a cross-lingual setup. Our intermediate tasks only use 200K lines of parallel data, which is already available for 1782 language pairs. Using parallel data for intermediate fine-tuning is also an important addition to the intermediate fine-tuning literature, which has largely focused on related monolingual tasks. Our best method leads to impressive performance on the standard benchmark of the Multilingual WoZ 2.0 dataset (Mrkšić et al., 2017b).
Related Work
Intermediate fine-tuning of large language models: Training deep neural networks on large unlabelled text data to learn meaningful representations has shown remarkable success on several downstream tasks. These representations can be monolingual (Qiu et al., 2020) or multilingual (Devlin et al., 2019; Conneau and Lample, 2019; Artetxe and Schwenk, 2019), depending on the underlying training data. These representations are further refined to suit the downstream task by fine-tuning the pretrained model on related data and/or tasks. This "intermediate" fine-tuning is done before fine-tuning the task-specific architecture on the downstream task.
In adaptive intermediate fine-tuning, a pretrained model is fine-tuned with the same objectives used during pretraining on data that is closer to the distribution of the target task. This is referred to as task-adaptive pretraining (TAPT) if the unlabelled text of the task dataset is used (Gururangan et al., 2020; Howard and Ruder, 2018; Mehri et al., 2019) and domain-adaptive pretraining if unlabelled data of the target domain is used (Gururangan et al., 2020; Han and Eisenstein, 2019). Closer to our problem, Lin and Chen (2021) also use TAPT for generative dialogue state tracking. Another popular method is intermediate task training: instead of fine-tuning with the objectives used during pretraining, the pretrained model is fine-tuned with single or multiple related tasks as an intermediate step (Phang et al., 2019; Glavaš and Vulić, 2021). We refer to the umbrella term of intermediate fine-tuning while discussing our methods.
Our work uses OpenSubtitles (Lison and Tiedemann, 2016), a parallel movie subtitle corpus, as the unlabelled target-domain resource. Instead of using the pretraining objectives of the underlying language model directly, we experiment with existing and new objectives to leverage the conversational and cross-lingual nature of the parallel data. As there is a dearth of training data for dialogue tasks across different languages, instead of relying on related task datasets for intermediate fine-tuning, we leverage the dialogue data available through OpenSubtitles (see Table 1).
Cross-lingual dialogue state tracking: Dialogue state tracking (DST) is one of the most studied problems in task-oriented conversational systems (Mrkšić et al., 2017a; Ren et al., 2018). The goal of the dialogue state tracker is to accurately identify the user's goals and requests at each turn of the dialogue. These goals and requests are stored in a dialogue state which is predefined based on the ontology of the given domain. For example, the restaurant reservation domain will consist of slot names like "price-range" and values like "cheap". Dialogue state tracking has been explored extensively in the monolingual setup, but there are limited works for a multilingual setting.
A popular benchmark for cross-lingual dialogue state tracking is the Multilingual WoZ 2.0 dataset (Mrkšić et al., 2017b), where a dialogue state tracker is trained only on English data and evaluated directly for German and Italian dialogue state tracking. XL-NBT (Chen et al., 2018), the first neural cross-lingual dialogue state tracker, uses a teacher-student network where the teacher network has access to task-labelled data in the source language. The teacher also has access to parallel data, which allows it to transfer knowledge to the student network trained in the target language. A couple of recent works resort to code-mixed data augmentation to enhance transfer learning. In Attention-Informed Mixed Language Training (AMLT) (Liu et al., 2020), a dialogue state tracker (Mrkšić et al., 2017a) is first trained with English state tracking data; new code-mixed training data is then obtained by replacing the words which receive the highest attention in a given utterance during training with their respective synonyms in the target language. Another method, dubbed Cross-Lingual Code Switched Augmentation (CLCSA) (Qin et al., 2020), focuses on the dynamic replacement of source-language words with target-language words during training. In this method, sentences within a batch are chosen randomly, and then words within these sentences are chosen randomly and replaced with their synonyms in the target language. This method is state-of-the-art for the Multilingual WoZ dataset.
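The dynamic replacement step in CLCSA is simple to state in code. The following is our own minimal sketch of the idea, assuming a word-level bilingual dictionary; the replacement probabilities are illustrative placeholders, not the settings of Qin et al. (2020).

```python
import random

# Minimal sketch of dynamic code-switched augmentation in the spirit of
# CLCSA. `bilingual_dict` maps source words to lists of target synonyms.
def code_switch(sentence, bilingual_dict, sent_prob=0.5, word_prob=0.3):
    """Randomly replace source-language words with target-language synonyms."""
    if random.random() > sent_prob:          # only some sentences are switched
        return sentence
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        if tok.lower() in bilingual_dict and random.random() < word_prob:
            tokens[i] = random.choice(bilingual_dict[tok.lower()])
    return " ".join(tokens)

# Toy En -> De example with a hypothetical dictionary.
toy_dict = {"cheap": ["billig"], "restaurant": ["Restaurant"], "north": ["Norden"]}
print(code_switch("i want a cheap restaurant in the north", toy_dict))
```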
Another recent benchmark is the parallel MultiWoZ 2.1 dataset released as a part of the Ninth Dialogue Systems and Technologies Challenge (DSTC-9) (Gunasekara et al., 2020). Both the ontology of the dialogue states and the dialogues were translated from English to Chinese using Google Translate and then corrected manually by expert annotators. Similarly, CrossWoZ (Zhu et al., 2020a), a Chinese dialogue state tracking dataset, was translated into English. The challenge was designed to treat the source dataset as a resource-rich dataset and build a cross-lingual dialogue state tracker which would be evaluated on the low-resource target dataset. Instead, all the submissions in the shared task used the translated version of the dataset and treated the problem as a monolingual dialogue state tracking setup.
We use the Multilingual WoZ dataset and the parallel MultiWoZ dataset to demonstrate the effectiveness of our methods. As there are no existing benchmarks for cross-lingual dialogue state tracking on the parallel MultiWoZ dataset, we use the slot-utterance matching belief tracker (SUMBT) (Lee et al., 2019) as our baseline, which was the state-of-the-art for the English MultiWoZ 2.1 dataset (Eric et al., 2020). The SUMBT model uses a BERT encoder to obtain contextual semantic vectors for the utterances, slot names, and slot values. It then uses a multi-head attention network to learn the relationship between slot names and slot values appearing in the text to predict the dialogue states.
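To make the slot-utterance matching idea concrete, the sketch below loosely mirrors the SUMBT scoring step: a slot-name query attends over utterance token encodings, and the attended summary is scored against candidate value encodings by negative distance. This is our own simplification; the dimensions and the random tensors standing in for (m)BERT outputs are illustrative.

```python
import torch
import torch.nn as nn

class SlotUtteranceMatcher(nn.Module):
    def __init__(self, dim=768, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, slot_query, utt_tokens, value_encodings):
        # slot_query: (B, 1, D); utt_tokens: (B, T, D); value_encodings: (V, D)
        summary, _ = self.attn(slot_query, utt_tokens, utt_tokens)  # (B, 1, D)
        # Score each candidate value by negative Euclidean distance.
        dists = torch.cdist(summary.squeeze(1), value_encodings)    # (B, V)
        return -dists  # higher score = better matching value

# Toy usage with random encodings standing in for encoder outputs.
matcher = SlotUtteranceMatcher()
scores = matcher(torch.randn(2, 1, 768), torch.randn(2, 12, 768), torch.randn(5, 768))
print(scores.argmax(dim=-1))  # predicted value index per example
```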
Intermediate fine-tuning for dialogue tasks
In this section, we will provide details about the training data used for different intermediate tasks, explain existing and proposed intermediate tasks, and detail their integration into the end task.
Adaptive data extraction
The pretrained language models are often trained on news text or Wikipedia, which is different from human conversations (Wolf et al., 2019). We choose the OpenSubtitles corpus (Lison and Tiedemann, 2016) as its characteristics are suitable for our end task. The corpus is huge (over 3.2 billion sentences) and contains parallel movie dialogue data across different language pairs, allowing us to design cross-lingual tasks as well. We extract 200K parallel subtitles for every language pair. These are extracted without modifying the sequence of their occurrence in a particular film, as we intend to work on conversations and not sentences in isolation.
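A minimal sketch of this extraction step is shown below. The file name and the tab-separated format are assumptions for illustration; OpenSubtitles alignments are distributed in several formats, and the only property that matters here is that the original subtitle order is preserved.

```python
# Keep the first `limit` aligned subtitle pairs in their original order,
# so that consecutive lines remain consecutive within a film.
def extract_parallel_subtitles(path, limit=200_000):
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            src, tgt = line.rstrip("\n").split("\t")
            pairs.append((src, tgt))
            if len(pairs) >= limit:
                break
    return pairs

# pairs = extract_parallel_subtitles("opensubtitles.en-de.tsv")  # hypothetical file
```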
Tasks for intermediate fine-tuning
After extracting the task-related data, we experiment with existing and new intermediate tasks to continue fine-tuning the underlying multilingual representation for the dialogue tasks. These tasks are variants of the Cloze task (Taylor, 1953), where missing words are predicted for a given sentence/context. This task is also known as Masked Language Modelling (MLM) (Devlin et al., 2019). We introduce extensions to masked language modelling which are more suitable for the dialogue task. Our task designs are based on (i) the interaction between the source and target languages and (ii) the interaction between the dialogue history and the response. In the rest of the work, the use of the word "context" focuses on the role of dialogue history.
Monolingual dialogue modelling (MonoDM):
Dialogue history is an important component of any dialogue task. We select K continuous subtitles from the monolingual subtitles data, where K is chosen randomly between 2 and 15 for every example. By choosing a random K, we ensure that the examples contain dialogues of varied length, as will be the case for any dialogue-related task. These examples are created for both the source and the target language, and 15% of the words in each example are masked.
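The example construction can be sketched as follows; this is our own illustration, with whitespace tokenization standing in for the WordPiece tokenizer used in practice, and it assumes the subtitle list is longer than K.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, ratio=0.15):
    """Replace a random ~15% of tokens with [MASK]; labels keep the originals."""
    out, labels = list(tokens), [None] * len(tokens)
    for i in random.sample(range(len(tokens)), max(1, int(ratio * len(tokens)))):
        labels[i] = out[i]
        out[i] = MASK
    return out, labels

def monodm_example(subtitles, k_min=2, k_max=15):
    """MonoDM: a window of K consecutive monolingual subtitles, 15% masked."""
    k = random.randint(k_min, k_max)
    start = random.randrange(len(subtitles) - k)  # assumes len(subtitles) > k
    tokens = " ".join(subtitles[start:start + k]).split()
    return mask_tokens(tokens)
```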
We now look at cross-lingual intermediate tasks that leverage the parallel data in OpenSubtitles. The following tasks are designed to exploit the contextual information from the dialogue history as well as cross-lingual information through the parallel data. Please see Table 1 for examples.
Translation language modelling (TLM): Translation language modelling (TLM) was introduced while designing the Cross-Lingual Language Model (XLM) (Conneau and Lample, 2019). In TLM, parallel sentences are concatenated and words are masked across them. We further explore the importance of longer context in modelling cross-lingual embedding spaces for the conversational setting by concatenating parallel dialogues with K utterances and then masking words randomly on this concatenated text. The hypothesis is that by predicting masked words in different languages simultaneously, the model improves the alignment in its cross-lingual representation space. For the example in Table 1, the model may learn to align "bat" with "Fledermaus".
Cross-lingual dialogue modelling (XDM): This task focuses on improving the cross-lingual context-response representation space. In TLM, it is difficult to identify whether a predicted word used its monolingual context or the bilingual dialogue history. To encourage a cross-lingual interaction between the dialogue history and the response, we concatenate a conversation context (K utterances) from one language and then append the reply to that conversation in the second language. The words are then randomly masked across this chat.
Response masking (RM): We also experiment with a setup that acts as a proxy for generating a response in a cross-lingual setting. The context of the conversation is provided in one language, and the task is to predict the words in the response independently in another language. This is a harder task than predicting randomly masked words. Both XDM and RM are new designs for intermediate tasks, tailored for cross-lingual dialogue tasks. We also experimented with combining monolingual and cross-lingual objectives, but our pilot experiments did not show any considerable improvement over the individual objectives. For tasks where combining multiple objectives has worked, those tasks required higher reasoning and inference capabilities like coreference resolution or question answering (Aghajanyan et al., 2021). Such highly specific task data is not available for all languages, and is even more limited for conversational tasks. We will explore this direction in the future. Similarly, our initial experiments suggested that simply combining data from multiple languages for a multilingual intermediate task has lower performance than individual cross-lingual intermediate tasks. Thus, designing multilingual intermediate tasks is far from trivial, and we will also explore this in the future.
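Building on `mask_tokens` and `MASK` from the MonoDM sketch above, the three cross-lingual variants differ only in how the text is assembled and which positions are masked; again, whitespace tokenization is a stand-in for WordPiece.

```python
def tlm_example(src_turns, tgt_turns):
    """TLM: concatenate a K-turn dialogue with its translation, mask 15%."""
    tokens = " ".join(src_turns + tgt_turns).split()
    return mask_tokens(tokens)

def xdm_example(src_turns, tgt_response):
    """XDM: source-language context followed by the target-language reply."""
    tokens = " ".join(src_turns + [tgt_response]).split()
    return mask_tokens(tokens)

def rm_example(src_turns, tgt_response):
    """RM: keep the context intact and mask every token of the reply."""
    context = " ".join(src_turns).split()
    reply = tgt_response.split()
    tokens = context + [MASK] * len(reply)
    labels = [None] * len(context) + reply
    return tokens, labels
```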
Using intermediate fine-tuning for dialogue state tracking
We create 100K examples for each of the above intermediate tasks for the respective language pairs. We use the mBERT (Devlin et al., 2019) model as our starting point and continue training it with the above tasks separately. Thus, all of our reported experiments follow a two-step pipeline where (i) mBERT is fine-tuned with one of the tasks listed above, and then (ii) a dialogue state tracking model that uses the new mBERT model is trained with source-language training data, with or without additional training data in the target language. Finally, the trained dialogue state tracking model is evaluated on the target language. Please see Fig. 1 for an illustration.
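Step (i) can be implemented with the HuggingFace transformers library, as in the hedged sketch below. The epoch count and batch size follow the ranges reported in Appendix A, but the rest is our own illustration; the stock random-masking collator approximates MonoDM/TLM, while XDM and RM would need a custom collator that masks the positions chosen above.

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)
from datasets import Dataset

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-uncased")

texts = ["Who is it, Martin? Wer ist denn da, Martin?"]  # 100K such examples
ds = Dataset.from_dict({"text": texts}).map(
    lambda b: tok(b["text"], truncation=True, max_length=512), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="mbert-tlm", num_train_epochs=20,
                           per_device_train_batch_size=8),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tok,
                                                  mlm_probability=0.15),
)
trainer.train()  # the resulting encoder then initializes the DST model
```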
Experiments
We experiment with the recently released parallel MultiWoZ dataset (Gunasekara et al., 2020) and the Multilingual WoZ dataset (Mrkšić et al., 2017b). As the datasets vary in difficulty and languages, we choose a different amount of target training data and dialogue state tracking architectures for both of them. We briefly provide their description and discuss the results obtained with our methods.
Task description
For the Multilingual WoZ dataset, we predict German (and Italian) dialogue states, to compare with other approaches in the literature. We use the state tracker in Qin et al. (2020) that treats the problem as a collection of binary prediction tasks, one task for each slot-value combination. The current utterance and the previous dialogue act are concatenated together and passed through the pretrained multilingual encoder. All the slot-value pairs are passed through the encoder to obtain their representations respectively. These representations are then fed into a classification layer. We do not use SUMBT for this dataset as its cross-lingual state tracking performance was not as competitive as other models in the literature. The training details are listed in Appendix A.
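A minimal sketch of this binary formulation is given below. Here `encoder` stands in for the pretrained multilingual encoder (e.g., an mBERT [CLS] representation), and the dimensions are illustrative rather than the exact configuration of Qin et al. (2020).

```python
import torch
import torch.nn as nn

class BinarySlotValueTracker(nn.Module):
    """One binary decision per (utterance, slot=value) candidate pair."""
    def __init__(self, encoder, dim=768):
        super().__init__()
        self.encoder = encoder                 # maps token ids to (B, D) vectors
        self.clf = nn.Linear(2 * dim, 1)

    def forward(self, utterance_ids, candidate_ids):
        u = self.encoder(utterance_ids)        # utterance + previous dialogue act
        c = self.encoder(candidate_ids)        # encoded "slot = value" candidate
        return torch.sigmoid(self.clf(torch.cat([u, c], dim=-1)))  # (B, 1)
```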
Metrics
The metrics used for dialogue state tracking tasks are turn-level and generally include Slot Accuracy, Slot F1, and Joint Goal Accuracy (JGA). Their descriptions are as follows:
Slot Accuracy: Proportion of the correct slots predicted across all utterances.
Slot F1: Macro-average of the F1 score computed over the individual slot types and slot values for every turn.
Joint Goal Accuracy: Proportion of examples (dialogue turns) where the predicted dialogue state matches the ground-truth dialogue state exactly.
We report Slot F1 and Joint Goal Accuracy for the parallel MultiWoZ dataset. The En state has 135 slot types, while the average number of slot types per utterance is 5. When slot accuracy is computed, it also counts all those slots which were correctly left unfilled. Consider 130 unfilled slots, 3 correct slots, and 2 incorrect slots: by the definition of accuracy, this would be computed as 133/135 = 0.98, which overlooks the two incorrect slots. Thus, we do not report slot accuracy, as it is the weakest indicator of improvement.
We report Joint Goal Accuracy for the Multilingual WoZ dataset, where the state only consists of informable slots. Slot Accuracy for informable slots and Request Accuracy for requestable slots are also reported, in line with the literature for this task.
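As a concrete reference, the two turn-level metrics discussed above can be computed as in the sketch below (our own illustration, not the paper's evaluation scripts); states are dictionaries mapping slot names to values, with absent keys meaning the slot is unfilled.

```python
def joint_goal_accuracy(pred_states, gold_states):
    """Fraction of turns whose predicted state matches the gold state exactly."""
    exact = sum(p == g for p, g in zip(pred_states, gold_states))
    return exact / len(gold_states)

def slot_accuracy(pred, gold, all_slots):
    """Per-slot accuracy over a fixed slot inventory (missing = unfilled)."""
    return sum(pred.get(s) == gold.get(s) for s in all_slots) / len(all_slots)

# Worked example from the text: with 135 slot types, 130 correctly left
# unfilled, 3 correct, and 2 wrong, slot accuracy is 133/135 ~ 0.98 despite
# the errors, whereas joint goal accuracy scores the whole turn as 0.
```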
Results
We report the results of models with and without intermediate task learning for the parallel MultiWoZ dataset in Table 2 and for the Multilingual WoZ dataset in Table 3. We compare the performance of our intermediate fine-tuning methods with task-adaptive pretraining (TAPT) to distinguish the design of our intermediate tasks against simply using the task training data. We also compare our methods on Multilingual WoZ with XL-NBT (Chen et al., 2018), Attention-Informed Mixed Language Training (AMLT) (Liu et al., 2020), and CLCSA (Qin et al., 2020).
Our results show that the use of intermediate fine-tuning of a language model is indeed helpful for dialogue state tracking. Further, the use of cross-lingual objectives (XDM, RM, TLM) is superior to task-adaptive pretraining (TAPT) and competitive with the monolingual objective (MonoDM), with TLM consistently performing better than all other cross-lingual objective functions on target-language state tracking. This also suggests that the use of bilingual dialogue history (TLM) is superior to the use of cross-lingual context (XDM) or a harder response generation task (RM) for these datasets.
In Table 2, we find that even the weakest intermediate fine-tuning setup has a 15.3% and 16.2% improvement over the vanilla baseline on joint goal accuracy for target languages Zh and En respectively. The best intermediate task (TLM) has an improvement of 20.4% and 24.3% on joint goal accuracy for En → Zh and Zh → En respectively. The Slot F1 score has similar trends to the joint goal accuracy. Intermediate fine-tuning helps to improve the performance for source-language state tracking as well, with monolingual objectives (TAPT, MonoDM) exhibiting superior performance as they are trained with monolingual task data.
Comparison with machine translation: As there are no other baselines available for MultiWoZ, we also compare our approach to translation-based methods in Table 2. We follow the setup for In-language training, Translate-train, and Translate-test as described in Hu et al. (2020). In In-language training, we fine-tune the mBERT model directly with target-language training data. For the Translate-train models, we first translate the source-language training data of the dialogue task into the target language and then train a dialogue state tracking model with mBERT on the translated target-language data. In Translate-test, the dialogue state tracking model is trained with the source-language data on a source-language BERT; at test time, the target-language instances are translated into the source language to predict the dialogue states. Our machine translation models are large transformer models (Vaswani et al., 2017) trained on ParaCrawl data (Bañón et al., 2020) for En → Zh and Zh → En respectively. Our setup improves over the Translate-test approach, which uses these additional translation models and monolingual BERT models. We also find that Translate-train and In-language training find this setup difficult, as the model would map a target-language utterance to a source-language state instead of a target-language state. Further, following guidelines from Hu et al. (2020), these models are trained with multilingual BERT, which is trained on 108 languages, leading to a noisier representation space than a monolingual BERT. Overall, we find that the scores are higher for Zh → En than En → Zh. We speculate this trend is due to the presence of translationese when using Zh as the source language, as the dataset was originally in English and then translated to Chinese, in line with observations from the neural machine translation literature (Edunov et al., 2020).
Additive effect of TLM with CLCSA: In Table 3, we find that TLM has a 27.5% and 24.3% improvement over the vanilla baseline on joint goal accuracy for De and It respectively. It also has superior performance over baselines from the literature, except for the CLCSA method. The CLCSA method uses dynamic code-mixed data for training the state tracker. We observe that using TLM with the CLCSA model has an additive effect, providing an improvement over a model which does not use TLM as an intermediate fine-tuning task. Please note that our experiments for both CLCSA and CLCSA + TLM used an uncased version of multilingual BERT, as opposed to the cased version used in the original CLCSA results, as it has better performance. We also find that RM is not best suited for this task, suggesting that response prediction is not a suitable intermediate task for the simple scenarios of the WoZ dataset.
Analysis
We analyse the outputs from the state tracker and design choices for the intermediate tasks. We also provide insights into the difficulty of conducting zero-shot transfer learning using the SUMBT architecture for the MultiWoZ dataset.
Qualitative analysis
We manually analyzed the predicted dialogue states for 200 chats from these models for the MultiWoZ dataset. Overall, we found that models trained with intermediate tasks improve over the vanilla baselines in detecting cuisine names, names of restaurants, and time periods for booking (taxi/restaurant). All models show some confusion in detecting whether a location corresponds to arrival or departure. We observe that predicting a dialogue state wrong at an earlier stage has a cascading effect of errors on the later dialogue states. For the Multilingual WoZ dataset, the baseline models struggled to identify less frequent cuisines. There was confusion between predicting "cheap" and "moderate" in the target languages. These errors were reduced with intermediate fine-tuning.
Please see examples in Appendix C.
Investigating zero-shot transfer for MultiWoZ dataset
We make a case for using 10% of training data in the target language and retaining the language of the source state for the MultiWoZ dataset. We illustrate different training data choices in Table 4. We currently look at the En → Zh setup. The zero-shot setup is difficult for the models: with the vanilla baseline model, it seems nearly impossible to learn a dialogue state tracker for Chinese. Even with TLM, while there is an improvement in the multilingual representation space, it is not adequate for generalized transfer across languages. However, when a pretrained model which is fine-tuned with a cross-lingual objective is trained with as little as 1% labelled target-language training data (84 chats), we observe a 19.3% improvement in joint goal accuracy for the target language over the zero-shot vanilla baseline. This also indicates the data efficiency of cross-lingual intermediate fine-tuning. With an increase in target training data, the performance for the target language also improves while degrading the source-language performance.
We also found that using the target-language states during evaluation has lower performance than using source-language dialogue states for this dataset while using the SUMBT model; a dialogue state tracker trained with TLM in the zero-shot setup had a joint goal accuracy of 1%. We recommend mapping the dialogue states from the source language to the target language directly for use cases that require the dialogue state to be predicted in the target language.
Analysis of intermediate tasks
We analyse the design choices for the intermediate tasks: the domain and amount of intermediate training data, and the use of dialogue history.
Domain of adaptive task data
We considered the parallel document-level data released for the WMT'19 challenge (Bojar et al., 2019). We look at the En-Zh parallel data consisting of news articles that are aligned by paragraphs. We fine-tune the mBERT model with the TLM task on parallel paragraphs and report our results for the MultiWoZ dataset in Table 5. We find that using dialogue data has a slight advantage over using parallel news text. This suggests that cross-lingual alignment itself is largely responsible for the increase in joint goal accuracy over the baseline, more so than the domain of the intermediate task data. Nevertheless, we recommend the use of OpenSubtitles for intermediate task data as it not only performs better but is also available for 1782 language pairs.
Amount of intermediate task data
We used a fixed number of examples for the intermediate fine-tuning. We now vary the amount of intermediate task data and study its effect on the downstream task.
As OpenSubtitles is available for 1782 language pairs, we speculate that using these cross-lingual intermediate tasks will be effective for languages where collecting large training datasets for dialogue tasks is not feasible. We speculate that this setup can be useful for cross-lingual domain transfer too, when such a benchmark becomes available for dialogue tasks. We hope that our method can serve as a strong baseline for future work in multilingual dialogue.
A Reproducibility Details
Hyperparameters: All the intermediate fine-tuning models were trained with HuggingFace's transformers library (Wolf et al., 2020). We followed existing guidelines to select the hyperparameters. The fine-tuning was carried out for 20 epochs. The batch size was chosen from {4, 8}. The rest of the configuration was kept at the library defaults. For the SUMBT model, the LSTM size was varied between {100, 300}, the learning rate between {1e-4, 1e-5, 5e-5}, and the batch size between {3, 4, 12}. The remaining hyperparameters were kept at the defaults of the original work. The final configurations were chosen based on the joint goal accuracy on the development set. Training was carried out for 100 epochs by default with a patience of 10 epochs. For the Multilingual WoZ experiments, we followed the hyperparameters listed in Qin et al. (2020). All of our hyperparameters for all the experiments will be made available as config files. We use code from Zhu et al. (2020b) for the SUMBT model and Qin et al. (2020) for the CLCSA model.
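For reference, the search space described above can be summarized as a configuration dictionary; this is our own summary of the reported values, not a released config file.

```python
# Our own summary of the reported search space, not the released configs.
SEARCH_SPACE = {
    "intermediate_finetuning": {"epochs": 20, "batch_size": [4, 8]},
    "sumbt": {
        "lstm_hidden_size": [100, 300],
        "learning_rate": [1e-4, 1e-5, 5e-5],
        "batch_size": [3, 4, 12],
        "epochs": 100,
        "early_stopping_patience": 10,
    },
}
```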
Training details: Intermediate fine-tuning takes approximately 14 hours on an RTX 2080 Ti, training a SUMBT model takes approximately six hours, and the base architecture for Multilingual WoZ takes around three hours. The training hours on a different GPU may vary. The inference time for the SUMBT model on the MultiWoZ dataset is 4 minutes, while that for Multilingual WoZ is a minute per language. Intermediate fine-tuning and SUMBT take up the entire RAM of an RTX 2080 Ti (approximately 11 GB), and the Multilingual WoZ experiments occupy 7 GB of GPU memory. All the experiments require a single GPU. The mBERT model has approximately 178M parameters. The dialogue state trackers, excluding the mBERT model, have approximately 5.2M and 0.1M parameters for the MultiWoZ dataset and the Multilingual WoZ dataset respectively.
Dataset details: The dialogue state tracking datasets are available at the code repositories of Zhu et al. (2020b) and Qin et al. (2020) respectively. The OpenSubtitles corpus can be obtained from the corpus website, which is based on the subtitles website. We will release the extracted examples and their variants as well. Please see Table 8 for statistics. While creating the 10% split of labelled target-language data, all the domains in the MultiWoZ data were included according to their proportion in the original training data.
B Utterance v/s Dialogue history for Multilingual WoZ
We report the importance of using dialogue history in Table 9.
C Qualitative Examples
In Table 10, the first example demonstrates how TLM can identify named entities, such as names of restaurants, that the baseline could not predict. Similarly, the baseline has a higher error rate in detecting dialogue states with numbers, as seen in examples one and two. The third example is a continuation of the conversation in the second example. Note that the baseline model is now capable of predicting all the new dialogue states in this example, but it is penalized as it could not predict the train-arriveby state at the start of the conversation, leading to a cascade of errors.
We analyse the design choices for the intermediate tasks: the domain and amount of intermediate training data, and the use of dialogue history.
Table 1: Examples for different cross-lingual intermediate tasks. The top row contains the parallel text converted into examples. The intermediate task is to predict the [MASK] words. TLM - Translation Language Modelling, XDM - Cross-lingual Dialogue Modelling, RM - Response Masking. Italics is the response in the given chat.

Subtitle (En): Who is it, Martin? A bat, Professor. Very big and black. Don't waste your pellets. It's no use. You'll never harm that bat.
Subtitle (De): Wer ist denn da, Martin? Eine Fledermaus, Herr Professor. Sehr groß und pechschwarz. Verschwenden Sie kein Schrot darauf. Es ist zwecklos. Dieser Fledermaus können Sie nichts anhaben.
TLM: Who is it, Martin? A [MASK] . . . [MASK] that bat. [MASK] ist denn da, Martin? . . . können Sie nichts [MASK].
XDM: Who is it, Martin? A [MASK] . . . of no use. Dieser Fledermaus können Sie nichts [MASK].
RM: Who is it, Martin? A bat, Professor . . . It's no use. [MASK] [MASK] [MASK] [MASK] [MASK] [MASK]
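As a companion to the TLM sketch earlier, here is a rough sketch of the RM variant from Table 1, in which the dialogue context stays visible and every token of the response is masked. The tokenizer object and the -100 label convention mirror the earlier sketch and are assumptions.

```python
# Companion sketch for the RM row of Table 1: keep the chat context
# intact and mask the full response; -100 marks positions ignored by
# the loss, so only the response tokens are predicted.
def make_rm_example(context: str, response: str, tok):
    ctx = tok(context, add_special_tokens=False)["input_ids"]
    rsp = tok(response, add_special_tokens=False)["input_ids"]
    input_ids = ([tok.cls_token_id] + ctx + [tok.sep_token_id]
                 + [tok.mask_token_id] * len(rsp) + [tok.sep_token_id])
    labels = [-100] * (len(ctx) + 2) + rsp + [-100]
    return {"input_ids": input_ids, "labels": labels}
```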
Table 2: Performance on the parallel MultiWoZ dataset using encoders with various intermediate fine-tuning strategies, trained with 100% source and 10% target language dialogue state tracking data. Bold marks the best within each column. JGA - Joint goal accuracy. The last two columns indicate the average gain over mBERT-none for the target languages.
We find that even the weakest intermediate fine-tuning setup has 15.3% and 16.2% improvements.

Table 3: Zero-shot results on the target languages of the Multilingual WoZ 2.0 dataset, with and without various intermediate fine-tuning strategies, when trained with English task data. Acc - Accuracy. The last column is the average gain in joint accuracy over the mBERT-none model across both languages. Please see the text for details of the methods. Bold indicates the best score in that column. Intermediate fine-tuning is also useful for zero-shot transfer, and cross-lingual intermediate fine-tuning (TLM) has the best performance.

Model / Method             | Interm. Task | De Slot Acc | De Joint Acc | De Request Acc | It Slot Acc | It Joint Acc | It Request Acc | Avg Gain (Joint)
XL-NBT (Chen et al., 2018) | N/A          | 55.0        | 30.8         | 68.4           | 72.0        | 41.2         | 81.2           | 22.2
AMLT (Liu et al., 2020)    | N/A          | 70.7        | 34.3         | 87.0           | 71.4        | 33.3         | 84.9           | 20.0
mBERT                      | none         | 57.6        | 15.0         | 75.3           | 54.6        | 12.6         | 77.3           | 0.0
mBERT                      | TAPT         | 68.4        | 24.8         | 89.0           | 67.5        | 22.6         | 83.8           | 9.9
mBERT                      | MonoDM       | 83.4        | 14.4         | 90.3           | 63.6        | 14.1         | 90.2           | 0.4
mBERT                      | XDM          | 69.7        | 27.5         | 90.0           | 68.0        | 21.5         | 89.1           | 10.7
mBERT                      | RM           | 58.0        | 8.6          | 81.6           | 61.6        | 11.3         | 76.4           | -3.8
mBERT                      | TLM          | 75.6        | 42.5         | 90.2           | 72.3        | 36.9         | 90.0           | 25.9
CLCSA (Qin et al., 2020)   | none         | 83.2        | 62.6         | 96.1           | 84.0        | 67.6         | 95.5           | 51.3
CLCSA (Qin et al., 2020)   | TLM          | 85.2        | 65.8         | 94.4           | 84.3        | 66.9         | 95.5           | 52.5
Table 4: Comparing different proportions of target state tracking data along with En training data for the En → Zh MultiWoZ dataset. The zero-shot setup is difficult for this task, but it can be improved with limited Zh data and intermediate fine-tuning.
Table 5: Investigating the domain of intermediate task data, evaluated on the target languages of the parallel MultiWoZ data. Intermediate fine-tuning on movie subtitles is slightly advantageous over news texts.
Table 6: Comparison of the amount of intermediate task data when used with TLM on MultiWoZ. x: examples created with 200K data. Using 200K data is indeed optimal.
As seen from Table 6, our setup that uses examples created with 200K data has the best or second-best performance across the target languages. There is indeed an increase in performance for the target language En with 800K sentences, but fine-tuning a model with 800K sentences also requires 4x additional GPU training time. We find that the performance drop from adding or removing intermediate examples is not extreme. This prompts us to design better cross-lingual objectives that can reduce the intermediate data requirement.

5.3.3 Utterance-level v/s dialogue history

We emphasized using dialogue history information while designing intermediate tasks. For ablation studies, we fine-tune mBERT with utterance-level intermediate tasks. To replicate the utterance-level version of MonoDM (referred to as MonoDM-sent here), the training data consists of 100K utterances chosen randomly from the OpenSubtitles data, with equal English and Chinese examples. Similarly, TLM-sent also uses 100K examples with parallel utterances chosen randomly. The results in Table 7 show that the use of dialogue history is important, as both MonoDM-sent and TLM-sent have lower performance than MonoDM-chat and TLM-chat respectively. We observe a similar trend for the Multilingual WoZ dataset (reported in Appendix B).

Table 7: Comparison of the amount of dialogue history used in intermediate tasks, evaluated for target languages in MultiWoZ. Sent - sentences. Use of chats in intermediate fine-tuning tasks is beneficial.

Intermediate Fine-tuning | Zh JGA | Zh Slot F1 | En JGA | En Slot F1
MonoDM-sent              | 24.2   | 75.4       | 30.9   | 81.6
MonoDM-chat              | 28.2   | 78.8       | 41.7   | 87.3
TLM-sent                 | 31.2   | 81.3       | 34.7   | 83.2
TLM-chat                 | 32.7   | 82.4       | 41.1   | 87.7

6 Conclusion

We demonstrated the effectiveness of cross-lingual intermediate fine-tuning of pretrained multilingual language models for the task of cross-lingual dialogue state tracking. We experimented with existing intermediate tasks and introduced two new cross-lingual intermediate tasks based on the parallel and dialogue-level nature of the movie subtitles corpus. Our best method achieved significant improvements in performance on the parallel MultiWoZ dataset and the Multilingual WoZ dataset. We also demonstrated the data efficiency of our methods. Our intermediate tasks were trained on a generic dataset, unlike the related high-resource tasks used in prior work. As OpenSubtitles is available for 1782 language pairs, we speculate that using these cross-lingual intermediate tasks will be effective for languages where collecting large training datasets for dialogue tasks is not feasible. We speculate that this setup can be useful for cross-lingual domain transfer too, when such benchmarks become available for dialogue tasks. We hope that our method can serve as a strong baseline for future work in multilingual dialogue.
Table 9: Comparison of the amount of dialogue history used in intermediate tasks, evaluated for target languages in Multilingual WoZ. Sent - sentences. Use of chats in intermediate fine-tuning tasks is beneficial.
Our code is available at https://github.com/nikitacs16/xlift_dst
https://opus.nlpl.eu/
Acknowledgements

We would like to thank Liane Guillou for feedback on the experiment setup for this work. We thank Barry Haddow for providing us with the machine translation models. We also thank Laurie Burchell, Agostina Calabrese, Tom Hosking, and the anonymous reviewers for their insightful comments and suggestions. This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh (Moghe). The authors gratefully acknowledge Huawei for their support (Moghe).

Table 10: Example outputs from the En-Zh systems. We demonstrate how TLM improves in detecting named entities and numbers, and prevents the cascading effect of predicting an example wrong at the start of the dialogue.
References

Armen Aghajanyan, Anchit Gupta, Akshat Shrivastava, Xilun Chen, Luke Zettlemoyer, and Sonal Gupta. 2021. Muppet: Massive multi-task representations with pre-finetuning.

Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.

Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4555-4567, Online. Association for Computational Linguistics.

Ondřej Bojar, Rajen Chatterjee, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, André Martins, Christof Monz, Matteo Negri, Aurélie Névéol, Mariana Neves, Matt Post, Marco Turchi, and Karin Verspoor, editors. 2019. Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers). Association for Computational Linguistics, Florence, Italy.

Wenhu Chen, Jianshu Chen, Yu Su, Xin Wang, Dong Yu, Xifeng Yan, and William Yang Wang. 2018. XL-NBT: A cross-lingual neural belief tracking framework. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 414-424, Stroudsburg, PA, USA. Association for Computational Linguistics.

Alexis Conneau and Guillaume Lample. 2019. Cross-lingual language model pretraining. In Advances in Neural Information Processing Systems 32 (NeurIPS 2019), pages 7057-7067, Vancouver, BC, Canada.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Sergey Edunov, Myle Ott, Marc'Aurelio Ranzato, and Michael Auli. 2020. On the evaluation of machine translation systems trained with back-translation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2836-2846, Online. Association for Computational Linguistics.

Mihail Eric, Rahul Goel, Shachi Paul, Abhishek Sethi, Sanchit Agarwal, Shuyang Gao, Adarsh Kumar, Anuj Goyal, Peter Ku, and Dilek Hakkani-Tur. 2020. MultiWOZ 2.1: A consolidated multi-domain dialogue dataset with state corrections and state tracking baselines. In Proceedings of the 12th Language Resources and Evaluation Conference, pages 422-428, Marseille, France. European Language Resources Association.

Goran Glavaš and Ivan Vulić. 2021. Is supervised syntactic parsing beneficial for language understanding tasks? An empirical investigation. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 3090-3104, Online. Association for Computational Linguistics.

R. Chulaka Gunasekara, Seokhwan Kim, Luis Fernando D'Haro, Abhinav Rastogi, Yun-Nung Chen, Mihail Eric, Behnam Hedayatnia, Karthik Gopalakrishnan, Yang Liu, Chao-Wei Huang, Dilek Hakkani-Tür, Jinchao Li, Qi Zhu, Lingxiao Luo, Lars Liden, Kaili Huang, Shahin Shayandeh, Runze Liang, Baolin Peng, Zheng Zhang, Swadheen Shukla, Minlie Huang, Jianfeng Gao, Shikib Mehri, Yulan Feng, Carla Gordon, Seyed Hossein Alavi, David R. Traum, Maxine Eskénazi, Ahmad Beirami, Eunjoon Cho, Paul A. Crook, Ankita De, Alborz Geramifard, Satwik Kottur, Seungwhan Moon, Shivani Poddar, and Rajen Subba. 2020. Overview of the ninth dialog system technology challenge: DSTC9. CoRR, abs/2011.06486.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8342-8360, Online. Association for Computational Linguistics.

Xiaochuang Han and Jacob Eisenstein. 2019. Unsupervised domain adaptation of contextualized embeddings for sequence labeling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 4238-4248, Hong Kong, China. Association for Computational Linguistics.

Jeremy Howard and Sebastian Ruder. 2018. Universal language model fine-tuning for text classification. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 328-339, Melbourne, Australia. Association for Computational Linguistics.

Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi-task benchmark for evaluating cross-lingual generalisation. In International Conference on Machine Learning (ICML).

Hwaran Lee, Jinsik Lee, and Tae-Yoon Kim. 2019. SUMBT: Slot-utterance matching for universal and scalable belief tracking. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5478-5483, Florence, Italy. Association for Computational Linguistics.

Yen-Ting Lin and Yun-Nung Chen. 2021. An empirical study of cross-lingual transferability in generative dialogue state tracker.

P. Lison and J. Tiedemann. 2016. OpenSubtitles2016: Extracting large parallel corpora from movie and TV subtitles. In LREC.

Zihan Liu, Genta Indra Winata, Zhaojiang Lin, Peng Xu, and Pascale Fung. 2020. Attention-informed mixed-language training for zero-shot cross-lingual task-oriented dialogue systems. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8433-8440. AAAI Press.

Shikib Mehri, Evgeniia Razumovskaia, Tiancheng Zhao, and Maxine Eskenazi. 2019. Pretraining methods for dialog context representation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3836-3845, Florence, Italy. Association for Computational Linguistics.

Nikola Mrkšić, Diarmuid Ó Séaghdha, Tsung-Hsien Wen, Blaise Thomson, and Steve Young. 2017a. Neural belief tracker: Data-driven dialogue state tracking. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1777-1788, Vancouver, Canada. Association for Computational Linguistics.

Nikola Mrkšić, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korhonen, and Steve Young. 2017b. Semantic specialization of distributional word vector spaces using monolingual and cross-lingual constraints. Transactions of the Association for Computational Linguistics, 5:309-324.

Jason Phang, Iacer Calixto, Phu Mon Htut, Yada Pruksachatkun, Haokun Liu, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. English intermediate-task training improves zero-shot cross-lingual transfer too. In Proceedings of the 1st Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 10th International Joint Conference on Natural Language Processing, pages 557-575, Suzhou, China. Association for Computational Linguistics.

Jason Phang, Thibault Févry, and Samuel R. Bowman. 2019. Sentence encoders on STILTs: Supplementary training on intermediate labeled-data tasks.

Yada Pruksachatkun, Jason Phang, Haokun Liu, Phu Mon Htut, Xiaoyi Zhang, Richard Yuanzhe Pang, Clara Vania, Katharina Kann, and Samuel R. Bowman. 2020. Intermediate-task transfer learning with pretrained language models: When and why does it work? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5231-5247, Online. Association for Computational Linguistics.

Libo Qin, Minheng Ni, Yue Zhang, and Wanxiang Che. 2020. CoSDA-ML: Multi-lingual code-switching data augmentation for zero-shot cross-lingual NLP. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3853-3860. ijcai.org.

Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. 2020. Pre-trained models for natural language processing: A survey.

Liliang Ren, Kaige Xie, Lu Chen, and Kai Yu. 2018. Towards universal dialogue state tracking. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2780-2786, Brussels, Belgium. Association for Computational Linguistics.

Sebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3795-3805, Minneapolis, Minnesota. Association for Computational Linguistics.

Wilson L. Taylor. 1953. "Cloze procedure": A new tool for measuring readability. Journalism Quarterly, 30(4):415-433.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30, pages 5998-6008. Curran Associates, Inc.

Tsung-Hsien Wen, David Vandyke, Nikola Mrkšić, Milica Gašić, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. 2017. A network-based end-to-end trainable task-oriented dialogue system. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 438-449, Valencia, Spain. Association for Computational Linguistics.

Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.

Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents.

Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang. 2020a. CrossWOZ: A large-scale Chinese cross-domain task-oriented dialogue dataset. Transactions of the Association for Computational Linguistics, 8:281-295.

Qi Zhu, Zheng Zhang, Yan Fang, Xiang Li, Ryuichi Takanobu, Jinchao Li, Baolin Peng, Jianfeng Gao, Xiaoyan Zhu, and Minlie Huang. 2020b. ConvLab-2: An open-source toolkit for building, evaluating, and diagnosing dialogue systems. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 142-149, Online. Association for Computational Linguistics.
The SIGMORPHON 2019 Shared Task: Morphological Analysis in Context and Cross-Lingual Transfer for Inflection

Arya D. McCarthy, Ekaterina Vylomova, Shijie Wu, Chaitanya Malaviya, Lawrence Wolf-Sonkin, Garrett Nicolai, Christo Kirov, Miikka Silfverberg, Sebastian Mielke, Jeffrey Heinz, Ryan Cotterell, and Mans Hulden

Johns Hopkins University, University of Melbourne, Allen Institute for AI, Google, University of Helsinki, Stony Brook University, University of Colorado

(* Now at Google)

Abstract
The SIGMORPHON 2019 shared task on cross-lingual transfer and contextual analysis in morphology examined transfer learning of inflection between 100 language pairs, as well as contextual lemmatization and morphosyntactic description in 66 languages. The first task evolves past years' inflection tasks by examining transfer of morphological inflection knowledge from a high-resource language to a low-resource language. This year also presents a new second challenge on lemmatization and morphological feature analysis in context. All submissions featured a neural component and built on either this year's strong baselines or highly ranked systems from previous years' shared tasks. Every participating team improved in accuracy over the baselines for the inflection task (though not Levenshtein distance), and every team in the contextual analysis task improved on both state-of-the-art neural and non-neural baselines.
Introduction
While producing a sentence, humans combine various types of knowledge to produce fluent output: various shades of meaning are expressed through word selection and tone, while the language is made to conform to underlying structural rules via syntax and morphology. Native speakers are often quick to identify disfluency, even if the meaning of a sentence is mostly clear.
Automatic systems must also consider these constraints when constructing or processing language. Strong enough language models can often reconstruct common syntactic structures, but are insufficient to properly model morphology. Many languages implement large inflectional paradigms that mark both function and content words with varying levels of morphosyntactic information. For instance, Romanian verb forms inflect for person, number, tense, mood, and voice; meanwhile, Archi verbs can take on thousands of forms (Kibrik, 1998). Such complex paradigms produce large inventories of words, all of which must be producible by a realistic system, even though a large percentage of them will never be observed over billions of lines of linguistic input. Compounding the issue, good inflectional systems often require large amounts of supervised training data, which is infeasible in many of the world's languages.
This year's shared task is concentrated on encouraging the construction of strong morphological systems that perform two related but different inflectional tasks. The first task asks participants to create morphological inflectors for a large number of under-resourced languages, encouraging systems that use highly-resourced, related languages as a cross-lingual training signal. The second task welcomes submissions that invert this operation in light of contextual information: Given an unannotated sentence, lemmatize each word, and tag them with a morphosyntactic description. Both of these tasks extend upon previous morphological competitions, and the best submitted systems now represent the state of the art in their respective tasks.
Tasks and Evaluation
Task 1: Cross-lingual transfer for morphological inflection
Annotated resources for the world's languages are not distributed equally-some languages simply have more as they have more native speakers willing and able to annotate more data. We explore how to transfer knowledge from high-resource languages that are genetically related to low-resource languages.
The first task iterates on last year's main task: morphological inflection.
Instead of giving some number of training examples in the language of interest, we provided only a limited number in that language. To accompany it, we provided a larger number of examples in either a related or unrelated language. Each test example asked participants to produce some other inflected form when given a lemma and a bundle of morphosyntactic features as input. The goal, thus, is to perform morphological inflection in the low-resource language, having hopefully exploited some similarity to the high-resource language. Models which perform well here can aid downstream tasks like machine translation in lowresource settings. All datasets were resampled from UniMorph, which makes them distinct from past years.
The mode of the task is inspired by Zoph et al. (2016), who fine-tune a model pre-trained on a high-resource language to perform well on a low-resource language. We do not, though, require that models be trained by fine-tuning. Joint modeling or any number of methods may be explored instead.
Example The model will have access to type-level data in a low-resource target language, plus a high-resource source language. We give an example here of Asturian as the target language with Spanish as the source language.
Test input (Asturian): baxar V;V.PTCP;PRS
Test output (Asturian): baxando

Evaluation We score the output of each system in terms of its predictions' exact-match accuracy and the average Levenshtein distance between the predictions and their corresponding true forms.
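A minimal sketch of these two metrics might look as follows; the function names are illustrative, not the official scoring script.

```python
# Minimal sketch of the Task 1 metrics: exact-match accuracy and the
# average Levenshtein distance between predictions and gold forms.
def levenshtein(a: str, b: str) -> int:
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def evaluate(predictions, gold_forms):
    pairs = list(zip(predictions, gold_forms))
    accuracy = sum(p == g for p, g in pairs) / len(pairs)
    avg_distance = sum(levenshtein(p, g) for p, g in pairs) / len(pairs)
    return accuracy, avg_distance
```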
Task 2: Morphological analysis in context
Although inflection of words in a context-agnostic manner is a useful evaluation of the morphological quality of a system, people do not learn morphology in isolation.
In 2018, the second task of the CoNLL-SIGMORPHON Shared Task required submitting systems to complete an inflectional cloze task (Taylor, 1953) given only the sentential context and the desired lemma. An example of the problem is given in the following line, where the lemma "dog" must be inflected to fill the gap:

The ___ are barking.

A successful system would predict the plural form "dogs". Likewise, a Spanish word form "ayuda" may be a feminine noun or a third-person verb form, which must be disambiguated by context.
This year's task extends the second task from last year. Rather than inflect a single word in context, the task is to provide a complete morphological tagging of a sentence: for each word, a successful system will need to lemmatize it and tag it with a morphosyntactic description (MSD). Context is critical: depending on the sentence, identical word forms realize a large number of potential inflectional categories, which will in turn influence lemmatization decisions. If the sentence were instead "The barking dogs kept us up all night", "barking" is now an adjective, and its lemma is also "barking".
Data
Data for Task 1
Language pairs We presented data in 100 language pairs spanning 79 unique languages. Data for all but four languages (Basque, Kurmanji, Murrinhpatha, and Sorani) are extracted from English Wiktionary, a large multi-lingual crowd-sourced dictionary with morphological paradigms for many lemmata.[1] Twenty of the 100 language pairs are either distantly related or unrelated; this allows speculation into the relative importance of data quantity and linguistic relatedness.
Data format For each language, the basic data consists of triples of the form (lemma, feature bundle, inflected form), as in Table 1. The first feature in the bundle always specifies the core part of speech (e.g., verb). For each language pair, separate files contain the high-and low-resource training examples.
All features in the bundle are coded according to the UniMorph Schema, a cross-linguistically consistent universal morphological feature set (Sylak-Glassman et al., 2015a,b).
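As an illustration, such triples can be read with a few lines of Python; the tab-separated (lemma, inflected form, feature bundle) column order shown here is an assumption to be checked against the actual data release.

```python
# Illustrative reader for shared-task triple files: one tab-separated
# triple per line, with features joined by semicolons (e.g. V;V.PTCP;PRS).
def read_triples(path):
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            lemma, form, features = line.split("\t")
            yield lemma, form, features.split(";")
```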
Extraction from Wiktionary For each of the Wiktionary languages, Wiktionary provides a number of tables, each of which specifies the full inflectional paradigm for a particular lemma. As in the previous iteration, tables were extracted using the template annotation procedure described in Kirov et al. (2018).
Sampling data splits From each language's collection of paradigms, we sampled the training, development, and test sets as in 2018.[2] Crucially, while the data were sampled in the same fashion, the datasets are distinct from those used for the 2018 shared task.

Our first step was to construct probability distributions over the (lemma, feature bundle, inflected form) triples in our full dataset. For each triple, we counted how many tokens the inflected form has in the February 2017 dump of Wikipedia for that language. To distribute the counts of an observed form over all the triples that have this token as its form, we follow the method used in the previous shared task, training a neural network on unambiguous forms to estimate the distribution over all, even ambiguous, forms. We then sampled 12,000 triples without replacement from this distribution. The first 100 were taken as training data for low-resource settings. The first 10,000 were used as high-resource training sets.[3] As these sets are nested, the highest-count triples tend to appear in the smaller training sets. The final 2000 triples were randomly shuffled and then split in half to obtain development and test sets of 1000 forms each.[4] The final shuffling was performed to ensure that the development set is similar to the test set. By contrast, the development and test sets tend to contain lower-count triples than the training set.[5]

[1] Murrinhpatha is discussed in Mansfield (2019). Data for Kurmanji Kurdish and Sorani Kurdish were created as part of the Alexina project.
[2] These datasets can be obtained from https://sigmorphon.github.io/sharedtasks/2019/
[3] Several high-resource languages had necessarily fewer, but on a similar order of magnitude: Bengali, Uzbek, Kannada, Swahili. Likewise, the low-resource language Telugu had fewer than 100 forms.
[4] When sufficient data are unavailable, we instead use 50 or 100 examples.
[5] This mimics a realistic setting, as supervised training is usually employed to generalize from frequent words that appear in annotated resources to less frequent words that do not. Unsupervised learning methods also tend to generalize from more frequent words (which can be analyzed more easily by combining information from many contexts) to less frequent ones.
Other modifications We further adopted some changes to increase compatibility. Namely, we corrected some annotation errors created while scraping Wiktionary for the 2018 task, and we standardized Romanian t-cedilla and t-comma to t-comma. (The same was done with s-cedilla and s-comma.)
Data for Task 2
Our data for Task 2 come from the Universal Dependencies treebanks (UD; Nivre et al., 2018, v2.3), which provide pre-defined training, development, and test splits and annotations in a unified annotation schema for morphosyntax and dependency relationships. Unlike the 2018 cloze task, which used UD data, we require no manual data preparation and are able to leverage all 107 monolingual treebanks. As is typical, data are presented in CoNLL-U format,[6] although we modify the morphological feature and lemma fields.
Data conversion
The morphological annotations for the 2019 shared task were converted to the UniMorph schema (Kirov et al., 2018) according to McCarthy et al. (2018), who provide a deterministic mapping that increases agreement across languages. This also moves the part of speech into the bundle of morphological features. We do not attempt to individually correct any errors in the UD source material. Further, some languages received additional pre-processing. In the Finnish data, we removed morpheme boundaries that were present in the lemmata (e.g., puhe#kieli → puhekieli 'spoken+language'). Russian lemmata in the GSD treebank were presented in all uppercase; to match the 2018 shared task, we lowercased these. In development and test data, all fields except for the form and the index within the sentence were struck.

[6] https://universaldependencies.org/format.html
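For concreteness, a bare-bones reader for such files might look like this; column indices follow the standard CoNLL-U layout, and in development and test data the lemma and feature columns simply hold placeholders.

```python
# Bare-bones CoNLL-U reader. Columns are ID, FORM, LEMMA, UPOS, XPOS,
# FEATS, ...; in this task the POS lives inside the converted feature
# bundle, and struck fields appear as "_".
def read_conllu(path):
    sentence = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if line.startswith("#"):
                continue                  # skip sentence-level comments
            if not line:
                if sentence:
                    yield sentence
                sentence = []
                continue
            cols = line.split("\t")
            sentence.append({"id": cols[0], "form": cols[1],
                             "lemma": cols[2], "feats": cols[5]})
    if sentence:
        yield sentence
```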
Baselines
Task 1 Baseline
We include four neural sequence-to-sequence models mapping the lemma to inflected word forms: soft attention (Luong et al., 2015), non-monotonic hard attention (Wu et al., 2018), monotonic hard attention, and a variant with an offset-based transition distribution. Neural sequence-to-sequence models with soft attention (Luong et al., 2015) have dominated previous SIGMORPHON shared tasks (Cotterell et al., 2017). Wu et al. (2018) instead model the alignment between characters in the lemma and the inflected word form explicitly with hard attention, and learn this alignment and transduction jointly. Follow-up work shows that enforcing strict monotonicity with hard attention is beneficial in tasks such as morphological inflection, where the transduction is mostly monotonic. The encoder is a biLSTM while the decoder is a left-to-right LSTM. All models use multiplicative attention and have roughly the same number of parameters. In each model, a morphological tag is fed to the decoder along with target character embeddings to guide the decoding. During the training of the hard attention model, dynamic programming is applied to marginalize over all latent alignments exactly.
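The exact marginalization amounts to a forward algorithm over alignment positions. The toy sketch below works with probabilities rather than the log-space scores a real implementation would use, and the uniform prior over the first alignment position is an assumption of this sketch.

```python
# Toy forward algorithm summing over all monotonic alignments, in the
# spirit of the hard-attention baseline described above.
import numpy as np

def monotonic_marginal(emit: np.ndarray, trans: np.ndarray) -> float:
    """emit[t, j]  = p(y_t | x_j) for target step t, source position j.
    trans[i, j] = p(a_t = j | a_{t-1} = i), with trans[i, j] = 0 for
    j < i so that alignments can only move left to right."""
    T, J = emit.shape
    alpha = emit[0] * (1.0 / J)            # p(y_1, a_1 = j)
    for t in range(1, T):
        alpha = emit[t] * (alpha @ trans)  # sum over previous positions
    return float(alpha.sum())              # p(y_1 .. y_T)
```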
Task 2 Baselines
Non-neural (Müller et al., 2015): The Lemming model is a log-linear model that performs joint morphological tagging and lemmatization. The model is globally normalized with the use of a second-order linear-chain CRF. To efficiently calculate the partition function, the choice of lemmata is pruned with the use of pre-extracted edit trees.
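Edit trees generalize simple string rewrite rules. The simplified stand-in below captures the pruning idea with a cruder "drop k trailing characters, append a suffix" rule; the recursion over longest common substrings in real edit trees is omitted.

```python
# Simplified stand-in for Lemming's edit trees: a lemmatization rule of
# the form "drop k trailing characters, then append suffix s", derived
# from the longest common prefix of a form and its lemma. The pruning
# idea (only rules seen in training are candidates at decoding time)
# is the same as for real edit trees.
def suffix_rule(form: str, lemma: str):
    p = 0
    while p < min(len(form), len(lemma)) and form[p] == lemma[p]:
        p += 1
    return len(form) - p, lemma[p:]        # (chars to drop, suffix)

def apply_rule(form: str, rule) -> str:
    drop, suffix = rule
    return (form[:-drop] if drop else form) + suffix

assert apply_rule("barking", suffix_rule("barking", "bark")) == "bark"
```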
Neural (Malaviya et al., 2019): This is a state-of-the-art neural model that also performs joint morphological tagging and lemmatization, but additionally accounts for exposure bias with the application of maximum likelihood estimation (MLE). The model stitches the tagger and lemmatizer together with the use of jackknifing (Agić and Schluter, 2017) to expose the lemmatizer to the errors made by the tagger model during training. The morphological tagger is based on a character-level biLSTM embedder that produces the embedding for a word, and a word-level biLSTM tagger that predicts a morphological tag sequence for each word in the sentence. The lemmatizer is a neural sequence-to-sequence model that uses the decoded morphological tag sequence from the tagger as an additional attribute. The model uses hard monotonic attention instead of standard soft attention, along with a dynamic programming based training scheme.
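The jackknifing step can be sketched as follows; `train_tagger` and `tag` are hypothetical stand-ins for the actual tagger training and decoding routines, not part of the cited system's code.

```python
# Sketch of jackknifing: each fold of the training data is tagged by a
# tagger trained on the remaining folds, so the downstream lemmatizer
# is trained on predicted (noisy) tags instead of gold tags.
def jackknife(sentences, train_tagger, tag, k=10):
    folds = [sentences[i::k] for i in range(k)]
    retagged = []
    for i, held_out in enumerate(folds):
        rest = [s for j, fold in enumerate(folds) if j != i for s in fold]
        tagger = train_tagger(rest)
        retagged.extend(tag(tagger, s) for s in held_out)
    return retagged
```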
Results
The SIGMORPHON 2019 shared task received 30 submissions (14 for Task 1 and 16 for Task 2) from 23 teams. In addition, the organizers' baseline systems were evaluated.
Task 1 Results
Five teams participated in the first task, with a variety of methods aimed at leveraging the cross-lingual data to improve system performance. The University of Alberta (UAlberta) performed a focused investigation on four language pairs, training cognate-projection systems from external cognate lists. Two methods were considered: one which trained a high-resource neural encoder-decoder and projected the test data into the HRL, and one that projected the HRL data into the LRL and trained a combined system. Results demonstrated that certain language pairs may be amenable to such methods. The Tuebingen University submission (Tuebingen) aligned source and target to learn a set of edit actions, with both linear and neural classifiers that independently learned to predict action sequences for each morphological category. Adding in the cross-lingual data only led to modest gains.
AX-Semantics combined the low- and high-resource data to train an encoder-decoder seq2seq model, optionally also implementing domain adaptation methods to focus later epochs on the target language.
The CMU submission first attends over a decoupled representation of the desired morphological sequence before using the updated decoder state to attend over the character sequence of the lemma. Secondly, in order to reduce the bias of the decoder's language model, they hallucinate two types of data that encourage common affixes and character copying. Simply allowing the model to learn to copy characters for several epochs significantly outperforms the task baseline, while further improvements are obtained through fine-tuning. Making use of an adversarial language discriminator, cross-lingual gains are highly correlated with linguistic similarity, while augmenting the data with hallucinated forms and multiple related target languages further improves the model.
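In the spirit of the hallucination idea (and not the submitted system's exact recipe), one can keep the affix material of an attested pair and replace the shared stem with random characters:

```python
# Rough sketch of hallucinating training pairs: keep the affixes of an
# attested (lemma, form) pair and swap the shared stem for random
# characters from the language's alphabet.
import random
import string

def hallucinate(lemma, form, alphabet=string.ascii_lowercase, rng=random):
    best = ""                              # longest common substring
    for i in range(len(lemma)):
        for j in range(i + len(best) + 1, len(lemma) + 1):
            if lemma[i:j] in form:
                best = lemma[i:j]
    if len(best) < 3:
        return lemma, form                 # too little shared material
    stem = "".join(rng.choice(alphabet) for _ in best)
    return lemma.replace(best, stem, 1), form.replace(best, stem, 1)
```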
The system from IT-IST also attends separately to tags and lemmas, using a gating mechanism to interpolate the importance of the individual attentions. By combining the gated dual-head attention with a SparseMax activation function, they are able to jointly learn stem and affix modifications, improving significantly over the baseline system.
The relative system performance is described in Table 5, which shows the average per-language accuracy of each system. The table reflects the fact that some teams submitted more than one system (e.g. Tuebingen-1 & Tuebingen-2 in the table).
Task 2 Results
Nine teams submitted system papers for Task 2, with several interesting modifications to either the baseline or other prior work that led to modest improvements.
Charles-Saarland achieved the highest overall tagging accuracy by leveraging multilingual BERT embeddings fine-tuned on a concatenation of all available languages, effectively transporting the cross-lingual objective of Task 1 into Task 2. Lemmas and tags are decoded separately (with a joint encoder and separate attention); lemmas are a sequence of edit actions, while tags are calculated jointly. (There is no splitting of tags into features; tags are atomic.) CBNU instead lemmatizes using a transformer network, while performing tagging with a multilayer perceptron with biaffine attention. Input words are first lemmatized and then pipelined to the tagger, which produces atomic tag sequences (i.e., no splitting of features).
The team from Istanbul Technical University (ITU) jointly produces lemmatic edit-actions and morphological tags via a two level encoder (first word embeddings, and then context embeddings) and separate decoders. Their system slightly improves over the baseline lemmatization, but significantly improves tagging accuracy.
The team from the University of Groningen (RUG) also uses separate decoders for lemmatization and tagging, but uses ELMo to initialize the contextual embeddings, leading to large gains in performance. Furthermore, joint training on related languages further improves results.
CMU approaches tagging differently from the multi-task decoding we have seen so far (the baseline is used for lemmatization). Making use of a hierarchical CRF that first predicts POS (which is subsequently looped back into the encoder), they then seek to predict each feature separately. In particular, predicting POS separately greatly improves results. An attempt to leverage gold typological information led to little gain; experiments suggest that the system is already learning the pertinent information.
The team from Ohio State University (OHIOSTATE) concentrates on predicting tags; the baseline lemmatizer is used for lemmatization. To that end, they make use of a dual decoder that first predicts features given only the word embedding as input; the predictions are fed to a GRU seq2seq, which then predicts the sequence of tags.
The UNT HiLT+Ling team investigates a low-resource setting of the tagging task, using parallel Bible data to learn a translation matrix between English and the target language, learning morphological tags through analogy with English.
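The translation-matrix idea in miniature: given embeddings for word pairs aligned through parallel text, fit a linear map W from English vectors to target-language vectors by least squares, in the style of Mikolov et al.'s translation matrix. The arrays below are random stand-ins for real embedding matrices, and the exact UNT HiLT+Ling training setup is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
E_en = rng.normal(size=(500, 300))   # English embeddings, one row per pair
E_tgt = rng.normal(size=(500, 300))  # aligned target-language embeddings

# Solve min_W || E_en @ W - E_tgt ||_F
W, *_ = np.linalg.lstsq(E_en, E_tgt, rcond=None)

def translate(vec_en):
    return vec_en @ W                # project into the target space

print(translate(E_en[0]).shape)      # (300,)
```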
The UFAL-Prague team extends their submission from the UD shared task (multi-layer LSTM), replacing the pretrained embeddings with BERT, to great success (first in lemmatization, second in tagging). Although they predict complete tags, they use the individual features to regularize the decoder. Small gains are also obtained from joining multilingual corpora and ensembling.
CUNI-Malta performs lemmatization as operations over edit actions with LSTM and ReLU. Tagging is a bidirectional LSTM augmented by the edit actions (i.e., two-stage decoding), predicting features separately.
The Edinburgh system is a character-based LSTM encoder-decoder with attention, implemented in OpenNMT. It can be seen as an extension of the contextual lemmatization system Lematus (Bergmanis and Goldwater, 2018) to include morphological tagging, or alternatively as an adaptation of the morphological re-inflection system MED (Kann and Schütze, 2016) to incorporate context and perform analysis rather than re-inflection. Like these systems it uses a completely generic encoder-decoder architecture with no specific adaptation to the morphological processing task other than the form of the input. In the submitted version of the system, the input is split into short chunks corresponding to the target word plus one word of context on either side, and the system is trained to output the corresponding lemmas and tags for each three-word chunk.
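The chunked input format can be illustrated as follows: each training example is the target word with one word of context on either side, serialized at the character level. The separator symbols ("<lc>", "<rc>") and sentinels are illustrative; the actual system defines its own special tokens.

```python
def chars(w):
    # keep sentinel tokens intact, split real words into characters
    return w if w in ("<s>", "</s>") else " ".join(w)

def make_chunks(words):
    padded = ["<s>"] + words + ["</s>"]
    for i in range(1, len(padded) - 1):
        yield (chars(padded[i - 1]) + " <lc> " +
               chars(padded[i]) + " <rc> " +
               chars(padded[i + 1]))

for src in make_chunks(["the", "dogs", "ran"]):
    print(src)
# e.g. "t h e <lc> d o g s <rc> r a n" for the middle word, whose
# expected output would be the lemma plus tag, e.g. "d o g N;PL"
```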
Several teams relied on external resources to improve their lemmatization and feature analysis, in particular pre-trained embeddings. CHARLES-SAARLAND-2 and UFALPRAGUE-1 used pretrained contextual embeddings (BERT) provided by Google (Devlin et al., 2019). CBNU-1 used a mix of pre-trained embeddings from the CoNLL 2017 shared task and fastText. Further, some teams trained their own embeddings to aid performance.
Future Directions
In general, the application of typology to natural language processing (e.g., Gerz et al., 2018; Ponti et al., 2018) provides an interesting avenue for multilinguality. Further, our shared task was designed to leverage only a single helper language, though many may exist with lexical or morphological overlap with the target language. Techniques like those of Neubig and Hu (2018) may aid in designing universal inflection architectures. Neither task this year included unannotated monolingual corpora. Using such data is well-motivated from an L1-learning point of view, and may affect the performance of low-resource data settings.
In the case of inflection, an interesting future topic could involve departing from orthographic representation and using more IPA-like representations, i.e. transductions over pronunciations. Different languages, in particular those with idiosyncratic orthographies, may offer new challenges in this respect, although some work suggests that working with IPA or phonological distinctive features in this context yields very similar results to working with graphemes (Wiemerslage et al., 2018). Only one team tried to learn inflection in a multilingual setting, i.e. to use all training data to train one model. Such transfer learning is an interesting avenue of future research, but evaluation could be difficult: whether any cross-language transfer is actually being learned, versus whether having more data simply biases the networks to copy strings, is an evaluation step to disentangle; this question has been addressed by Jin and Kann (2017). Creating new data sets that accurately reflect learner exposure (whether L1 or L2) is also an important consideration in the design of future shared tasks. One pertinent facet of this is information about inflectional categories: often the inflectional information is insufficiently prescribed by the lemma, as with the Romanian verbal inflection classes or nominal gender in German.

Table 9: Task 2 Morph F1 scores
As we move toward multilingual models for morphology, it becomes important to understand which representations are critical or irrelevant for adapting to new languages; this may be probed in the style of Thompson et al. (2018), and it can be used as a first step toward designing systems that avoid "catastrophic forgetting" as they learn to inflect new languages (Thompson et al., 2019).
Future directions for Task 2 include exploring cross-lingual analysis-in stride with both Task 1 and Malaviya et al. (2018)-and leveraging these analyses in downstream tasks.
Conclusions
The SIGMORPHON 2019 shared task provided a type-level evaluation on 100 language pairs in 79 languages and a token-level evaluation on 107 treebanks in 66 languages, of systems for inflection and analysis. On task 1 (low-resource inflection with cross-lingual transfer), 14 systems were submitted, while on task 2 (lemmatization and morphological feature analysis), 16 systems were submitted. All used neural network models, continuing a trend in past years' shared tasks and other recent work on morphology.
In task 1, gains from cross-lingual training were generally modest, with gains positively correlating with the linguistic similarity of the two languages. In the second task, several methods were implemented by multiple groups, with the most successful systems implementing variations of multi-headed attention, multi-level encoding, multiple decoders, and ELMo and BERT contextual embeddings.
We have released the training, development, and test sets, and expect these datasets to provide a useful benchmark for future research into learning of inflectional morphology and string-to-string transduction.
Table 1: Sample language pair and data format for Task 1

High-resource source language training data (Spanish):
tocar   "tocando"   V;V.PTCP;PRS
bailar  "bailaba"   V;PST;IPFV;3;SG;IND
mentir  "mintió"    V;PST;PFV;3;SG;IND
...
Table 2: Task 1 Team Scores, averaged across all languages; * indicates submissions were only applied to a subset of languages, making scores incomparable. † indicates that additional resources were used for training.
Table 3: Task 1 Accuracy scores
Table 4: Task 1 Levenshtein scores

HRL-LRL  Baseline  Best  Best team
adyghe-kabardian  0.04  0.03  Tuebingen-02
hungarian-livonian  2.56  1.81  it-ist-02
albanian-breton  1.30  0.44  it-ist-02
hungarian-votic  2.47  1.11  it-ist-01
arabic-classical-syriac  0.46  0.10  CMU-03
irish-breton  1.57  0.38  CMU-03
arabic-maltese  1.42  1.37  CMU-03
irish-cornish  2.00  1.56  it-ist-01
arabic-turkmen  0.46  0.32  CMU-03
irish-old-irish  3.30  3.12  it-ist-02
armenian-kabardian  0.21  0.14  CMU-03 / it-ist-01
irish-scottish-gaelic  0.96  1.06  CMU-03
asturian-occitan  1.74  0.80  it-ist-01
italian-friulian  1.03  0.72  it-ist-02
bashkir-azeri  1.64  0.69  it-ist-02
italian-ladin  0.79  0.60  CMU-03
bashkir-crimean-tatar  0.39  0.42  CMU-03
italian-maltese  1.39  1.23  CMU-03
bashkir-kazakh  0.32  0.10  it-ist-01
italian-neapolitan  0.40  0.36  it-ist-02
bashkir-khakas  0.18  0.04  it-ist-02
kannada-telugu  0.60  0.14  CMU-03
bashkir-tatar  0.46  0.33  CMU-03
kurmanji-sorani  2.56  0.65  CMU-03
bashkir-turkmen  0.10  0.12  it-ist-01
latin-czech  2.77  1.14  CMU-03
basque-kashubian  1.16  0.42  CMU-03
latvian-lithuanian  2.21  1.69  CMU-03
belarusian-old-irish  3.90  3.14  CMU-03
latvian-scottish-gaelic  1.16  1.00  CMU-03
bengali-greek  2.86  0.59  CMU-03
persian-azeri  1.35  0.74  CMU-03
bulgarian-old-church-slavonic  1.14  1.06  CMU-03
persian-pashto  1.70  1.54  CMU-03
czech-kashubian  0.84  0.36  CMU-03
polish-kashubian  0.34  0.34  CMU-03
czech-latin  2.95  1.36  CMU-03
polish-old-church-slavonic  1.22  0.96  CMU-03
danish-middle-high-german  0.50  0.38  it-ist-02
portuguese-russian  1.70  1.16  CMU-03
danish-middle-low-german  1.44  1.26  it-ist-01
romanian-latin  3.05  1.35  CMU-03
danish-north-frisian  2.78  2.11  CMU-03
russian-old-church-slavonic  1.33  0.86  CMU-03
danish-west-frisian  1.57  1.27  it-ist-02
russian-portuguese  1.04  0.66  CMU-03
danish-yiddish  0.91  0.72  Tuebingen-01
sanskrit-bengali  1.79  1.13  CMU-03
dutch-middle-high-german  0.44  0.36  it-ist-02
sanskrit-pashto  1.54  1.27  it-ist-02
dutch-middle-low-german  1.34  1.16  it-ist-02
slovak-kashubian  0.60  0.34  CMU-03
dutch-north-frisian  2.67  1.99  CMU-03
slovene-old-saxon  2.23  1.14  CMU-03
dutch-west-frisian  2.18  1.18  it-ist-02
sorani-irish  2.40  0.99  CMU-03
dutch-yiddish  0.53  0.72  Tuebingen-01
spanish-friulian  1.01  0.61  CMU-03
english-murrinhpatha  1.68  1.10  it-ist-02
spanish-occitan  1.14  0.57  it-ist-01
english-north-frisian  2.73  2.22  it-ist-02
swahili-quechua  3.90  0.56  CMU-03
english-west-frisian  1.48  1.26  it-ist-02
turkish-azeri  0.35  0.22  it-ist-01
estonian-ingrian  1.56  1.24  it-ist-02
turkish-crimean-tatar  0.24  0.14  CMU-03
estonian-karelian  0.52  0.62  it-ist-02
turkish-kazakh  0.34  0.16  it-ist-02
estonian-livonian  1.87  1.47  it-ist-02
turkish-khakas  0.80  0.06  it-ist-01
estonian-votic  1.55  1.17  it-ist-02
turkish-tatar  0.37  0.21  it-ist-02
finnish-ingrian  1.08  1.20  it-ist-02
turkish-turkmen  0.24  0.02  it-ist-01
finnish-karelian  0.64  0.42  it-ist-01
urdu-bengali  1.12  0.98  CMU-03
finnish-livonian  2.48  1.71  it-ist-01
urdu-old-english  1.72  1.20  CMU-03
finnish-votic  1.25  1.02  it-ist-02
uzbek-azeri  1.23  0.70  CMU-03
french-occitan  1.22  0.69  it-ist-01
uzbek-crimean-tatar  0.49  0.45  CMU-03
german-middle-high-german  0.44  0.32  it-ist-02
uzbek-kazakh  0.20  0.32  CMU-03
german-middle-low-german  1.24  1.16  it-ist-02
uzbek-khakas  0.24  0.18  it-ist-01
german-yiddish  0.46  0.72  Tuebingen-01
uzbek-tatar  0.48  0.35  CMU-03
greek-bengali  1.21  1.02  CMU-03
uzbek-turkmen  0.32  0.42  CMU-03
hebrew-classical-syriac  0.14  0.06  CMU-03
welsh-breton  0.90  0.31  CMU-03
hebrew-maltese  1.24  1.10  CMU-03
welsh-cornish  2.44  1.50  it-ist-01
hindi-bengali  1.18  0.72  UAlberta-02
welsh-old-irish  3.36  3.08  CMU-03
hungarian-ingrian  2.60  1.46  it-ist-01
welsh-scottish-gaelic  1.22  1.08  CMU-03
hungarian-karelian  0.90  0.50  it-ist-01
zulu-swahili  1.24  0.33  CMU-03
Table 5: Task 2 Team Scores, averaged across all treebanks; * indicates submissions were only applied to a subset of languages, making scores incomparable. † indicates that additional external resources were used for training, and ‡ indicates that training data were shared across languages or treebanks.
Table 6: Task 2 Lemma Accuracy scores
Table 7: Task 2 Lemma Levenshtein scores
Table 8: Task 2 Morph Accuracy scores

Language (Treebank)  Baseline  Best  Best team
UD Afrikaans-AfriBooms  92.87  99.40  UFALPRAGUE-01
UD Akkadian-PISANDUB  80.41  89.06  CHARLES-SAARLAND-02
UD Amharic-ATT  87.57  93.15  UFALPRAGUE-01
UD Ancient Greek-Perseus  88.97  96.72  UFALPRAGUE-01
UD Ancient Greek-PROIEL  93.55  97.88  UFALPRAGUE-01
UD Arabic-PADT  91.82  97.65  CHARLES-SAARLAND-02
UD Arabic-PUD  86.35  94.66  RUG-01
UD Armenian-ArmTDP  86.74  96.66  CHARLES-SAARLAND-02
UD Bambara-CRB  88.94  95.55  UFALPRAGUE-01
UD Basque-BDT  87.54  96.30  CHARLES-SAARLAND-02
UD Belarusian-HSE  78.80  95.68  CHARLES-SAARLAND-02
UD Breton-KEB  88.34  93.79  UFALPRAGUE-01
UD Bulgarian-BTB  93.85  99.18  CHARLES-SAARLAND-02
UD Buryat-BDT  80.94  90.50  UFALPRAGUE-01
UD Cantonese-HK  76.80  92.83  CHARLES-SAARLAND-02
UD Catalan-AnCora  95.73  99.45  CHARLES-SAARLAND-02
UD Chinese-CFL  82.05  93.21  UFALPRAGUE-01
UD Chinese-GSD  83.79  97.04  CHARLES-SAARLAND-02
UD Coptic-Scriptorium  93.56  97.17  UFALPRAGUE-01
UD Croatian-SET  90.39  97.82  CHARLES-SAARLAND-02
UD Czech-CAC  93.94  99.48  CHARLES-SAARLAND-02
UD Czech-CLTT  92.61  98.32  UFALPRAGUE-01
UD Czech-FicTree  90.32  98.90  CHARLES-SAARLAND-02
UD Czech-PDT  94.23  99.47  CHARLES-SAARLAND-02
UD Czech-PUD  85.73  98.23  UFALPRAGUE-01
UD Greek-GDT  77.44  95.95  UFALPRAGUE-01
UD Hebrew-HTB  81.15  97.67  CHARLES-SAARLAND-02
UD Hindi-HDTB  80.60  93.65  CHARLES-SAARLAND-02
UD Hungarian-Szeged  65.90  95.03  UFALPRAGUE-01
UD Indonesian-GSD  71.73  92.48  CHARLES-SAARLAND-02
UD Irish-IDT  67.66  86.37  UFALPRAGUE-01
UD Italian-ISDT  83.72  98.49  CHARLES-SAARLAND-02
UD Italian-ParTUT  83.51  98.72  UFALPRAGUE-01
UD Italian-PoSTWITA  87.98  97.90  CHARLES-SAARLAND-02
UD Italian-PUD  92.24  98.42  CHARLES-SAARLAND-02
UD Japanese-GSD  90.64  98.21  CHARLES-SAARLAND-02
UD Japanese-Modern  95.64  97.50  CHARLES-SAARLAND-02
UD Japanese-PUD  89.64  98.49  UFALPRAGUE-01
UD Komi Zyrian-IKDP  59.52  82.99  UFALPRAGUE-01
UD Komi Zyrian-Lattice  74.12  82.99  RUG-01 / RUG-02
UD Korean-GSD  85.90  96.27  CHARLES-SAARLAND-02
UD Korean-Kaist  89.45  97.58  CHARLES-SAARLAND-02
UD Korean-PUD  88.15  96.76  CHARLES-SAARLAND-02
UD Kurmanji-MG  86.54  91.28  UFALPRAGUE-01
UD Latin-ITTB  93.12  98.96  CHARLES-SAARLAND-02
UD Latin-Perseus  78.91  94.65  UFALPRAGUE-01
UD Latin-PROIEL  91.42  97.87  CHARLES-SAARLAND-02
UD Latvian-LVTB  89.55  98.04  CHARLES-SAARLAND-02
UD Lithuanian-HSE  67.39  87.97  CHARLES-SAARLAND-02
UD Marathi-UFAL  69.71  80.19  CHARLES-SAARLAND-02
UD Naija-NSC  76.73  95.47  UFALPRAGUE-01
UD North Sami-Giella  85.45  95.33  CHARLES-SAARLAND-02
UD Norwegian-Bokmaal  93.17  99.02  CHARLES-SAARLAND-02
UD Norwegian-Nynorsk  92.85  98.97  CHARLES-SAARLAND-02
UD Norwegian-NynorskLIA  89.21  97.39  CHARLES-SAARLAND-02
UD Old Church Slavonic-PROIEL  91.17  97.13  UFALPRAGUE-01
UD Persian-Seraji  93.76  98.68  UFALPRAGUE-01
UD Polish-LFG  88.73  98.86  CHARLES-SAARLAND-02
UD Tamil-TTB  73.33  91.63  UFALPRAGUE-01
UD Turkish-IMST  62.94  92.27  UFALPRAGUE-01
UD Turkish-PUD  66.30  87.63  RUG-01 (post deadline)
UD Ukrainian-IU  63.59  95.78  CHARLES-SAARLAND-02
UD Upper Sorbian-UFAL  57.70  87.02  UFALPRAGUE-01
UD Urdu-UDTB  69.97  80.90  UFALPRAGUE-01
UD Vietnamese-VTB  69.42  94.54  CHARLES-SAARLAND-02
UD Yoruba-YTB  73.26  93.80  CMU-DataAug-01
The Basque language data was extracted from a manually designed finite-state morphological analyzer (Alegria et al., 2009). Murrinhpatha data was donated by John Mansfield; it is described in Mansfield (2019).
Acknowledgments

MS has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 771113).
References

Zeljko Agić and Natalie Schluter. 2017. How (not) to train a dependency parser: The curious case of jackknifing part-of-speech taggers. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 679-684, Vancouver, Canada. Association for Computational Linguistics.

Inaki Alegria, Izaskun Etxeberria, Mans Hulden, and Montserrat Maritxalar. 2009. Porting Basque morphological grammars to foma, an open-source tool. In International Workshop on Finite-State Methods and Natural Language Processing, pages 105-113. Springer.

Toms Bergmanis and Sharon Goldwater. 2018. Context sensitive neural lemmatization with Lematus. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1391-1400, New Orleans, Louisiana. Association for Computational Linguistics.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Arya D. McCarthy, Katharina Kann, Sebastian Mielke, Garrett Nicolai, Miikka Silfverberg, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. The CoNLL-SIGMORPHON 2018 shared task: Universal morphological reinflection. In Proceedings of the CoNLL-SIGMORPHON 2018 Shared Task: Universal Morphological Reinflection, pages 1-27, Brussels. Association for Computational Linguistics.

Ryan Cotterell, Christo Kirov, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2017. CoNLL-SIGMORPHON 2017 shared task: Universal morphological reinflection in 52 languages. In Proceedings of the CoNLL SIGMORPHON 2017 Shared Task: Universal Morphological Reinflection, pages 1-30.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Daniela Gerz, Ivan Vulić, Edoardo Ponti, Jason Naradowsky, Roi Reichart, and Anna Korhonen. 2018. Language modeling for morphologically rich languages: Character-aware modeling for word-level prediction. Transactions of the Association for Computational Linguistics, 6:451-465.

Huiming Jin and Katharina Kann. 2017. Exploring cross-lingual transfer of morphological knowledge in sequence-to-sequence models. In Proceedings of the First Workshop on Subword and Character Level Models in NLP, pages 70-75, Copenhagen, Denmark. Association for Computational Linguistics.

Katharina Kann and Hinrich Schütze. 2016. MED: The LMU system for the SIGMORPHON 2016 shared task on morphological reinflection. In Proceedings of the 14th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 62-70, Berlin, Germany. Association for Computational Linguistics.

Aleksandr E. Kibrik. 1998. Archi. In Andrew Spencer and Arnold M. Zwicky, editors, The Handbook of Morphology, pages 455-476. Oxford: Blackwell Publishers.

Christo Kirov, Ryan Cotterell, John Sylak-Glassman, Géraldine Walther, Ekaterina Vylomova, Patrick Xia, Manaal Faruqui, Sebastian J. Mielke, Arya D. McCarthy, Sandra Kübler, David Yarowsky, Jason Eisner, and Mans Hulden. 2018. UniMorph 2.0: Universal Morphology. In Proceedings of the 11th Language Resources and Evaluation Conference, Miyazaki, Japan. European Language Resources Association.

Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.

Chaitanya Malaviya, Matthew R. Gormley, and Graham Neubig. 2018. Neural factor graph models for cross-lingual morphological tagging. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2653-2663, Melbourne, Australia. Association for Computational Linguistics.

Chaitanya Malaviya, Shijie Wu, and Ryan Cotterell. 2019. A simple joint model for improved contextual neural lemmatization. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1517-1528, Minneapolis, Minnesota. Association for Computational Linguistics.

John Mansfield. 2019. Murrinhpatha Morphology and Phonology, volume 653. Walter de Gruyter GmbH & Co KG.

Arya D. McCarthy, Miikka Silfverberg, Ryan Cotterell, Mans Hulden, and David Yarowsky. 2018. Marrying Universal Dependencies and Universal Morphology. In Proceedings of the Second Workshop on Universal Dependencies (UDW 2018), pages 91-101, Brussels, Belgium. Association for Computational Linguistics.

Thomas Müller, Ryan Cotterell, Alexander Fraser, and Hinrich Schütze. 2015. Joint lemmatization and morphological tagging with LEMMING. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2268-2274, Lisbon, Portugal. Association for Computational Linguistics.

Graham Neubig and Junjie Hu. 2018. Rapid adaptation of neural machine translation to new languages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 875-880, Brussels, Belgium. Association for Computational Linguistics.

Joakim Nivre, Mitchell Abrams, Željko Agić, et al. 2018. Universal Dependencies 2.3. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.

Edoardo Maria Ponti, Helen O'Horan, Yevgeni Berzak, Ivan Vulić, Roi Reichart, Thierry Poibeau, Ekaterina Shutova, and Anna Korhonen. 2018. Modeling language variation and universals: A survey on typological linguistics for natural language processing. CoRR, abs/1807.00914.

John Sylak-Glassman, Christo Kirov, Matt Post, Roger Que, and David Yarowsky. 2015a. A universal feature schema for rich morphological annotation and fine-grained cross-lingual part-of-speech tagging. In Cerstin Mahlow and Michael Piotrowski, editors, Proceedings of the 4th Workshop on Systems and Frameworks for Computational Morphology (SFCM), Communications in Computer and Information Science, pages 72-93. Springer, Berlin.

John Sylak-Glassman, Christo Kirov, David Yarowsky, and Roger Que. 2015b. A language-independent feature schema for inflectional morphology. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 674-680, Beijing, China. Association for Computational Linguistics.

Wilson L. Taylor. 1953. "Cloze procedure": A new tool for measuring readability. Journalism Bulletin, 30(4):415-433.

Brian Thompson, Jeremy Gwinnup, Huda Khayrallah, Kevin Duh, and Philipp Koehn. 2019. Overcoming catastrophic forgetting during domain adaptation of neural machine translation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2062-2068, Minneapolis, Minnesota. Association for Computational Linguistics.

Brian Thompson, Huda Khayrallah, Antonios Anastasopoulos, Arya D. McCarthy, Kevin Duh, Rebecca Marvin, Paul McNamee, Jeremy Gwinnup, Tim Anderson, and Philipp Koehn. 2018. Freezing subnetworks to analyze domain adaptation in neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 124-132, Brussels, Belgium. Association for Computational Linguistics.

Géraldine Walther and Benoît Sagot. 2010. Developing a large-scale lexicon for a less-resourced language: General methodology and preliminary experiments on Sorani Kurdish. In Proceedings of the 7th SaLTMiL Workshop on Creation and Use of Basic Lexical Resources for Less-Resourced Languages (LREC 2010 Workshop), Valetta, Malta.

Géraldine Walther, Benoît Sagot, and Karën Fort. 2010. Fast development of basic NLP tools: Towards a lexicon and a POS tagger for Kurmanji Kurdish. In International Conference on Lexis and Grammar.

Adam Wiemerslage, Miikka Silfverberg, and Mans Hulden. 2018. Phonological features for morphological inflection. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 161-166, Brussels, Belgium. Association for Computational Linguistics.

Shijie Wu and Ryan Cotterell. 2019. Exact hard monotonic attention for character-level transduction. arXiv preprint arXiv:1905.06319.

Shijie Wu, Pamela Shapiro, and Ryan Cotterell. 2018. Hard non-monotonic attention for character-level transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4425-4438, Brussels, Belgium. Association for Computational Linguistics.

Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575, Austin, Texas. Association for Computational Linguistics.
| [] |
[
"Synergistic Union of Word2Vec and Lexicon for Domain Specific Semantic Similarity",
"Synergistic Union of Word2Vec and Lexicon for Domain Specific Semantic Similarity"
] | [
"Keet Sugathadasa \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n",
"Buddhi Ayesha \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n",
"Nisansa De Silva \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n",
"Amal Shehan Perera \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n",
"Vindula Jayawardana \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n",
"Dimuthu Lakmal \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n",
"Madhavi Perera \nDepartment of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n\n"
] | [
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n",
"Department of Computer Science & Engineering\nUniversity of Moratuwa\nUniversity of London International Programmes University of London\n"
] | [] | Semantic similarity measures are an important part in Natural Language Processing tasks. However Semantic similarity measures built for general use do not perform well within specific domains. Therefore in this study we introduce a domain specific semantic similarity measure that was created by the synergistic union of word2vec, a word embedding method that is used for semantic similarity calculation and lexicon based (lexical) semantic similarity methods. We prove that this proposed methodology out performs word embedding methods trained on generic corpus and methods trained on domain specific corpus but do not use lexical semantic similarity methods to augment the results. Further, we prove that text lemmatization can improve the performance of word embedding methods. | 10.1109/iciinfs.2017.8300343 | [
"https://arxiv.org/pdf/1706.01967v2.pdf"
] | 3,500,363 | 1706.01967 | 2cb6a9db4ce27bede0abd5a6a90470df2b1e8b3e |
Synergistic Union of Word2Vec and Lexicon for Domain Specific Semantic Similarity

Keet Sugathadasa, Buddhi Ayesha, Nisansa De Silva, Amal Shehan Perera, Vindula Jayawardana, Dimuthu Lakmal, Madhavi Perera

Department of Computer Science & Engineering, University of Moratuwa
University of London International Programmes, University of London
Keywords: Word Embedding, Semantic Similarity, Neural Networks, Lexicon, word2vec
Semantic similarity measures are an important part in Natural Language Processing tasks. However Semantic similarity measures built for general use do not perform well within specific domains. Therefore in this study we introduce a domain specific semantic similarity measure that was created by the synergistic union of word2vec, a word embedding method that is used for semantic similarity calculation and lexicon based (lexical) semantic similarity methods. We prove that this proposed methodology out performs word embedding methods trained on generic corpus and methods trained on domain specific corpus but do not use lexical semantic similarity methods to augment the results. Further, we prove that text lemmatization can improve the performance of word embedding methods.
I. INTRODUCTION
Semantic similarity measurements based on linguistic features are a fundamental component of almost all Natural Language Processing (NLP) tasks: Information Retrieval, Information Extraction, and Natural Language Understanding [1]. In NLP-based Information Retrieval (IR), semantic similarity feeds into the task of obtaining the items that are most relevant to a query, whereas in Information Extraction (IE) it feeds into the task of correctly recognizing the linguistic elements to be extracted, be they Part of Speech (PoS) tags or named entities in Named Entity Recognition (NER). In the case of Text Understanding, also known as Natural Language Understanding (NLU), it helps in identifying semantic connections between elements of the document being analyzed.
Law and order could be regarded as the invisible cloak that controls human behavior, to the extent possible, in the name of justice. Thus, in terms of maintaining social order, the presence of law within society is mandatory. John Stuart Mill articulated a principle in On Liberty, where he stated that "The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others" [2]. Being such a vital field, the acquisition of laws and legal documents through technological means is an increasingly pressing necessity. Ample justification for the need for semantic disambiguation in the legal domain can also be found in seminal cases such as Fagan v MPC and R v Woollin. Therefore we selected law as the domain for this study.
The legal domain contains a considerable amount of domain-specific jargon whose etymology lies mainly in Latin and English. To complicate matters further, in certain cases the meanings of words and contexts differ according to individual legal officers' interpretations.
Another field which suffers similarly from this issue is the medical industry [3]. The non-systematic organization and complexity of medical documents have left the medical domain without proper representations for its terminology. PubMed [4] attempts to remedy this, and later studies such as [5] utilize these repositories. Therefore, it is possible to claim that the problem addressed in this study is not limited to the legal domain but transcends into numerous other domains as well.
Methods that treat words as independent atomic units are not sufficient to capture the expressiveness of language [6]. One solution is word context learning methods [7]-[9]; another is lexicon based (lexical) semantic similarity methods [10]. Both of these approaches try to capture the semantic and syntactic information of a word. In this study we propose a methodology for a synergistic union of both of these methods. First, for word context learning, we used a word embedding [7] method, word2vec [6]. Then we used a number of lexical semantic similarity measures [11]-[13] to augment and improve the result.
The hypothesis of this study has three main claims: (1) a word embedding model trained on a small domain-specific corpus can outperform a word embedding model trained on a large but generic corpus; (2) word lemmatization, which collapses inflected forms of words into their lemmas, improves the performance of a word embedding model; (3) the use of lexical semantic similarity measures, trained over a machine learning system, can improve the overall system performance. Our results prove all of these claims to be true.
The structure of the paper is as follows. Section II gives a brief overview on the current tools being used and domains that have tackled this problem of word representations. Section III gives a description on the methodology being used in this research in order to obtain the results and conclusions as necessary. That is followed by Section IV that presents and analyses results. The paper ends with Section V which gives the conclusion and discusses future work.
II. BACKGROUND AND RELATED WORK
This section illustrates the background of the techniques used in this study and work carried out by others in various areas relevant to this research. The subsections given below are the important key areas in this study.
A. Lexical Semantic Similarity Measures
Lexical semantic similarity of two entities is a measure of the likeness of the semantic content of those entities, most commonly calculated with the help of the topological similarity existing within an ontology such as WordNet [14]. Wu and Palmer proposed a method to give the similarity between two words in the 0 to 1 range [11]. In comparison, Jiang and Conrath proposed a method to measure the lexical semantic similarity between word pairs using corpus statistics and lexical taxonomy [12]. Hirst & St-Onge's system [13] quantifies the degree to which the relevant synsets are connected by a path that is not too long and that does not change direction too often. The strengths of each of these algorithms were evaluated in [10] by means of [15].
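Two of these measures are available off the shelf in NLTK's WordNet interface, which makes them easy to experiment with. In the sketch below, the synset choices are illustrative (legal-flavoured senses picked by hand), and the WordNet and information-content corpora must be downloaded once before use.

```python
import nltk
# nltk.download("wordnet"); nltk.download("wordnet_ic")  # one-time setup
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

judge = wn.synset("judge.n.01")
lawyer = wn.synset("lawyer.n.01")

print(judge.wup_similarity(lawyer))      # Wu & Palmer, in the [0, 1] range

ic = wordnet_ic.ic("ic-brown.dat")       # corpus statistics for Jiang-Conrath
print(judge.jcn_similarity(lawyer, ic))  # Jiang & Conrath
```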
B. Word Vector Embedding
Traditionally, in Natural Language Processing systems, words are treated as atomic units ignoring the correlation between the words as they are represented just by indices in a vocabulary [6]. To solve the inadequacies of that approach, distributed representation of words and phrases through word embeddings was proposed [16]. The idea is to create vector representations for each of the words in a text document along with word meanings and relationships between the words all mapped to a common vector space.
A number of Word Vector Embedding systems have been proposed, such as GloVe [17], Latent Dirichlet Allocation (LDA) [18], and word2vec (https://code.google.com/p/word2vec/) [19]. GloVe uses a word to neighboring word mapping when learning dense embeddings, via a matrix factorization mechanism. LDA uses a similar approach via matrices, but the concept is based on mapping words to relevant sets of documents. word2vec uses a neural network based approach that also relies on a word to neighboring word mapping. Due to the flexibility and features it provides in terms of parameter passing when training a model on a text corpus, we use word2vec in this study. word2vec supports two main training models: Skip-gram [20] and Continuous Bag Of Words (CBOW) [19].
C. Legal Information Systems
Schweighofer [21] claims that there is a huge vacuum that should be addressed in eradicating the information crisis that applications in the field of law suffer from. This vacuum is evident in the fact that, despite their importance, there is a scarcity of legal information systems. Even though the two main commercial systems, WestLaw and LexisNexis, are widely used, they only provide query-based searching, where legal officers need to remember predefined keywords when querying for relevant legal information; thus there is still a hassle in accessing this information.
One of the most popular legal information retrieval systems is KONTERM [21], which was developed to represent document structures and contents. However, it suffered from scalability issues. The existing implementation closest to our proposed model is Gov2Vec [22], a system that creates vector representations of words in the legal domain by building a vocabulary across corpora of supreme court opinions, presidential actions, and official summaries of congressional bills. It uses a neural network [7] to predict a target word from the mean of its context words' vectors. However, the text corpora used there were not sufficient to represent the entire legal domain. In addition, the trained Gov2Vec model is not available for legal professionals to use or to be tested against.
III. METHODOLOGY
This section describes the research that was carried out. Each subsection below addresses a component of the overall methodology. An overview of the methodology we propose is illustrated in Fig. 1.
The first phase of this study was to gather the necessary legal cases from online repositories. We obtained over 35000 legal case documents, pertaining to various areas of practice in law, from FindLaw [23] by web crawling. Because these documents span many areas of practice, the system generalizes well over many aspects of law.
A. Text Lemmatization
The linguistic process of mapping inflected forms of a word to the word's core lemma is called lemmatization [24]. The crawled natural language text contains words in all inflected forms. However, the default word2vec model does not run a lemmatizer before the word vector embeddings are calculated, which results in each inflected form of a single word ending up with a separate embedding vector. This leads to several drawbacks and inefficiencies. Maintaining a separate vector for each inflected form of each word bloats the model and consumes memory unnecessarily, which is especially problematic when building the model. Further, having a separate vector for each inflected form weakens the model, because the evidence for words originating from the same lemma is distributed over multiple vectors. For example, when we search for words similar to the noun input "judge", we get the following similar words: Judge, judges, Judges. Similarly, for the verb input "train", the model returns: training, trains, trained. As shown in Fig. 1, we use the raw legal text to train the word2vec_LR model. For both the word2vec_LL and word2vec_LLS models, we first lemmatize the legal document corpus to map all inflected forms of the words to their respective lemmas. For this task we use the Stanford CoreNLP library [25].
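To make the effect of this step concrete, the following is a minimal sketch using NLTK's WordNetLemmatizer as a lightweight stand-in for the Java-based Stanford CoreNLP pipeline [25] that is actually used in the paper (NLTK's punkt and wordnet resources are assumed to be installed):

```python
# Minimal lemmatization sketch; a simplified stand-in for the Stanford CoreNLP
# pipeline [25], which is Java-based. Requires nltk data: punkt, wordnet.
import nltk
from nltk.stem import WordNetLemmatizer

lemmatizer = WordNetLemmatizer()

def lemmatize(text):
    tokens = nltk.word_tokenize(text)
    # Crude POS handling: try verb lemmatization first, then noun
    return [lemmatizer.lemmatize(lemmatizer.lemmatize(t.lower(), 'v'), 'n')
            for t in tokens]

print(lemmatize("The judges trained the Judges"))
# expected: ['the', 'judge', 'train', 'the', 'judge']
```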
B. Training word2vec models
In this stage we trained two word2vec models: one on the raw legal text corpus and the other on the lemmatized legal text corpus. Each input corpus included text from over 35000 legal case documents, amounting to 20 billion words in all, and the training took over 2 days. As mentioned in Section III-A, the model trained on the raw legal text corpus is the word2vec_LR model shown in Fig. 1, and the model trained on the lemmatized legal text corpus is word2vec_LL. Further, a clone of the trained word2vec_LL is passed forward to Section III-C in order to build the word2vec_LLS model. The following are the important parameters we specified when training these models:
• size (dimensionality): 200
• context window size: 10
• learning model: CBOW
• min-count: 5
• training algorithm: hierarchical softmax

For this phase, we used a neural network with a single hidden layer, as depicted in Fig. 2. As mentioned above, we picked the Continuous Bag Of Words approach for learning the weights, training the system to output a vector space of user-specified dimensionality. The rationale for picking the CBOW learning model over Skip-gram is that Skip-gram is said to be more accurate for infrequent words, whereas CBOW is faster by a factor of the window size, which also makes it more appropriate for larger text corpora.
In CBOW, the current (target) word is predicted from the context words within the specified context window.
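For reference, a sketch of this training configuration using the gensim implementation of word2vec (gensim 4 parameter names; the corpus iterable and the output file name are placeholders):

```python
# Sketch of the training configuration above using gensim's word2vec.
# `sentences` is assumed to be an iterable of token lists from the
# (raw or lemmatized) legal corpus.
from gensim.models import Word2Vec

model = Word2Vec(
    sentences,
    vector_size=200,   # size (dimensionality)
    window=10,         # context window size
    sg=0,              # CBOW learning model
    min_count=5,       # min-count
    hs=1, negative=0,  # hierarchical softmax
)
model.save('word2vec_LL.model')   # hypothetical file name
```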
In addition to the models trained above, we obtained the word2vec_G model, which was trained by Google on the Google News dataset. As shown in Fig. 1, this is a generic text corpus that contains data pertaining to a large number of topics; it is not fine-tuned to the legal domain, unlike the three models (word2vec_LR, word2vec_LL, and word2vec_LLS) that we train. However, Google's model was trained on around 100 billion words, amounting to a vocabulary of 3,000,000 unique phrases, and was built with a layer size of 300. Due to the general nature and massive scale of the word2vec_G model, comparing the results obtained by our models against the results obtained by this model showcases the effectiveness of a model trained on a specific domain, in applications of that domain, over a model trained on a generic domain applied to the same specific domain.
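The pre-trained Google News model can be loaded, for example, with gensim's KeyedVectors (the local file name is an assumption):

```python
# Loading the publicly released Google News vectors (word2vec_G) with gensim.
from gensim.models import KeyedVectors

word2vec_G = KeyedVectors.load_word2vec_format(
    'GoogleNews-vectors-negative300.bin', binary=True)
print(word2vec_G.most_similar('judge', topn=5))
```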
C. Lexical Semantic Similarity Enhancements
At this step we used established lexical semantic similarity measures to enhance the output. As mentioned in Section III-B, we take a clone of the trained word2vec_LL model for use in this process. Unlike the previous models, where the training procedure is internal to the word2vec model, here it was important that the model be trained explicitly using n-fold cross-validation. To this end, a training dataset was obtained in which each entry is a key-value pair: the key k is the lemma of a word in the legal domain, and the value is an array G, of length l, containing the lemmas of the words most relevant to k in the legal domain. The following paragraphs explain how the training and testing of the model proceed for the first fold of the n-fold cross-validation; the same process is carried out for each subsequent fold.

Fig. 2. word2vec neural network model: a neural network with a single hidden layer, used in the word2vec training process. The input and output layers each consist of v neurons, where v is the number of words in the vocabulary of the given raw text corpus. The hidden layer consists of n neurons, where n is the intended dimensionality of the vector representation of each word.

1) Mathematical model: The first step was to obtain an entry from the training set and to query the word2vec_LL model with the key lemma. Let us define an integer n such that n = Cl, where C > 1. When executing the query, the word2vec_LL model was instructed to return the n elements that best match the query, along with their word2vec similarity values. We call the resulting matrix R. R has n columns, where w_i is a word similar to k and d_i is the word2vec similarity between k and w_i. Equation 1 shows R.
$$R = \begin{bmatrix} w_1 & w_2 & w_3 & \cdots & w_n \\ d_1 & d_2 & d_3 & \cdots & d_n \end{bmatrix} \quad (1)$$
From R, we created the vectors W and D as shown in Equation 2 and Equation 3 respectively. Each w i and d i has the same value as they had in R.
$$W = \{w_1, w_2, w_3, \ldots, w_n\} \quad (2)$$
$$D = \{d_1, d_2, d_3, \ldots, d_n\} \quad (3)$$
We defined the following functions for words w_i and w_j. The lexical semantic similarity between w_i and w_j calculated using Wu & Palmer's model [11] was given by wup(w_i, w_j), the lexical semantic similarity calculated by Jiang & Conrath's model [12] was given by jcn(w_i, w_j), and hso(w_i, w_j) was used to denote the lexical semantic similarity calculated using Hirst & St-Onge's system [13].
Next we created the lexical semantic similarity matrix M. M is a 4×n matrix, where n has the same value as in Equation 1. The first-row element m_{1,i} of M was calculated by taking the Wu & Palmer similarity between k and w_i, where w_i is the element at index i of W; thus m_{1,i} = wup(k, w_i). Similarly, the second-row element m_{2,i} was calculated by taking the Jiang & Conrath similarity between k and w_i. The third row consists of the Hirst & St-Onge similarities, while the fourth row contains a series of 1s to act as the bias. The matrix M is shown in Equation 4.

$$M = \begin{bmatrix} wup(k, w_1) & wup(k, w_2) & \cdots & wup(k, w_n) \\ jcn(k, w_1) & jcn(k, w_2) & \cdots & jcn(k, w_n) \\ hso(k, w_1) & hso(k, w_2) & \cdots & hso(k, w_n) \\ 1 & 1 & \cdots & 1 \end{bmatrix} \quad (4)$$
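A sketch of how M could be assembled is given below. Note that this uses NLTK's WordNet interface, which provides Wu & Palmer and Jiang & Conrath similarities but not Hirst & St-Onge, so hso() is left as a placeholder rather than the WS4J implementation [15] used in the paper; taking the first synset of each word is also a simplification.

```python
# Illustrative sketch of building the 4 x n similarity matrix M of Eq. 4.
import numpy as np
from nltk.corpus import wordnet as wn
from nltk.corpus import wordnet_ic

ic = wordnet_ic.ic('ic-brown.dat')  # information content corpus for jcn

def first_synset(word):
    synsets = wn.synsets(word)
    return synsets[0] if synsets else None

def wup(a, b):
    sa, sb = first_synset(a), first_synset(b)
    return (sa.wup_similarity(sb) or 0.0) if sa and sb else 0.0

def jcn(a, b):
    sa, sb = first_synset(a), first_synset(b)
    try:
        return sa.jcn_similarity(sb, ic) if sa and sb else 0.0
    except Exception:          # jcn is only defined within one POS taxonomy
        return 0.0

def hso(a, b):
    return 0.0                 # placeholder: Hirst & St-Onge is not in NLTK

def build_M(k, W):
    # Three similarity rows plus a bias row of ones, as in Eq. 4.
    rows = [[wup(k, w) for w in W],
            [jcn(k, w) for w in W],
            [hso(k, w) for w in W],
            [1.0] * len(W)]
    return np.array(rows)
```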
We defined the normalizing function given in Equation 5, which returns a value x_norm in the range [0, 1] when given a raw value x_raw lying between a minimum value v_min and a maximum value v_max.

$$x_{norm} = normalize(x_{raw}, v_{min}, v_{max}) \quad (5)$$
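One possible reading of Equation 5 is min-max scaling to [0, 1] with the bounds of Table I; the paper does not spell out the normalization further, so the handling of the unbounded Jiang & Conrath maximum below is an assumption:

```python
# Assumed implementation of Eq. 5: min-max scaling to [0, 1]. Values from the
# unbounded Jiang & Conrath measure (max = infinity in Table I) are squashed
# so that any raw value still lands in [0, 1].
def normalize(x_raw, v_min, v_max):
    if v_max == float('inf'):
        x = x_raw - v_min
        return x / (1.0 + x) if x > 0 else 0.0
    x = (x_raw - v_min) / (v_max - v_min)
    return min(max(x, 0.0), 1.0)
```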
We created the matrix SM from the matrix M by applying Equation 5 to each element m_{i,j}, yielding the corresponding sm_{i,j}. For this we used the minimum and maximum values shown in Table I. We then defined the value matrix V from the vector D and the matrix SM, as shown in Equation 6; each row v_i of V thus collects, for word w_i, the word2vec similarity together with the three normalized lexical similarities and the bias.
$$V = \begin{bmatrix} D \\ SM \end{bmatrix}^{T} \quad (6)$$
We defined the vector E, where each element e_i is obtained by activating a neural network with the values v_i, the i-th row of V. The training of this neural network is explained in Section III-C2.
2) Machine Learning for weight calculation: The motivation for using machine learning is to train the system to learn how the individual similarity measure values should be combined to derive a new, compound, and more representative similarity value. As mentioned in Section III-C1, a neural network was chosen as the machine learning method. Initially, the weights were set to random values and E was calculated. Next, the matrix MI was defined as shown in Equation 7.
$$MI = \begin{bmatrix} W \\ E \end{bmatrix} \quad (7)$$
The matrix Y was obtained by sorting the columns of MI in descending order of the elements in the second row. We defined a seeking function, given in Equation 8, that returns the index of word w in the first row of a matrix P; if w does not occur in the first row of P, it returns the column count of P. Observe that the new matrix Y has the same form as the initial matrix R shown in Equation 1. This symmetry is important because it allows the same accuracy measures to be applied to all the word2vec models in Section IV, enabling a fair comparison.
$$s = seek(w, P) \quad (8)$$
Next we defined the error err according to Equation 10, where the value of x_i is derived from Equation 9 and ε is a small constant.

$$x_i = \begin{cases} 1, & \text{if } seek(g_i, Y) < l \\[4pt] \dfrac{n - seek(g_i, Y)}{n - l}, & \text{otherwise} \end{cases} \quad (9)$$

$$err = 1 - \frac{\epsilon + \sum_{i=1}^{l} x_i}{|G \cap W| + \epsilon} \quad (10)$$
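The seek function and the error of Equations 8-10 can be sketched as follows; the placement of ε in the numerator and denominator is our reading of Equation 10, and eps is the small constant from the text:

```python
# Sketch of Eqs. 8-10. G is the gold array of l related lemmas, Y_words the
# first row of Y (n candidate words sorted by the learned score), and W_words
# the first row of the original matrix R.
def seek(w, words):
    return words.index(w) if w in words else len(words)

def error(G, Y_words, W_words, l, n, eps=1e-6):
    xs = []
    for g in G:
        pos = seek(g, Y_words)
        xs.append(1.0 if pos < l else (n - pos) / (n - l))
    overlap = len(set(G) & set(W_words))     # |G intersect W|
    return 1.0 - (eps + sum(xs)) / (overlap + eps)
```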
This error err is used to adjust the weights of the neural network, and the training cycle is continued until convergence. The model trained in this way is named the word2vec_LLS model.
D. Query processing
In order to use and test our models, we built a query processing system. A user can enter a legal-domain query using the provided interface. The system then takes the query through the NLP pipeline, applying natural language processing techniques such as PoS tagging, until the query is lemmatized. We used the same Stanford CoreNLP pipeline as in Section III-A for this task, so that all the models are brought to the same level and can be compared equally. This step is shown in Fig. 1.
E. Experiments
We asked experts in the legal field to create a gold standard against which to test our models. The gold standard includes 100 concepts, each paired with the 5 words most related to that concept in the legal domain, picked by the legal experts from a pool of over 1500 words.
The accuracy levels in these experiments are measured in terms of precision and recall [1], which are commonplace measures in information retrieval. Both are based on comparing the expected result with the actual result of the evaluated system.
If the gold standard word vector is G and the word vector returned by the model is W (the same naming conventions as in Section III-C1), the recall in this study is calculated with Equation 11. It measures the completeness of the model; our recall calculation uses the same function suggested in [1].
$$recall = \frac{|G \cap W|}{|G|} \quad (11)$$
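Equation 11 translates directly into code, with G and W taken as collections of lemmas:

```python
# Recall of Eq. 11: fraction of the gold lemmas G found among the returned words W.
def recall(G, W_words):
    return len(set(G) & set(W_words)) / len(G)
```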
The precision calculation in this study is not as clear-cut as described in [1]. In those systems, precision is only a matter of set membership, and would thus simply be the ratio of correctly found similar words to the total number of returned words. In the case of word2vec models, however, it is the user who specifies the number of matching words to retrieve; it would therefore be wrong to use the total number of returned words to calculate precision in cases of perfect recall. In cases of imperfect recall, classical precision is adequate.
While word2vec makes the precision calculation difficult, as shown above, it also has a property that makes the solution to that problem straightforward: the returned list of similar words is sorted in descending order of similarity. This is the same property that we used in Section III-C2 to calculate the error. It is therefore logical to define precision by Equation 12, where err is the error calculated by Equation 10.
$$precision = 1 - err \quad (12)$$
IV. RESULTS
This section presents the results obtained from the four models (word2vec_G, word2vec_LR, word2vec_LL, and word2vec_LLS) introduced in Section III-B. The results shown in Table II were obtained for different values of k, where k is the number of words requested from each model. As expected, the F1 of each model increases with k, since the chance of finding the correct similar words against the gold standard increases. Given that the task is to return the expected set of words, recall is more important than precision (i.e., false negatives are more harmful than false positives). In that light, word2vec_LLS clearly performs better than all other models, since it consistently has the highest recall for all values of k. In addition, the word2vec_LLS model also has the highest F1 for all values of k, which shows that the small loss in precision does not adversely affect the overall result. A graphical comparison of the changes in the F1 measure is shown in Fig. 3. As shown, the domain-specific models word2vec_LR, word2vec_LL, and word2vec_LLS achieve better results than the generic word2vec_G model. It should be noted that this performance gain holds despite the fact that word2vec_G was trained on a text corpus 3 times bigger than the text corpora we used in this study for the models word2vec_LR, word2vec_LL, and word2vec_LLS.
Comparing the domain-specific models, we see a clear distinction between the word2vec_LR and word2vec_LL models, with word2vec_LL generally performing better. Further, the word2vec_LLS model outperforms both the word2vec_LR and word2vec_LL models.
V. CONCLUSION AND FUTURE WORKS
The hypothesis of this study made three main claims, and each of these claims was justified by the results presented in Section IV. The first claim is that a word embedding model trained on a small, domain-specific corpus can outperform a word embedding model trained on a large but generic corpus; the success of the word2vec_LR model over the word2vec_G model justifies this claim. In Section III-A we proposed the second claim: word lemmatization, which maps away inflected forms of words, improves the performance of a word embedding model. The word2vec_LL model obtaining better results than the word2vec_LR model proved this claim to be true. The third claim was made in Section III-C: there, we proposed that lexical semantic similarity measures, combined through a trained machine learning system, can improve overall system performance. The significant improvement that we show for the word2vec_LLS model over the word2vec_LL model verified this claim as well. We therefore conclude that the proposed methodology of word vector embedding augmented by lexical semantic similarity measures gives a more accurate evaluation of the extent to which a given pair of words is semantically similar in the considered domain.
Semantic similarity measures are important in many application areas. For future work, we intend to extend the findings of this study to the document level. Word-based semantic similarity is the building block for sentence similarity measures, which in turn aggregate to build document similarity measures; this is the direction in which we intend to move. We will use this word semantic similarity measure to build up to a document similarity measure, which can then be used for more efficient domain-based document retrieval systems.
Fig. 1. Flow diagram of the overall methodology.
Fig. 3. Comparison of F1 values across the four models.
TABLE I
LEXICAL SEMANTIC SIMILARITY STATISTICS

Model            Min   Max
Wu & Palmer      0.0   1.0
Jiang & Conrath  0.0   ∞
Hirst & St-Onge  0.0   16.0
TABLE II
RESULTS COMPARISON (P = PRECISION, R = RECALL)

Model         |      k=20       |      k=50       |      k=100      |      k=200
              |  P     R    F1  |  P     R    F1  |  P     R    F1  |  P     R    F1
word2vec_G    | 0.57  0.19 0.29 | 0.62  0.33 0.43 | 0.67  0.41 0.51 | 0.74  0.46 0.57
word2vec_LR   | 0.75  0.19 0.30 | 0.71  0.31 0.43 | 0.74  0.38 0.51 | 0.77  0.44 0.56
word2vec_LL   | 0.73  0.22 0.34 | 0.72  0.32 0.45 | 0.75  0.40 0.52 | 0.76  0.47 0.58
word2vec_LLS  | 0.66  0.24 0.36 | 0.73  0.33 0.52 | 0.72  0.43 0.54 | 0.74  0.50 0.60
REFERENCES

[1] D. C. Wimalasuriya and D. Dou, "Ontology-based information extraction: An introduction and a survey of current approaches," Journal of Information Science, 2010.
[2] J. S. Mill, On Liberty. Broadview Press, 1999.
[3] D. E. Oliver, Y. Shahar, E. H. Shortliffe, and M. A. Musen, "Representation of change in controlled medical terminologies," Artificial Intelligence in Medicine, vol. 15, no. 1, pp. 53-76, 1999.
[4] National Center for Biotechnology Information, "PubMed help," Mar. 2017. [Online]. Available: https://www.ncbi.nlm.nih.gov/books/NBK3827/
[5] N. de Silva, D. Dou, and J. Huang, "Discovering inconsistencies in PubMed abstracts through ontology-based information extraction," in ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (ACM BCB), 2017, to appear.
[6] T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013.
[7] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," Journal of Machine Learning Research, vol. 3, pp. 1137-1155, 2003.
[8] N. de Silva, C. Fernando, M. Maldeniya, D. Wijeratne, A. Perera, and B. Goertzel, "SeMap: mapping dependency relationships into semantic frame relationships," in 17th ERU Research Symposium, vol. 17. Faculty of Engineering, University of Moratuwa, Sri Lanka, 2011.
[9] H. T. Ng and H. B. Lee, "Integrating multiple knowledge sources to disambiguate word sense: An exemplar-based approach," in Proceedings of the 34th Annual Meeting of the Association for Computational Linguistics, 1996, pp. 40-47.
[10] N. de Silva, "SAFS3 algorithm: Frequency statistic and semantic similarity based semantic classification use case," in Fifteenth International Conference on Advances in ICT for Emerging Regions (ICTer). IEEE, 2015, pp. 77-83.
[11] Z. Wu and M. Palmer, "Verbs semantics and lexical selection," in Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics (ACL '94). Stroudsburg, PA, USA: Association for Computational Linguistics, 1994, pp. 133-138. [Online]. Available: http://dx.doi.org/10.3115/981732.981751
[12] J. J. Jiang and D. W. Conrath, "Semantic similarity based on corpus statistics and lexical taxonomy," in Proceedings of the 10th International Conference on Research in Computational Linguistics (ROCLING), 1997.
[13] G. Hirst and D. St-Onge, "Lexical chains as representations of context for the detection and correction of malapropisms," in WordNet: An Electronic Lexical Database, vol. 305, 1998, pp. 305-332.
[14] G. A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K. J. Miller, "Introduction to WordNet: An on-line lexical database," International Journal of Lexicography, vol. 3, no. 4, pp. 235-244, 1990.
[15] H. Shima, "WordNet Similarity for Java (WS4J)," 2016. [Online]. Available: https://code.google.com/p/ws4j/
[16] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, "Distributed representations of words and phrases and their compositionality," in Advances in Neural Information Processing Systems, 2013, pp. 3111-3119.
[17] J. Pennington, R. Socher, and C. D. Manning, "GloVe: Global vectors for word representation," in EMNLP, vol. 14, 2014, pp. 1532-1543.
[18] R. Das, M. Zaheer, and C. Dyer, "Gaussian LDA for topic models with word embeddings," in ACL (1), 2015, pp. 795-804.
[19] Y. Goldberg and O. Levy, "word2vec explained: Deriving Mikolov et al.'s negative-sampling word-embedding method," arXiv preprint arXiv:1402.3722, 2014.
[20] O. Melamud, O. Levy, and I. Dagan, "A simple word embedding model for lexical substitution," in Proceedings of the 1st Workshop on Vector Space Modeling for Natural Language Processing, 2015, pp. 1-7.
[21] E. Schweighofer and W. Winiwarter, "Legal expert system KONTERM: automatic representation of document structure and contents," in International Conference on Database and Expert Systems Applications. Springer, 1993, pp. 486-497.
[22] J. J. Nay, "Gov2Vec: Learning distributed representations of institutions and their legal text," arXiv preprint arXiv:1609.06616, 2016.
[23] J. Hughes, "Rules for mediation in FindLaw for legal professionals," 1999.
[24] T. Korenius, J. Laurikkala, K. Järvelin, and M. Juhola, "Stemming and lemmatization in the clustering of Finnish text documents," in Proceedings of the Thirteenth ACM International Conference on Information and Knowledge Management. ACM, 2004, pp. 625-633.
[25] C. D. Manning, M. Surdeanu, J. Bauer, J. R. Finkel, S. Bethard, and D. McClosky, "The Stanford CoreNLP natural language processing toolkit," in ACL (System Demonstrations), 2014, pp. 55-60.
Correlating Twitter Language with Community-Level Health Outcomes

Arno Schneuwly, EPFL (arno.schneuwly@epfl.ch)
Ralf Grubenmann, SpinningBytes
Mark Cieliebak
Séverine Rion Logean, Swiss Re
Martin Jaggi, EPFL (martin.jaggi@epfl.ch)

Abstract

We study how language on social media is linked to diseases such as atherosclerotic heart disease (AHD), diabetes and various types of cancer. Our proposed model leverages state-of-the-art sentence embeddings, followed by a regression model and clustering, without the need for additional labelled data. It allows us to predict community-level medical outcomes from language, and thereby potentially translate these to the individual level. The method is applicable to a wide range of target variables and allows us to discover known and potentially novel correlations of medical outcomes with life-style aspects and other socioeconomic risk factors.
Introduction
Surveys and empirical studies have long been a cornerstone of psychological, sociological and medical research, but each of these traditional methods poses challenges for researchers: they are time-consuming and costly, and may introduce bias or suffer from poor experiment design.
With the advent of big data and the increasing popularity of the internet and social media, larger amounts of data are now available to researchers than ever before. This offers strong promise for new avenues of research using analytic procedures, obtaining a picture of communities and populations as a whole that is both more fine-grained and broader (Salathé, 2018). Such methods allow for faster and more automated investigation of demographic variables. It has been shown that Twitter data can predict atherosclerotic heart-disease risk at the community level more accurately than traditional demographic data (Eichstaedt et al., 2015). The same method has also been used to capture and accurately predict patterns of excessive alcohol consumption (Curtis et al., 2018).
In this study, we utilize Twitter data to predict various health target variables (AHD, diabetes, various types of cancer) to see how well language patterns on social media reflect the geographic variation of those targets. Furthermore, we propose a new method to study social media content by characterizing disease-related correlations of language, leveraging available demographic and disease information at the community level. In contrast to Eichstaedt et al. (2015), our method does not rely on word-based topic models, but instead leverages modern state-of-the-art text representation methods, in particular sentence embeddings, which have seen increasing use in the Natural Language Processing, Information Retrieval and Text Analytics fields in the past years. We demonstrate that our approach helps capture the semantic meaning of tweets, as opposed to features merely based on word frequencies, which come with robustness problems (Brown and Coyne, 2018; Schwartz et al., 2018). We examine the effectiveness of sentence embeddings in modeling language correlates of the medical target variables (disease outcomes).
Section 2 gives a generalized description of our method. We apply this method to the tweets and health data in Section 3. The system's performance is evaluated in Section 4, followed by the discussion in Section 5. Our code is available on github.com/epfml/correlatingtweets.
Method
We are given a large quantity of text (sentences or tweets) in the form of social media messages by individuals. Each individual, and therefore each sentence, is assigned to a predefined category, for example a geographic region or a population subset. We assume the number of sentences to be significantly larger than the number of communities. Furthermore, we assume that the target variable of interest, for example a disease mortality or prevalence rate, is available for each community (but not for each individual). Our system consists of two subsystems:
1. (Prediction) The predictive subsystem makes predictions of target variables (e.g. AHD mortality rate) based on aggregated language features. The resulting linear predictions are applicable on the community level (e.g. counties) or on the individual level, and are trained using k-fold cross-validated Ridge regression.
2. (Interpretability) The averaged regression weights from the prediction system allow for interpretation of the system: We use a fixed clustering (which was obtained from all sentences without any target information), and then rank each topic cluster with respect to a prediction weight vector from point 1). The top and bottom ranked topic clusters for each target variable give insights into known and potentially novel correlations of topics with the target medical outcome.
In summary, the community association is used as a proxy or weak labelling to correlate individual language with community-level target variables. The following subsections give a more detailed description of the two subsystems.
System Description
Let S be the set of sentences (e.g. tweets), with their total number denoted as |S| = S. Each sentence is associated to exactly one of the A communities A = {a 1 , . . . , a A } (e.g. geographic regions). The function δ : S → A defines this mapping. Let y ∈ R A be the target vector for an arbitrary target variable, so that each community a j has a corresponding target value y a j ∈ R.
Preprocessing and Embeddings. The complete linguistic preprocessing pipeline of a sentence is incorporated by the function ρ(s i ), ∀ i ∈ {1, . . . , S}, which represents an arbitrary sentence s i as a sequence of tokens. Each sentence s i then is represented by a D-dimensional embedding vector providing a numerical representation of the semantics for the given short text: While our method is generic for any text representation method, here Sent2Vec (Pagliardini et al., 2018) was chosen for its computational efficiency and scalability to large datasets.
$$x_i = \text{Sent2Vec}(\rho(s_i)) \in \mathbb{R}^D \quad (1)$$
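As an illustration, an embedding of this kind can be computed with the Python bindings of the Sent2Vec library (the pre-trained model file name is a placeholder):

```python
# Illustrative sketch using the Python bindings of the epfml Sent2Vec library
# (github.com/epfml/sent2vec); the model path is an assumption.
import sent2vec

model = sent2vec.Sent2vecModel()
model.load_model('twitter_unigrams.bin')   # hypothetical pre-trained model file

def embed(tokens):
    # rho(s_i) is assumed to yield a whitespace-joinable token sequence
    return model.embed_sentence(' '.join(tokens))  # vector in R^D
```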
Feature Aggregation
We use averaging of the sentence embedding vectors over each community to obtain the language features for that community. Formally, the complete feature matrix of all sentences is denoted as $X \in \mathbb{R}^{S \times D}$. For our approach, the sentence embedding features are averaged over each community $a_j$: an individual feature $\bar{x}_{a_j,d}$ of the averaged embedding $\bar{x}_{a_j} \in \mathbb{R}^{1 \times D}$ for a given community $a_j$ is defined as

$$\bar{x}_{a_j,d} = \frac{1}{N_{a_j}} \sum_{x_i :\, s_i \in S \,\wedge\, \delta(s_i) = a_j} x_{i,d}, \quad (2)$$

where $N_{a_j} = |\{s_i : s_i \in S \wedge \delta(s_i) = a_j\}|$ is the number of sentences belonging to community $a_j$. Consequently, the aggregated community-level embedding matrix is given by

$$\bar{X} = \begin{bmatrix} \bar{x}_{a_1} \\ \vdots \\ \bar{x}_{a_A} \end{bmatrix} \in \mathbb{R}^{A \times D}. \quad (3)$$
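A minimal numpy sketch of this aggregation, with county holding the community index δ(s_i) for each sentence:

```python
# Sketch of Eqs. 2-3: X holds one embedding per sentence (S x D), and
# county[i] is the community index delta(s_i) of sentence i.
import numpy as np

def aggregate(X, county, num_communities):
    A, D = num_communities, X.shape[1]
    X_bar = np.zeros((A, D))
    for j in range(A):
        mask = (county == j)
        if mask.any():
            X_bar[j] = X[mask].mean(axis=0)   # Eq. 2 for all dimensions d
    return X_bar                               # Eq. 3: A x D matrix
```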
Train-Test Split
Leveraging the targets available for each community, our regression method is applied to the aggregated features $\bar{X}$ and the target $y$. We employ K-fold cross-validation: the previously defined set $A$ is split into $K$ pairwise disjoint subsets $A_k$ of as equal size as possible, such that

$$A = \bigcup_{k=1}^{K} A_k, \qquad A_i \cap A_j = \emptyset \;\; \forall\, i, j \in \{1, \ldots, K\},\, i \neq j, \qquad |A_1| \approx \cdots \approx |A_K|.$$

The training set for a fold $k$ is $TR_k = \bigcup_{i=1}^{K} A_i \setminus A_k$, with the corresponding test set $TE_k = A_k$, where $N_{\theta_k} = |TR_k|$ and $N_{\Lambda_k} = |TE_k|$. The operators $\theta_k : \{1, \ldots, N_{\theta_k}\} \to TR_k$ and $\Lambda_k : \{1, \ldots, N_{\Lambda_k}\} \to TE_k$ uniquely map the indexes to the corresponding communities $a_j$ for the $k$-th train-test split. For each split $k$, the train and test embedding matrices are defined respectively as

$$X_{\theta_k} = \begin{bmatrix} \bar{x}_{\theta_k(1)} \\ \vdots \\ \bar{x}_{\theta_k(N_{\theta_k})} \end{bmatrix}, \quad (4) \qquad X_{\Lambda_k} = \begin{bmatrix} \bar{x}_{\Lambda_k(1)} \\ \vdots \\ \bar{x}_{\Lambda_k(N_{\Lambda_k})} \end{bmatrix}. \quad (5)$$

Accordingly, we define the target vectors

$$y_{\theta_k} = \begin{bmatrix} y_{\theta_k(1)}, \ldots, y_{\theta_k(N_{\theta_k})} \end{bmatrix}, \quad (6) \qquad y_{\Lambda_k} = \begin{bmatrix} y_{\Lambda_k(1)}, \ldots, y_{\Lambda_k(N_{\Lambda_k})} \end{bmatrix}. \quad (7)$$
Ridge Regression
For each train-test split k we perform linear regression from the community-level textual features X θ k to the health target variable y θ k . We employ Ridge regression (Hoerl and Kennard, 1970). In our context, the Ridge regression is defined as the following optimization problem:
$$\min_{\omega_k \in \mathbb{R}^D} \; \frac{1}{2A} \sum_{i=1}^{N_{\theta_k}} \left( y_{\theta_k(i)} - \bar{x}_{\theta_k(i)}\, \omega_k \right)^2 + \lambda \left\lVert \omega_k \right\rVert_2^2, \quad (8)$$
where the optimal solution is
$$\omega_k = \left( X_{\theta_k}^{T} X_{\theta_k} + 2 N_{\theta_k} \lambda I \right)^{-1} X_{\theta_k}^{T}\, y_{\theta_k} \in \mathbb{R}^D. \quad (9)$$
Within each fold we tune the regularization parameter λ.
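A compact sketch of the cross-validated Ridge fit; scikit-learn's RidgeCV is used here as a stand-in for the closed-form solution of Equation 9, and the grid of λ values is an assumption:

```python
# Per-fold Ridge fit with lambda tuning, collecting held-out predictions and
# the per-fold weight vectors omega_k.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold

def fit_folds(X_bar, y, K=10):
    weights, preds = [], np.zeros_like(y, dtype=float)
    for train_idx, test_idx in KFold(n_splits=K, shuffle=True).split(X_bar):
        model = RidgeCV(alphas=np.logspace(-3, 3, 13))
        model.fit(X_bar[train_idx], y[train_idx])
        preds[test_idx] = model.predict(X_bar[test_idx])  # pieces of Eq. 10
        weights.append(model.coef_)
    return np.mean(weights, axis=0), preds                # averaged omega, y_hat
```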
Prediction Subsystem
Let $\hat{y}_{\Lambda_k} = X_{\Lambda_k} \omega_k = \left[ \hat{y}_{\Lambda_k(1)}, \ldots, \hat{y}_{\Lambda_k(N_{\Lambda_k})} \right]$ be the predicted values for the test set of split $k$. The concatenated prediction vector over all splits is

$$\hat{y}_{\Lambda} = \begin{bmatrix} \hat{y}_{\Lambda_1} \\ \vdots \\ \hat{y}_{\Lambda_K} \end{bmatrix} \in \mathbb{R}^A. \quad (10)$$

Accordingly, we define the concatenated true target vector as

$$y_{\Lambda} = \begin{bmatrix} y_{\Lambda_1} \\ \vdots \\ y_{\Lambda_K} \end{bmatrix} \in \mathbb{R}^A, \quad (11)$$
i.e., the set of individual scalars is identical to the entries in the original target vector y. The predictive performance of the system can be assessed through the following metrics:
• Pearson Correlation Coefficient
• Mean Average Error of prediction (MAE)
• Classification Accuracy for Quantile Prediction
The first two metrics are evaluated with the vectors $\hat{y}_\Lambda$ and $y_\Lambda$ from all folds. In the quantile-based assessment we independently bin the true values $y_\Lambda$ and the predicted values $\hat{y}_\Lambda$ into $C$ different quantiles. Each individual true and predicted value is assigned to a quantile $c_j \in \{c_1, \ldots, c_C\}$. These assignments can be used to visually compare results on a heat-map, or as regular evaluation scores in terms of accuracy.
Ridge-Weight Aggregation
For the final prediction model, the regression weights $\omega_k$ from Ridge regression are averaged over the $K$ folds, i.e. $\bar{\omega} = \frac{1}{K}\sum_{k=1}^{K} \omega_k$. For every sentence embedding $x_q$, the prediction is computed as $\hat{y}_q = x_q\,\bar{\omega} \in \mathbb{R}$.
Interpretation Subsystem: Cluster Ranking
We employ predefined textual topic clusters, which are independent of any target values, in order to enable interpretation of the textual correlates. Each cluster is a collection of sentences and should, intuitively, be interpretable as a topic, e.g. separate topics about indoor and outdoor activities as shown in Fig. 4. For each cluster $m$, a ranking score can be computed with respect to a linear prediction model $\bar{\omega}$ as defined above. Let $Q_m = \{q : \zeta(q) = m \wedge q \in Q\}$ be the set of sentences assigned to cluster $m$. The score $\iota_m$ for cluster $m$ is the average of all predictions $\hat{y}_q = x_q\,\bar{\omega}$ within the cluster:

$$\iota_m = \frac{1}{|Q_m|} \sum_{q \in Q_m} \hat{y}_q \quad (12)$$

By ordering the scores $\iota_m$ of all clusters, we obtain the final ranking of all clusters with respect to the target-specific model $\bar{\omega}$.
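The ranking of Equation 12 can be sketched as:

```python
# Score each cluster by the mean prediction of its member sentences under the
# averaged weights omega_bar, then order the clusters by that score.
import numpy as np

def rank_clusters(X_q, assignments, omega_bar, num_clusters):
    y_hat = X_q @ omega_bar                     # per-sentence predictions
    scores = np.array([y_hat[assignments == m].mean()
                       for m in range(num_clusters)])
    return np.argsort(scores)                   # clusters ordered by score
```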
Clustering Preprocessing. For obtaining the fixed clustering, since $X$ is a very large matrix, clustering might require subsampling to reduce computational complexity. Hence, $Q$ of the $S$ embeddings in $S$ are randomly subsampled into the set $Q$. The mapping $\Phi(Q) = [\varphi(1), \ldots, \varphi(Q)]$ is a uniformly random selection of $Q$ row indexes of $X$. We define the subsampled data matrix as $X_Q = \left[ x_{\varphi(1)}; \ldots; x_{\varphi(Q)} \right] \in \mathbb{R}^{Q \times D}$.
The subset $X_Q$ is clustered with the Yinyang K-means algorithm (Ding et al., 2015). We use $M$ centroids and the cosine similarity as the distance function. The cluster assignment vector in $\{1, \ldots, M\}^Q$ assigns one cluster to each embedding in $X_Q$. Accordingly, the operator $\zeta : \{1, \ldots, Q\} \to \{1, \ldots, M\}$ indicates the assigned cluster $m$ for a given sentence $s$ in $Q$ (see cluster ranking above). The cluster centers are collected in $M_Q \in \mathbb{R}^{M \times D}$.
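For illustration, a substitute for the libKMCUDA implementation used here: scikit-learn's KMeans on L2-normalized vectors, which approximates clustering under cosine similarity (spherical k-means):

```python
# Approximate stand-in for Yinyang K-means with cosine distance: normalizing
# the rows to unit length makes Euclidean k-means behave like cosine k-means.
import numpy as np
from sklearn.cluster import KMeans

def cluster_sentences(X_Q, M):
    X_norm = X_Q / np.linalg.norm(X_Q, axis=1, keepdims=True)
    km = KMeans(n_clusters=M, n_init=10).fit(X_norm)
    return km.labels_, km.cluster_centers_      # assignments zeta, centers
```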
Data sources
We apply the method described in Section 2 to the following setting: the pool of sentences S consists of geotagged tweets whose assigned locations are in the United States. The geotags are categorized into US counties, which represent the set of communities A. The target variables y are health-related variables, for example normalized mortality or prevalence rates; we focus on cancer and AHD mortality as well as on diabetes prevalence. Hence, the quantile-based predictions give a categorization of the Ridge regression predictions at the US-county level, and the ranked topics indicate what language might relate to higher or lower rates of the corresponding disease. Table 1 provides an overview of the size of the data sources, the year the data was collected, and the mean µ and standard deviation σ of the target variables. Not all counties are covered in the publicly available datasets, which are usually limited to the more populous counties. The collected tweets are from 2014 and 2015. The target variables are the union-averaged values from 2014 and 2015: if the target variable is available for both years, the two values are averaged; if a county data point is only available for one of the two years, we use this standalone value.
Datorium Tweets
Tweets are short messages of no more than 140 characters¹ published by users of the Twitter platform; they reflect the discussions, thoughts and activities of its users. We use a dataset of approximately 144 million tweets collected from the first of June 2014 to the first of June 2015 (Datorium, 2017). Each tweet was geotagged by the submitting user with exact GPS coordinates, and all tweets are from within the US, allowing accurate county-level mapping of individual tweets.

1 Twitter increased the limit to 280 characters in 2017, which does not affect our data.
AHD & Cancer Mortality
Our source of the statistical county-level target variables for AHD and cancer is the CDC WONDER² database (CDC, 2018). Values are given as deaths per 100,000 population.
Diabetes Prevalence
We use county-wise age-adjusted diabetes prevalence data from the year 2013 (CDC, 2016), provided as the percentage of the population afflicted with type II diabetes. The data is available for almost all of the 3144 US counties, making it a valuable target to use.
Results
The results of our method for the various target variables are listed in Table 2, along with the performance of the baseline model outlined in Section 4.1. We provide the Pearson correlation (ρ) and the mean absolute error (MAE) of our system, together with the baseline model's Pearson correlation.
LDA Baseline Model
We reimplemented the approach proposed by Eichstaedt et al. (2015) as a baseline for comparison, and were able to reproduce their findings on AHD with recent data: similar results were obtained with the Datorium Twitter dataset (Datorium, 2017) and CDC AHD data from 2014 and 2015. Their approach averages, per county, topics generated with Latent Dirichlet Allocation (LDA) over tweets, and uses these as features for Ridge regression. We do not use any hand-curated emotion-specific dictionaries, as these did not impact performance in our experiments. We used the predefined Facebook LDA coefficients of Eichstaedt et al. (2015) and updated them with the word frequencies of our collected Twitter data (Datorium, 2017). Our results are computed with 10-fold cross-validation and without any feature selection.

Table 2: Results of predictions on different health targets. ρ: our system (Section 2.5); ρ LDA: topic model baseline (Eichstaedt et al. (2015), Section 4.1); MAE: mean absolute error of our system (Section 2.5).
Detailed Results
In this section we discuss a selection of our results in detail; additional information is available in Appendix A.1. Diabetes has a strong demographic bias, with a higher prevalence in the south-east of the US, the so-called diabetes belt. Compared to the national average, the African-American population in the diabetes belt has a more than twofold higher risk of diabetes (Barker et al., 2011), and the south-east of the US has a large African-American population. Therefore, linguistic features (Green, 2002) common in African-American English are a strong predictor of diabetes rates. The model learns these linguistic features, as seen in Figure 3, and its predictions closely match the actual geographic distribution, as seen in Figure 2. Moderate alcohol consumption is linked to a lower risk of type II diabetes compared to no or excessive consumption (Koppes et al., 2005); the most strongly negatively correlated word clouds in Figure 3 support this finding.
The most positively correlated word clouds for melanoma in Figure 4 are related to outdoor activities (Elwood et al., 1985). Conversely, the most strongly negatively correlated word clouds suggest language related to indoor activities.
Discussion
In this paper, we introduced a novel approach for language-based prediction and correlation of community-level health variables. For various health-related demographic variables, our approach outperforms similar models based on traditional demographic data in most cases (Table 2), using only geolocated tweets. Our approach provides a method for discovering novel correlations between open-vocabulary topics and health variables, allowing researchers to discover yet unknown contributing factors from large collections of data with minimal effort.
Our findings, when applying our method to AHD risk, diabetes prevalence and the risk of various types of cancers, using geolocated tweets from the US only, show that a large variety of healthrelated variables can be predicted with surprisingly high precision based solely on social media data. Furthermore, we show that our model identifies known and novel risk or protective factors in the form of topics. Both aspects are of interest to researchers and policy makers. Our model proved to be robust for the majority of targets it was applied to.
For AHD risk, we show that our approach significantly outperforms previous models based on topic models such as LDA or traditional statistical models (Eichstaedt et al., 2015), achieving a ρ-value of 0.46, an increase of 0.09 over previous approaches. For diabetes prevalence our model correctly predicts its geographic distribution by identifying linguistic features common in high-prevalence areas among other features, with a ρ-value of 0.73. For melanoma risk, it finds a high-correlation with the popularity of outdoor activities, corresponding to exposure to sunlight being one of the main risk factors in skin cancer, with an overall ρ-value of 0.72.
One of the main limitations of our approach is the need for a large collection of sentences for each community, as well as a large number of communities with target variables; results may be unreliable when this is not the case, such as for social media posts by individuals, or when modeling target values that are only available in few counties. Further research is needed to ascertain whether significant results can also be achieved in such scenarios, and whether the robustness of our approach improves on bag-of-words-based baselines (Eichstaedt et al., 2015; Brown and Coyne, 2018; Schwartz et al., 2018). Furthermore, all mentioned approaches rely on correlation, and thus do not provide a way to determine causation, or to rule out potential underlying factors not captured by the model. Even though using social media data introduces a non-negligible bias towards users of social media, our approach was able to predict target variables tied to very different age groups, which is encouraging and supports the robustness of our approach.
Our method captures language features on a community scale, which raises the question of how these findings can be translated to the individual person. Theoretically, a community-based model as described above could be used to rank the social media posts or messages of an individual user with respect to specific health risks. However, as we currently do not have ground-truth values at the individual level, and since a user's social media history has very high variance, this is left for future investigation.
Future research should also address the applicability of our model to textual data other than Twitter and potentially from non-social media sources, to communities that are not geography based, to the time evolution of topics and health/lifestyle statistics, as well as to targets that are not health related. The general methodology offers promise for new avenues for data-driven discovery in fields such as medicine, sociology and psychology.
Figure 1: System description.
Figure 2: Quantiles of the prevalence of diabetes: (a) target values; (b) values predicted from tweets.
Figure 3: Word clouds of topics correlating with diabetes: (a), (b) strongest positive correlation; (c), (d) strongest negative correlation among M = 2000 clusters.
Figure 4: Word clouds of topics correlating with melanoma: (a), (b) strongest positive correlation; (c), (d) strongest negative correlation among M = 2000 clusters.
Name       # tweets    Year
Datorium   147M        14/15

Name       # counties  Year    µ, σ
AHD        803         14/15   43.0, 16.1
Diabetes   3129        13      9.7, 2.2
Breast     487         13/14   12.4, 2.8
Colon      490         13/14   12.1, 3.0
Liver      293         13/14   7.5, 2.4
Lung       1612        13/14   52.4, 16.2
Melanoma   162         13/14   3.8, 1.2
Prostate   351         13/14   8.5, 2.0
Stomach    136         13/14   3.6, 0.9

Table 1: Overview of data sources.
2 US Centers for Disease Control and Prevention: Wide-ranging Online Data for Epidemiologic Research.
Acknowledgements. We would like to thank Ahmed Kulovic and Maxime Delisle for valuable input and discussions.

A Appendices

A.2 Implementation Details

Tweets were collected according to the provided Datorium IDs using the Tweepy library. The tweets were then imported into Google BigQuery and processed using Apache Beam. The sentence embeddings were computed using the official Sent2Vec source code and the provided 700-dimensional pre-trained model for tweets (using bigrams). Clustering was performed with libKMCUDA. Scikit-learn was used for 10-fold cross-validation, Ridge regression, calculating the correlation, and the hyperparameter search.
References

Lawrence E. Barker, Karen A. Kirtland, Edward W. Gregg, Linda S. Geiss, and Theodore J. Thompson. 2011. Geographic distribution of diagnosed diabetes in the US: a diabetes belt. American Journal of Preventive Medicine, 40(4):434-439.

Nicholas J. L. Brown and James C. Coyne. 2018. Does Twitter language reliably predict heart disease? A commentary on Eichstaedt et al. (2015a). PeerJ, 6:e5656.

CDC. 2016. County data. National Center for Chronic Disease Prevention and Health Promotion, Division of Diabetes Translation.

CDC. 2018. CDC WONDER: Wide-ranging Online Data for Epidemiologic Research.

Brenda Curtis, Salvatore Giorgi, Anneke E. K. Buffone, Lyle H. Ungar, Robert D. Ashford, Jessie Hemmons, Dan Summers, Casey Hamilton, and H. Andrew Schwartz. 2018. Can Twitter be used to predict county excessive alcohol consumption rates? PLoS ONE, 13(4):e0194290.

Datorium. 2017. Geotagged Twitter posts from the United States: A tweet collection to investigate representativeness.

Yufei Ding, Yue Zhao, Xipeng Shen, Madanlal Musuvathi, and Todd Mytkowicz. 2015. Yinyang K-means: A drop-in replacement of the classic K-means with consistent speedup. In ICML'15: Proceedings of the 32nd International Conference on Machine Learning.

Johannes C. Eichstaedt, Hansen Andrew Schwartz, Margaret L. Kern, Gregory Park, Darwin R. Labarthe, Raina M. Merchant, Sneha Jha, Megha Agrawal, Lukasz A. Dziurzynski, Maarten Sap, et al. 2015. Psychological language on Twitter predicts county-level heart disease mortality. Psychological Science, 26(2):159-169.

J. Mark Elwood, Richard P. Gallagher, G. B. Hill, and J. C. G. Pearson. 1985. Cutaneous melanoma in relation to intermittent and constant sun exposure: the Western Canada Melanoma Study. International Journal of Cancer, 35(4):427-433.

Lisa J. Green. 2002. African American English: A Linguistic Introduction. Cambridge University Press.

Arthur E. Hoerl and Robert W. Kennard. 1970. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55-67.

Lando L. J. Koppes, Jacqueline M. Dekker, Henk F. J. Hendriks, Lex M. Bouter, and Robert J. Heine. 2005. Moderate alcohol consumption lowers the risk of type 2 diabetes: a meta-analysis of prospective observational studies. Diabetes Care, 28(3):719-725.

Matteo Pagliardini, Prakhar Gupta, and Martin Jaggi. 2018. Unsupervised learning of sentence embeddings using compositional n-gram features. In NAACL 2018: Conference of the North American Chapter of the Association for Computational Linguistics.

Marcel Salathé. 2018. Digital epidemiology: what is it, and where is it going? Life Sciences, Society and Policy, 14(1):1.

H. Andrew Schwartz, Salvatore Giorgi, Margaret L. Kern, Gregory Park, Maarten Sap, Darwin R. Labarthe, Emily E. Larson, Martin Seligman, Lyle H. Ungar, et al. 2018. More evidence that Twitter language predicts heart disease: a response and replication.
An Annotation Scheme for Reichenbach's Verbal Tense Structure

Leon Derczynski, Department of Computer Science, University of Sheffield, UK
Robert Gaizauskas, Department of Computer Science, University of Sheffield, UK (robertg@dcs.shef.ac.uk)

Abstract

In this paper we present RTMML, a markup language for the tenses of verbs and temporal relations between verbs. There is a richness to tense in language that is not fully captured by existing temporal annotation schemata. Following Reichenbach, we present an analysis of tense in terms of abstract time points, with the aim of supporting automated processing of tense and temporal relations in language. This allows for precise reasoning about tense in documents, and the deduction of temporal relations between the times and verbal events in a discourse. We define the syntax of RTMML, and demonstrate the markup in a range of situations.

1 http://www.timeml.org; Boguraev et al. (2005).
2 See Han et al. (2006).
Introduction
In his 1947 account, Reichenbach offered an analysis of the tenses of verbs in terms of abstract time points. Reichenbach details nine tenses (see Table 1): a tense is past, present or future, and may take a simple, anterior or posterior form. In English, these apply to single verbs and to verbal groups (e.g. will have run, where the main verb is run).
To describe a tense, Reichenbach introduces three abstract time points. Firstly, there is the speech time, S. This represents the point at which the verb is uttered or written. Secondly, event time E is the time that the event introduced by the verb occurs. Thirdly, there is reference time R; this is an abstract point, from which events are viewed. In Example 1, speech time S is when the author created the discourse (or perhaps when the reader interpreted it). Reference time R is then -an abstract point, before speech time, but after the event time E, which is the leaving of the building. In this sentence, one views events from a point in time later than they occurred.
(1) By then, she had left the building.
While we have rich annotation languages for time in discourse, such as TimeML 1 and TCNL 2, none can mark the time points in this model, or the relations between them. Though some may provide a means for identifying speech and event times in specific situations, there is nothing similar for reference times. All three points from Reichenbach's model are sometimes necessary to calculate the information used in these rich annotation languages; for example, they can help determine the nature of a temporal relation, or a calendrical reference for a time. We will illustrate this with two brief examples.
(2) By April 26th, it was all over.

In Example 2, there is an anaphoric temporal expression describing a date. The expression is ambiguous because we cannot position it absolutely without an agreed calendar and a particular year. This type of temporal expression is interpreted not with respect to speech time, but with respect to reference time (Ahn et al., 2005). Without a time frame for the sentence (presumably provided earlier in the discourse), we cannot determine which year the date is in. If we are able to set bounds for R in this case, the time in Example 2 will be the April 26th adjacent to or contained in R; as the word by is used, we know that the time is the April 26th following R, and can normalise the temporal expression, associating it with a time on an absolute scale.
Temporal link labelling is the classification of relations between events or times. We might say an event of the airport closed occurred after another event of the aeroplane landed; in this case, we have specified the type of temporal relation between two events. This task is difficult to automate (Verhagen et al., 2010). There are clues in discourse that human readers use to temporally relate events or times. One of these clues is tense. For example:
(3) John told me the news, but I had already sent the letter.
Example 3 shows a sentence with two verb events -told and had sent. Using Reichenbach's model, these share their speech time S (the time of the sentence's creation) and reference time R, but have different event times. In the first verb, reference and event time have the same position. In the second, viewed from when John told the news, the letter sending had already happened -that is, event time is before reference time. As reference time R is the same throughout the sentence, we know that the letter was sent before John mentioned the news. Describing S, E and R for verbs in a discourse and linking these points with each other (and with times) is the only way to ensure correct normalisation of all anaphoric and deictic temporal expressions, as well as enabling high-accuracy labelling of some temporal links.
Some existing temporal expression normalisation systems heuristically approximate reference time. GUTime (Mani and Wilson, 2000) interprets the reference point as "the time currently being talked about", defaulting to document creation date. Over 10% of errors in this system were directly attributed to having an incorrect reference time, and correctly tracking reference time is the only way to resolve them. TEA (Han et al., 2006) approximates reference time with the most recent time temporally before the expression being evaluated, excluding noun-modifying temporal expressions; this heuristic yields improved performance in TEA when enabled, showing that modelling reference time helps normalisation. HeidelTime (Strötgen and Gertz, 2010) uses a similar approach to TEA but does not exclude noun-modifying expressions.
The recently created WikiWars corpus of TIMEX2 annotated text prompted the comment that there is a "need to develop sophisticated methods for temporal focus tracking if we are to extend current time-stamping technologies" (Mazur and Dale, 2010). Resources that explicitly annotate reference time will be direct contributions to the completion of this task. Elson and McKeown (2010) describe how to relate events based on a "perspective" which is calculated from the reference and event times of an event pair. They construct a natural language generation system that requires accurate reference times in order to correctly write stories. Portet et al. (2009) also found reference point management critical to medical summary generation.
These observations suggest that the ability to automatically determine reference time for verbal expressions is useful for a number of computational language processing tasks. Our work in this area -in which we propose an annotation scheme including reference time -is a first step in this direction.
In Section 2 we describe some crucial points of Reichenbach's model and the requirements of an annotation schema for tense in natural language. We also show how to reason about speech, event and reference times. Then, in Section 3, we present an overview of our markup. In Section 4 we give examples of annotated text (fictional prose and newswire text that we already have another temporal annotation for), event ordering and temporal expression normalisation. Finally we conclude in Section 5 and discuss future work.
Exploring Reichenbach's model
Each tensed verb can be described with three points; speech time, event time and reference time. We refer to these as S, E and R respectively. Speech time is when the verb is uttered. Event time is when the action described by the verb occurs. Reference time is a viewpoint from where the event is perceived. A summary of the relative positions of these points is given in Table 1.
While each tensed verb involves a speech, event and reference time, multiple verbs may share one or more of these points. For example, all narrative in a news article usually has the same speech time (that of document creation). Further, two events linked by a temporal conjunction (e.g. after) are very likely to share the same reference time.
From Table 1, we can see that conventionally English only distinguishes six tenses. Therefore, some English tenses will suggest more than one arrangement of S, E and R. Reichenbach's tense names suffer from this ambiguity too, but to a much lesser degree. When following Reichenbach's tense names, it is the case that for past tenses, R always occurs before S; in the future, R is always after S; and in the present, S and R are simultaneous. Further, "anterior" suggests E before R, "simple" that R and E are simultaneous, and "posterior" that E is after R. The flexibility of this model permits the full set of available tenses (Song and Cohen, 1988), and this is sufficient to account for the observed tenses in many languages.
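This correspondence is easy to encode. As a concrete rendering (Python; the dictionary name and the string encoding of relations are our own illustrative choices, not part of any annotation tool), the nine tenses reduce to a pair of point relations, following exactly the regularities just described:

    # Reichenbach's nine tenses as (view, tense) -> (S-R relation, E-R relation)
    REICHENBACH = {
        ("anterior",  "past"):    ("R < S", "E < R"),
        ("simple",    "past"):    ("R < S", "E = R"),
        ("posterior", "past"):    ("R < S", "R < E"),
        ("anterior",  "present"): ("R = S", "E < R"),
        ("simple",    "present"): ("R = S", "E = R"),
        ("posterior", "present"): ("R = S", "R < E"),
        ("anterior",  "future"):  ("S < R", "E < R"),
        ("simple",    "future"):  ("S < R", "E = R"),
        ("posterior", "future"):  ("S < R", "R < E"),
    }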
Our goal is to define an annotation that can describe S, E and R (speech, event and reference time) throughout a discourse. The lexical entities that these times are attached to are verbal events and temporal expressions. Therefore, our annotation needs to locate these entities in discourse, and make the associated time points available.
Special properties of the reference point
The reference point R has two special uses. When sentences or clauses are combined, grammatical rules require tenses to be adjusted. These rules operate in such a way that the reference point is the same in all cases in the sequence. Reichenbach names this principle permanence of the reference point.
Secondly, when temporal expressions (such as a TimeML TIMEX3 of type DATE, but not DURA-TION) occur in the same clause as a verbal event, the temporal expression does not (as one might expect) specify event time E, but instead is used to position reference time R. This principle is named positional use of the reference point.
Context and the time points
In the linear order that events and times occur in discourse, speech and reference points persist until changed by a new event or time. That is, the reference time from one sentence will roll over to the next sentence, until it is repositioned explicitly by a tensed verb or time. To cater for subordinate clauses in cases such as reported speech, we add a caveat -S and R persist as a discourse is read in textual order, for each context. We can define a context as an environment in which events occur, such as the main body of the document, reported speech, or the conditional world of an if clause (Hornstein, 1990). For example:
(4) Emmanuel had said "This will explode!", but changed his mind.
Here, said and changed share speech and reference points. Emmanuel's statement occurs in a separate context, which the opening quote instantiates, ended by the closing quote (unless we continue his reported speech later), and begins with an S that occurs at the same time as said's E. This persistence must be explicitly stated in RTMML.
Capturing the time points with TimeML
TimeML is a rich, developed standard for temporal annotation. There exist valuable resources annotated with TimeML that have withstood significant scrutiny. However TimeML does not address the issue of annotating Reichenbach's tense model with the goal of understanding reference time or creating resources that enable detailed examination of the links between verbal events in discourse.
Although TimeML permits the annotation of tense for <EVENT>s, it is not possible to unambiguously map its tenses to Reichenbach's model. This restricts how well we can reason about verbal events using TimeML-annotated documents. Of the usable information for mapping TimeML annotations to Reichenbach's time points, TimeML's tense attribute describes the relation between S and E, and its aspect attribute can distinguish between PERFECTIVE and NONE -that is, between E < R and a conflated class of (E = R)∨(R < E). Cases where R < E are often awkward in English (as in Table 1), and may even lack a distinct syntax; the French Je vais dormir and Je dormirai both have the same TimeML representation and both translate to I will sleep in English, despite having different time point arrangements.
It is not possible to describe or build relations to reference points at all in TimeML. It may be possible to derive the information about S, E and R directly represented in our scheme from a TimeML annotation, though there are cases -especially outside of English -where it is not possible to capture the full nuance of Reichenbach's model using TimeML. An RTMML annotation permits simple reasoning about reference time, and assists the labelling of temporal links between verb events in cases where TimeML's tense and aspect annotation is insufficient. This is why we propose an annotation, and not a technique for deriving S, E, and R from TimeML.
Overview of RTMML
The annotation schema RTMML is intended to describe the verbal event structure detailed in Reichenbach (1947), in order to permit the relative temporal positioning of reference, event, and speech times. A simple approach is to define a markup that only describes the information that we are interested in, and can be integrated with TimeML. For expositional clarity we use our own tags but it is possible (with minor modifications) to integrate them with TimeML as an extension to the standard.
Our procedure is as follows. Mark all times and verbal events (e.g. TimeML TIMEX3s and those EVENTs whose lexical realisation is a verb) in a discourse, as T 1 ..T n and V 1 ..V n respectively. We mark times in order to resolve positional uses of the reference point. For each verbal event V i , we may describe or assign three time points S i , E i , and R i . Further, we will relate T , S, E and R points using disjunctions of the operators <, = and >. It is not necessary to define a unique set of these points for each verb -in fact, linking them across a discourse helps us temporally order events and track reference time. We can also define a "discourse creation time," and call this S D .
(5) John said, "Yes, we have left".
If we let said be V 1 and left be V 2 :
• $S_1 = S_D$

From the tense of $V_1$ (simple past), we can say:

• $R_1 = S_1$
• $E_1 < R_1$

As $V_2$ is reported speech, it is true that:

• $S_2 = E_1$

Further, as $V_2$ is anterior present:

• $R_2 = S_2$
• $E_2 < R_2$

As the $=$ and $<$ relations are transitive, we can deduce an event ordering $E_2 < E_1$.
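This deduction can be mechanised. Below is a minimal Python sketch (the compose rule and the chaining are our own illustration, not part of RTMML itself) that chains $E_2 < R_2$, $R_2 = S_2$ and $S_2 = E_1$ to recover $E_2 < E_1$:

    def compose(r1, r2):
        # '<' and '=' are transitive: a same-direction chain collapses to
        # '<' if any step is strict, and to '=' otherwise.
        if {r1, r2} <= {"<", "="}:
            return "<" if "<" in (r1, r2) else "="
        return None  # mixed directions: nothing can be concluded here

    assert compose(compose("<", "="), "=") == "<"  # hence E2 < E1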
Annotation schema
The annotation language we propose is called RTMML, for Reichenbach Tense Model Markup Language. We use standoff annotation. This keeps the text uncluttered, in the spirit of ISO LAF and ISO SemAF-Time. Annotations reference tokens by index in the text, as can be seen in the examples below. Token indices begin from zero. We explicitly state the segmentation plan with the <seg> element, as described in Lee and Romary (2010) and ISO DIS 24614-1 WordSeg-1.
The general speech time of a document is defined with the <doc> element, which has one or two attributes: an ID, and (optionally) @time. The latter may have a normalised value, formatted according to TIMEX3 (Boguraev et al., 2005) or TIDES (Ferro et al., 2005), or simply be the string now.
Each <verb> element describes a tensed verbal group in a discourse.
The @target attribute references token offsets; it has the form target="#token0" or target="#range(#token7,#token10)" for a 4-token sequence. Comma-separated lists of offsets are valid, for situations where verb groups are non-contiguous. Every verb has a unique value in its @id attribute. The tense of a verb group is described using the attributes @view (with values simple, anterior or posterior) and @tense (past, present or future).

Table 2: The meaning of a certain link type between verbs or times a and b.

Relation name      Interpretation
POSITIONS          $T_a = R_b$
SAME_TIMEFRAME     $R_a = R_b [, R_c, \dots, R_x]$
REPORTS            $E_a = S_b$
The <verb> element has optional @s, @e and @r attributes; these are used for directly linking a verb's speech, event or reference time to a time point specified elsewhere in the annotation. One can reference document creation time with a value of doc or a temporal expression with its id (for example, t1). To reference the speech, event or reference time of other verbs, we use hash references to the event followed by a dot and then the character s, e or r; e.g., v1's reference time is referred to as #v1.r.
As every tensed verb always has exactly one S, E and R, and these points do not hold specific values or have a position on an absolute scale, we do not attempt to directly annotate them or place them on an absolute scale. One might think that the relations should be expressed in XML links; however, this would require reifying time points, whereas the information of interest is stored in the relations between them, so we focus on the relations between these points for each <verb>. To capture these internal relations (as opposed to relations between the S, E and R of different verbs), we use the attributes se, er and sr. These attributes take a value that is a disjunction of <, = and >.
Time-referring expressions are annotated using the <timerefx> element. This has an @id attribute with a unique value, and a @target, as well as an optional @value which works in the same way as the <doc> element's @time attribute.
<rtmml>
  Yesterday, John ate well.
  <seg type="token" />
  <doc time="now" />
  <timerefx xml:id="t1" target="#token0" />
  <verb xml:id="v1" target="#token3" view="simple" tense="past"
        sr=">" er="=" se=">" r="t1" s="doc" />
</rtmml>
In this example, we have defined a time Yesterday as t1 and a verbal event ate as v1. We have categorised the tense of v1 within Reichenbach's nomenclature, using the verb element's @view and @tense attributes.
Next, we directly describe the reference point of v1, as being the same as the time t1. Finally, we say that this verb is uttered at the same time as the whole discourse -that is, S v1 = S D . In RTMML, if the speech time of a verb is not otherwise defined (directly or indirectly) then it is S D . In cases of multiple voices with distinct speech times, if a speech time is not defined elsewhere, a new one may be instantiated with a string label; we recommend the formatting s, e or r followed by the verb's ID.
This sentence includes a positional use of the reference point, annotated in v1 when we say r="t1". To simplify the annotation task, and to verbosely capture a use of the reference point, RTMML permits an alternative annotation with the <rtmlink> element. This element takes as arguments a relation and a set of times and/or verbs. Possible relation types are POSITIONS, SAME_TIMEFRAME (annotating permanence of the reference point) and REPORTS for reported speech; the meanings of these are given in Table 2. In the above markup, we could replace the <verb> element with the following:

<verb xml:id="v1" target="#token3" view="simple" tense="past"
      sr=">" er="=" se=">" s="doc" />
<rtmlink xml:id="l1" type="POSITIONS">
  <link source="#t1" />
  <link target="#v1" />
</rtmlink>

When more than two entities are listed as targets, the relation is taken as being between an optional source entity and each of the target entities. Moving inter-verbal links to the <rtmlink> element helps fulfil the TEI P5 and LAF requirements that referencing and content structures be separated. Use of the <rtmlink> element is not compulsory, as not all instances of positional use or permanence of the reference point can be annotated using it; Reichenbach's original account gives an example in German.
Reasoning and inference rules
Our three relations <, = and > are all transitive. A minimal annotation is acceptable. The S, E and R points of all verbs, S D and all T s can represent nodes on a graph, connected by edges labelled with the relation between nodes.
To position all times in a document with maximal accuracy, that is, to label as many edges in such a graph as possible, one can generate a closure by means of deducing relations. An agendabased algorithm is suitable for this, such as the one given in Setzer et al. (2005).
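A hedged sketch of such a closure procedure is given below (Python; the triple encoding of edges and the function name are our own, and only the '<' and '=' relations used above are handled, not full disjunctions):

    from collections import deque

    def closure(edges):
        # edges: a set of (a, rel, b) triples with rel in {'<', '='}
        known = set(edges)
        known |= {(b, "=", a) for (a, r, b) in edges if r == "="}  # '=' is symmetric
        agenda = deque(known)
        while agenda:
            a, r1, b = agenda.popleft()
            for (x, r2, c) in list(known):
                if x == b:  # compose a-r1-b with b-r2-c
                    new = (a, "<" if "<" in (r1, r2) else "=", c)
                    if new not in known:
                        known.add(new)
                        agenda.append(new)
        return known

    # A shared reference point orders two event times:
    facts = {("E1", "<", "R1"), ("R1", "=", "R2"), ("E2", "=", "R2")}
    assert ("E1", "<", "E2") in closure(facts)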
Integration with TimeML
To use RTMML as an ISO-TimeML extension, we recommend that instead of annotating and referring to <timerefx>s, one refers to <TIMEX3> elements using their tid attribute; references to <doc> will instead refer to a <TIMEX3> that describes document creation time. The attributes of <verb> elements (except xml:id and target) may be added to <MAKEINSTANCE> or <EVENT> elements, and <rtmlink>s will refer to event or event instance IDs.
Examples
In this section we will give developed examples of the RTMML notation, and show how it can be used to order events and position events on an external temporal scale.
Annotation example
Here we demonstrate RTMML annotation of two short pieces of text.
Fiction
From David Copperfield by Charles Dickens:
(6) When he had put up his things for the night he took out his flute, and blew at it, until I almost thought he would gradually blow his whole being into the large hole at the top, and ooze away at the keys.
We give the RTMML for the first six verbal events from Example 6 in Figure 1. The fifth, v5, exists in a context that is instantiated by v4; its reference time is defined as such. We can use one link element to show that v2, v3 and v4 all use the same reference time as v1. The temporal relation between the event times of v1 and v2 can be inferred from their shared reference time and their tenses; that is, given that v1 is anterior past and v2 simple past, we know $E_{v1} < R_{v1}$ and $E_{v2} = R_{v2}$. As our <rtmlink> states $R_{v1} = R_{v2}$, then $E_{v1} < E_{v2}$. Finally, v5 and v6 happen in the same context, described with a second SAME_TIMEFRAME link.
Editorial news
From an editorial piece in TimeBank (Pustejovsky et al., 2003) (AP900815-0044.tml):
(7) Saddam appeared to accept a border demarcation treaty he had rejected in peace talks following the August 1988 cease-fire of the eight-year war with Iran.
<doc time="1990-08-15T00:44" /> <!--appeared --> <verb xml:id="v1" target="#token1" view="simple" tense="past" /> <!--had rejected --> <verb xml:id="v2" target="#range(#token9,#token10)" view="anterior" tense="past" /> <rtmlink xml:id="l1" type="SAME_TIMEFRAME"> <link target="#v1" /> <link target="#v2" /> </rtmlink> Here, we relate the simple past verb appeared with the anterior past (past perfect) verb had rejected, permitting the inference that the first verb occurs temporally after the second. The corresponding TimeML (edited for conciseness) is: Saddam <EVENT eid="e74" class="I_STATE"> appeared</EVENT> to accept a border demarcation treaty he had <EVENT eid="e77" class="OCCURRENCE">rejected</EVENT> <MAKEINSTANCE eventID="e74" eiid="ei1568" tense="PAST" aspect="NONE" polarity="POS" pos="VERB"/> <MAKEINSTANCE eventID="e77" eiid="ei1571" tense="PAST" aspect="PERFECTIVE" polarity="POS" pos="VERB"/> In this example, we can see that the TimeML annotation includes the same information, but a significant amount of other annotation detail is present, cluttering the information we are trying to see. Further, these two <EVENT> elements are not directly linked, requiring transitive closure of the network described in a later set of <TLINK> elements, which are omitted here for brevity.
Linking events to calendrical references
<doc time="1850" mod="BEFORE" />
<!-- had put -->
<verb xml:id="v1" target="#range(#token2,#token3)" view="anterior" tense="past" />
<!-- took -->
<verb xml:id="v2" target="#token11" view="simple" tense="past" />
<!-- blew -->
<verb xml:id="v3" target="#token17" view="simple" tense="past" />
<!-- thought -->
<verb xml:id="v4" target="#token24" view="simple" tense="past" />
<!-- would gradually blow -->
<verb xml:id="v5" target="#range(#token26,#token28)" view="posterior" tense="past"
      se="=" er=">" sr=">" r="#v4.e" />
<!-- ooze -->
<verb xml:id="v6" target="#range(#token26,#token28)" view="posterior" tense="past"
      se="=" er=">" sr=">" />
<rtmlink xml:id="l1" type="SAME_TIMEFRAME">
  <link target="#v1" />
  <link target="#v2" />
  <link target="#v3" />
  <link target="#v4" />
</rtmlink>
<rtmlink xml:id="l2" type="SAME_TIMEFRAME">
  <link target="#v5" />
  <link target="#v6" />
</rtmlink>

Figure 1: RTMML for a passage from David Copperfield.

RTMML makes it possible to precisely describe the nature of links between verbal events and times, via positional use of the reference point. We will link an event to a temporal expression, and suggest a calendrical reference for that expression, allowing the events to be placed on a calendar. Consider the text below, from wsj_0533.tml in TimeBank.
(8) At the close of business Thursday, 5,745,188 shares of Connaught and C$44.3 million face amount of debentures, convertible into 1,826,596 common shares, had been tendered to its offer.
<doc time="1989-10-30" /> <!--close of business Thursday --> <timerefx xml:id="t1" target="#range(#token2,#token5)" /> <!--had been tendered --> <verb xml:id="v1" target="#range(#token25,#token27)" view="anterior" tense="past" /> <rtmlink xml:id="l1" target="#t1 #v1"> <link target="#t1" /> <link target="#v1" /> </rtmlink> This shows that the reference time of v1 is t1. As v1 is anterior, we know that the event mentioned occurred before close of business Thursday. Normalisation is not a task that RTMML addresses, but there are existing methods for deciding which Thursday is being referenced given the document creation date (Mazur and Dale, 2008); a time of day for close of business may be found in a gazetteer.
Comments on annotation
As can be seen in Table 1, there is not a one-to-one mapping from English tenses to the nine specified by Reichenbach. In some annotation cases, it is possible to see how to resolve such ambiguities. Even if view and tense are not clearly determinable, it is possible to define relations between S, E and R; for example, for arrangements corresponding to the simple future, S < E. In cases where ambiguities cannot be resolved, one may annotate a disjunction of relation types; in this example, we might say "S < R or S = R" with sr="<=".
Contexts seem to have a shared speech time, and the S − R relationship seems to be the same throughout a context. Sentences which contravene this (e.g. "By the time I ran, John will have arrived") are rather awkward.
RTMML annotation is not bound to a particular language. As long as a segmentation scheme (e.g. WordSeg-1) is agreed and there is a compatible system of tense and aspect, the model can be applied and an annotation created.
Conclusion and Future Development
Being able to recognise and represent reference time in discourse can help in disambiguating temporal reference, determining temporal relations between events and in generating appropriately tensed utterances. A first step in creating computational tools to do this is to develop an annotation schema for recording the relevant temporal information in discourse. To this end we have presented RTMML, our annotation for Reichenbach's model of tense in natural language.
We do not intend to compete with existing languages that are well-equipped to annotate temporal information in documents; RTMML may be integrated with TimeML. What is novel in RTMML is the ability to capture the abstract parts of tense in language. We can now annotate Reichenbach's time points in a document and then process them, for example, to observe interactions between temporal expressions and events, or to track reference time through discourse. This is not directly possible with existing annotation languages.
There are some extensions to Reichenbach's model of the tenses of verbs, which RTMML does not yet cater for. These include the introduction of a reference interval, as opposed to a reference point, from Dowty (1979), and Comrie's suggestion of a second reference point in some circumstances (Comrie, 1985). RTMML should cater for these extensions.
Further, we have preliminary annotation tools and have begun to create a corpus of annotated texts that are also in TimeML corpora. This will allow a direct evaluation of how well TimeML can represent Reichenbach's time points and their relations. To make use of Reichenbach's model in automatic annotation, given a corpus, we would like to apply machine learning techniques to the RTMML annotation task. Work in this direction should enable us to label temporal links and to anchor time expressions with complete accuracy where other systems have not succeeded.
Acknowledgements

The authors would like to thank David Elson for his valuable comments. The first author would also like to acknowledge the UK Engineering and Physical Science Research Council's support in the form of a doctoral studentship.
References

[Ahn et al. 2005] D. Ahn, S. F. Adafre, and M. de Rijke. 2005. Towards task-based temporal extraction and recognition. In Dagstuhl Seminar Proceedings, volume 5151.

[Boguraev et al. 2005] B. Boguraev, J. Castano, R. Gaizauskas, B. Ingria, G. Katz, B. Knippen, J. Littman, I. Mani, J. Pustejovsky, A. Sanfilippo, et al. 2005. TimeML 1.2.1: A Formal Specification Language for Events and Temporal Expressions.

[Comrie 1985] B. Comrie. 1985. Tense. Cambridge University Press.

[Dowty 1979] D. R. Dowty. 1979. Word Meaning and Montague Grammar. Kluwer.

[Elson and McKeown 2010] D. Elson and K. McKeown. 2010. Tense and Aspect Assignment in Narrative Discourse. In Proceedings of the Sixth International Conference on Natural Language Generation.

[Ferro et al. 2005] L. Ferro, L. Gerber, I. Mani, B. Sundheim, and G. Wilson. 2005. TIDES 2005 Standard for the Annotation of Temporal Expressions. Technical report, MITRE.

[for Standardization 2009a] International Organization for Standardization. 2009a. ISO DIS 24612 LRM - Language Annotation Framework (LAF). ISO/TC 37/SC 4/WG 2.

[for Standardization 2009b] International Organization for Standardization. 2009b. ISO DIS 24614-1 LRM - Word Segmentation of Text - Part 1: Basic Concepts and General Principles (WordSeg-1). ISO/TC 37/SC 4/WG 2.

[for Standardization 2009c] International Organization for Standardization. 2009c. ISO DIS 24617-1 LRM - Semantic Annotation Framework - Part 1: Time and Events (SemAF-Time). ISO/TC 37/SC 4/WG 2.

[Han et al. 2006] B. Han, D. Gates, and L. Levin. 2006. From language to time: A temporal expression anchorer. In Temporal Representation and Reasoning (TIME), pages 196-203.

[Hornstein 1990] N. Hornstein. 1990. As Time Goes By: Tense and Universal Grammar. MIT Press.

[Lee and Romary 2010] K. Lee and L. Romary. 2010. Towards Interoperability of ISO Standards for Language Resource Management. In International Conference on Global Interoperability for Language Resources.

[Mani and Wilson 2000] I. Mani and G. Wilson. 2000. Robust temporal processing of news. In Proceedings of the 38th Annual Meeting on ACL, pages 69-76. ACL.

[Mani et al. 2005] I. Mani, J. Pustejovsky, and R. Gaizauskas. 2005. The Language of Time: A Reader. Oxford University Press, USA.

[Mazur and Dale 2008] P. Mazur and R. Dale. 2008. What's the date? High accuracy interpretation of weekday names. In Proceedings of the 22nd International Conference on Computational Linguistics, Volume 1, pages 553-560. ACL.

[Mazur and Dale 2010] P. Mazur and R. Dale. 2010. WikiWars: A New Corpus for Research on Temporal Expressions. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 913-922. ACL.

[Portet et al. 2009] F. Portet, E. Reiter, A. Gatt, J. Hunter, S. Sripada, Y. Freer, and C. Sykes. 2009. Automatic generation of textual summaries from neonatal intensive care data. Artificial Intelligence, 173(7-8):789-816.

[Pustejovsky et al. 2003] J. Pustejovsky, P. Hanks, et al. 2003. The TimeBank corpus. In Corpus Linguistics, volume 2003, page 40.

[Reichenbach 1947] H. Reichenbach. 1947. The Tenses of Verbs. Elements of Symbolic Logic, pages 287-298.

[Setzer et al. 2005] A. Setzer, R. Gaizauskas, and M. Hepple. 2005. The role of inference in the temporal annotation and analysis of text. Language Resources and Evaluation, 39(2):243-265.

[Song and Cohen 1988] F. Song and R. Cohen. 1988. The interpretation of temporal relations in narrative. In Proceedings of the 7th National Conference of AAAI.

[Strötgen and Gertz 2010] J. Strötgen and M. Gertz. 2010. HeidelTime: High quality rule-based extraction and normalization of temporal expressions. In Proceedings of the 5th Workshop on Semantic Evaluation, pages 321-324. ACL.

[Verhagen et al. 2010] M. Verhagen, R. Saurí, T. Caselli, and J. Pustejovsky. 2010. SemEval-2010 task 13: TempEval-2. In Proceedings of the 5th Workshop on Semantic Evaluation, pages 57-62. ACL.
| [] |
[
"A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization",
"A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization"
] | [
"Li Wang lilianwang@tencent.com \nTencent Data Center\nSNG\n",
"Junlin Yao jyao@student.ethz.ch \nETH Zürich\n\n",
"Yunzhe Tao y.tao@columbia.edu \nColumbia University\n\n",
"Li Zhong \nTencent Data Center\nSNG\n",
"Wei Liu \nTencent AI Lab\n\n",
"Qiang Du \nColumbia University\n\n"
] | [
"Tencent Data Center\nSNG",
"ETH Zürich\n",
"Columbia University\n",
"Tencent Data Center\nSNG",
"Tencent AI Lab\n",
"Columbia University\n"
] | [] | In this paper, we propose a deep learning approach to tackle the automatic summarization tasks by incorporating topic information into the convolutional sequence-to-sequence (ConvS2S) model and using self-critical sequence training (SCST) for optimization. Through jointly attending to topics and word-level alignment, our approach can improve coherence, diversity, and informativeness of generated summaries via a biased probability generation mechanism. On the other hand, reinforcement training, like SCST, directly optimizes the proposed model with respect to the non-differentiable metric ROUGE, which also avoids the exposure bias during inference. We carry out the experimental evaluation with state-of-the-art methods over the Gigaword, DUC-2004, and LCSTS datasets. The empirical results demonstrate the superiority of our proposed method in the abstractive summarization. | 10.24963/ijcai.2018/619 | [
"https://arxiv.org/pdf/1805.03616v2.pdf"
] | 13,663,262 | 1805.03616 | 2aca75709b589dd124aab89717048b416253fdd2 |
A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization
Li Wang lilianwang@tencent.com
Tencent Data Center
SNG
Junlin Yao jyao@student.ethz.ch
ETH Zürich
Yunzhe Tao y.tao@columbia.edu
Columbia University
Li Zhong
Tencent Data Center
SNG
Wei Liu
Tencent AI Lab
Qiang Du
Columbia University
A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization
In this paper, we propose a deep learning approach to tackle the automatic summarization tasks by incorporating topic information into the convolutional sequence-to-sequence (ConvS2S) model and using self-critical sequence training (SCST) for optimization. Through jointly attending to topics and word-level alignment, our approach can improve coherence, diversity, and informativeness of generated summaries via a biased probability generation mechanism. On the other hand, reinforcement training, like SCST, directly optimizes the proposed model with respect to the non-differentiable metric ROUGE, which also avoids the exposure bias during inference. We carry out the experimental evaluation with state-of-the-art methods over the Gigaword, DUC-2004, and LCSTS datasets. The empirical results demonstrate the superiority of our proposed method in the abstractive summarization.
Introduction
Automatic text summarization has played an important role in a variety of natural language processing (NLP) applications, such as news headline generation [Kraaij et al., 2002] and feed stream digests [Barzilay and McKeown, 2005]. It is of interest to generate informative and representative natural language summaries which are capable of retaining the main ideas of source articles. The key challenges in automatic text summarization are correctly evaluating and selecting important information, efficiently filtering redundant contents, and properly aggregating related segments into human-readable summaries. Compared to other NLP tasks, automatic summarization has its own difficulties. For example, unlike machine translation tasks where input and output sequences often share similar lengths, summarization tasks are more likely to have input and output sequences greatly imbalanced in length. Besides, machine translation tasks usually have some direct word-level alignment between input and output sequences, which is less obvious in summarization.
There are two genres of automatic summarization techniques, namely extraction and abstraction. The goal of extractive summarization [Neto et al., 2002] is to produce a summary by selecting important pieces of the source document and concatenating them verbatim, while abstractive summarization generates summaries based on the core ideas of the document, so the summaries may be paraphrased in more general terms. Unlike extraction, abstractive methods should be able to properly rewrite the core ideas of the source document and ensure that the generated summaries are grammatically correct and human readable, which is close to the way humans do summarization and thus is of interest to us in this paper.
Recently, deep neural network models have been widely used for NLP tasks such as machine translation [Bahdanau et al., 2014] and text summarization [Nallapati et al., 2016b]. In particular, the attention-based sequence-to-sequence framework [Bahdanau et al., 2014] with recurrent neural networks (RNNs) [Sutskever et al., 2014] prevails in NLP tasks. However, RNN-based models are more prone to gradient vanishing due to their chain structure of non-linearities, compared to the hierarchical structure of CNN-based models [Dauphin et al., 2016]. In addition, the temporal dependence among the hidden states of RNNs prevents parallelization over the elements of a sequence, which makes training inefficient.
In this paper, we propose a new approach based on the convolutional sequence-to-sequence (ConvS2S) framework [Gehring et al., 2017] jointly with a topic-aware attention mechanism. To the best of our knowledge, this is the first work for automatic abstractive summarization that incorporates topic information, which provides themed and contextual alignment information to deep learning architectures. In addition, we also optimize our proposed model by employing reinforcement training [Paulus et al., 2017]. The main contributions of this paper include:
• We propose a joint attention and biased probability generation mechanism to incorporate the topic information into an automatic summarization model, which introduces contextual information to help the model generate more coherent summaries with increased diversity and informativeness. • We employ the self-critical sequence training technique in ConvS2S to directly optimize the model with respect to the non-differentiable summarization metric ROUGE, which also remedies the exposure bias issue.
• Extensive experimental results on three benchmark datasets demonstrate that by fully exploiting the power of the ConvS2S architecture enhanced by topic embedding and SCST, our proposed model yields high accuracy for abstractive summarization, advancing the state-of-the-art methods.
Related Work
Automatic text summarization has been widely investigated. Many approaches have been proposed to address this challenging task. Various methods [Neto et al., 2002] focus on extractive summarization, which selects important contents of text and combines them verbatim to produce a summary. On the other hand, abstractive summarization models are able to produce a grammatical summary with novel expressions, most of which [Rush et al., 2015; Nallapati et al., 2016a] are built upon the neural attention-based sequence-to-sequence framework [Sutskever et al., 2014].
The predominant models are based on RNNs [Nallapati et al., 2016b; Shen et al., 2016; Paulus et al., 2017], where the encoder and decoder are constructed using either Long Short-Term Memory (LSTM) [Hochreiter and Schmidhuber, 1997] or the Gated Recurrent Unit (GRU) [Cho et al., 2014]. However, very few methods have explored the performance of convolutional structures in summarization tasks. Compared to RNNs, convolutional neural networks (CNNs) enjoy several advantages, including efficient training by leveraging parallel computing, and mitigating the gradient vanishing problem due to fewer non-linearities [Dauphin et al., 2016]. Notably, the recently proposed gated convolutional network [Dauphin et al., 2016; Gehring et al., 2017] outperforms state-of-the-art RNN-based models in language modeling and machine translation tasks. While the ConvS2S model is also evaluated on abstractive summarization [Gehring et al., 2017], there are several limitations. First, the model is trained by minimizing a maximum-likelihood loss, which is sometimes inconsistent with the quality of a summary and with metrics that are evaluated on whole sentences, such as ROUGE [Lin, 2004]. In addition, exposure bias [Ranzato et al., 2015] occurs due to only exposing the model to the training data distribution instead of its own predictions. More importantly, the ConvS2S model utilizes only word-level alignment, which may be insufficient for summarization and prone to incoherent, over-generalized summaries. Therefore, higher-level alignment could be a potential assist. For example, topic information has been introduced to an RNN-based sequence-to-sequence model [Xing et al., 2017] for chatbots to generate more informative responses.
Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model
In this section, we propose the Reinforced Topic-Aware Convolutional Sequence-to-Sequence model, which consists of a convolutional architecture with both input words and topics, a joint multi-step attention mechanism, a biased generation structure, and a reinforcement learning procedure. A graphical illustration of the topic-aware convolutional architecture can be found in Figure 1.

Figure 1: A graphical illustration of the topic-aware convolutional architecture. Word and topic embeddings of the source sequence are encoded by the associated convolutional blocks (bottom left and bottom right). Then we jointly attend to words and topics by computing dot products of decoder representations (top left) and word/topic encoder representations. Finally, we produce the target sequence through a biased probability generation mechanism.
ConvS2S Architecture
We exploit the ConvS2S architecture [Gehring et al., 2017] as the basic infrastructure of our model. In this paper, two convolutional blocks are employed, associated with the word-level and topic-level embeddings, respectively. We introduce the former in this section and the latter in the next, along with the new joint attention and the biased generation mechanism.
Position Embeddings

Let $x = (x_1, \dots, x_m)$ denote the input sentence. We first embed the input elements (words) in a distributional space as $w = (w_1, \dots, w_m)$, where $w_i \in \mathbb{R}^d$ are rows of a randomly initialized matrix $D^{word} \in \mathbb{R}^{V \times d}$, with $V$ being the size of the vocabulary. We also add a positional embedding $p = (p_1, \dots, p_m)$, with $p_i \in \mathbb{R}^d$, to retain the order information. Thus, the final embedding for the input is $e = (w_1 + p_1, \dots, w_m + p_m)$. Similarly, let $q = (q_1, \dots, q_n)$ denote the embedding for the output elements that were already generated by the decoder and are being fed back to the next step.
Convolutional Layer
Both encoder and decoder networks are built by stacking several convolutional layers. Suppose that the kernel has width $k$ and the input embedding dimension is $d$. The convolution takes a concatenation of $k$ input elements $X \in \mathbb{R}^{kd}$ and maps it to an output element $Y \in \mathbb{R}^{2d}$, namely,

$$Y = f_{\mathrm{conv}}(X) := W_Y X + b_Y, \qquad (1)$$

where the kernel matrix $W_Y \in \mathbb{R}^{2d \times kd}$ and the bias term $b_Y \in \mathbb{R}^{2d}$ are the parameters to be learned.
Rewriting the output as $Y = [A; B]$, where $A, B \in \mathbb{R}^d$, the gated linear unit (GLU) [Dauphin et al., 2016] is given by

$$g([A; B]) = A \otimes \sigma(B), \qquad (2)$$

where $\sigma$ is the sigmoid function, $\otimes$ is the point-wise multiplication, and the output of the GLU is in $\mathbb{R}^d$.
We denote the outputs of the $l$-th layer as $h^l = (h^l_1, \dots, h^l_n)$ for the decoder, and $z^l = (z^l_1, \dots, z^l_m)$ for the encoder. Taking the decoder for illustration, the convolution unit $i$ on the $l$-th layer is computed with residual connections as

$$h^l_i = g \circ f_{\mathrm{conv}}\left(\left[h^{l-1}_{i-k/2}; \cdots; h^{l-1}_{i+k/2}\right]\right) + h^{l-1}_i, \qquad (3)$$

where $h^l_i \in \mathbb{R}^d$ and $\circ$ is the function composition operator.
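As a concrete rendering of Eqs. (1)-(3), here is a minimal PyTorch sketch of one gated convolutional block with a residual connection. It is not the authors' implementation: the class name is ours, the symmetric padding ignores the causal masking a real decoder needs, and d = 256 and k = 3 merely echo the settings of Section 4.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GatedConvBlock(nn.Module):
        def __init__(self, d=256, k=3):
            super().__init__()
            # Eq. (1): map k concatenated d-dim inputs to a 2d-dim output
            self.conv = nn.Conv1d(d, 2 * d, kernel_size=k, padding=k // 2)

        def forward(self, h):           # h: (batch, d, seq_len)
            y = self.conv(h)            # (batch, 2d, seq_len)
            return F.glu(y, dim=1) + h  # Eqs. (2)-(3): A ⊗ σ(B), plus residual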
Multi-step Attention
The attention mechanism is introduced to make the model access historical information. To compute the attention, we first embed the current decoder state $h^l_i$ as

$$d^l_i = W^l_d h^l_i + b^l_d + q_i, \qquad (4)$$

where $q_i \in \mathbb{R}^d$ is the embedding of the previous decoded element, and the weight matrix $W^l_d \in \mathbb{R}^{d \times d}$ and bias $b^l_d \in \mathbb{R}^d$ are the parameters to be learned.
The attention weight $\alpha^l_{ij}$ of state $i$ and source input element $j$ is computed as a dot product between $d^l_i$ and the output $z^{u_o}_j$ of the last encoder block $u_o$, namely,

$$\alpha^l_{ij} = \frac{\exp(d^l_i \cdot z^{u_o}_j)}{\sum_{t=1}^{m} \exp(d^l_i \cdot z^{u_o}_t)}. \qquad (5)$$
The conditional input $c^l_i \in \mathbb{R}^d$ for the current decoder layer is computed as

$$c^l_i = \sum_{j=1}^{m} \alpha^l_{ij} \left(z^{u_o}_j + e_j\right), \qquad (6)$$

where $e_j$ is the input element embedding that can provide point information about a specific input element. Once $c^l_i$ has been computed, it is added to the output of the corresponding decoder layer $h^l_i$ and serves as a part of the input to $h^{l+1}_i$.
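A batched PyTorch sketch of Eqs. (5)-(6) follows; the tensor layout and the function name are our assumptions, not the reference code.

    import torch

    def conditional_input(d_l, z_u, e):
        # d_l: (batch, n, d) decoder summaries; z_u, e: (batch, m, d)
        scores = torch.bmm(d_l, z_u.transpose(1, 2))  # dot products of Eq. (5)
        alpha = torch.softmax(scores, dim=-1)         # attention weights
        return torch.bmm(alpha, z_u + e)              # Eq. (6): (batch, n, d)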
Topic-Aware Attention Mechanism
A topic model is a type of statistical model for discovering the abstract ideas or hidden semantic structures that occur in a collection of source articles. In this paper, we employ the topic model to acquire latent knowledge of documents and incorporate a topic-aware mechanism into the multi-step attention-based ConvS2S model, which is expected to bring prior knowledge for text summarization. Now we present the novel approach on how to incorporate the topic model into the basic ConvS2S framework via the joint attention mechanism and biased probability generation process.
Topic Embeddings
The topic embeddings are obtained by classical topic models such as Latent Dirichlet Allocation (LDA) [Blei et al., 2003]. During pre-training, we use LDA to assign topics to the input texts. The top $N$ non-universal words with the highest probabilities of each topic are chosen into the topic vocabulary $K$. More details will be given in Section 4. While the vocabulary of texts is denoted as $V$, we assume that $K \subset V$. Given an input sentence $x = (x_1, \dots, x_m)$, if a word $x_i \notin K$, we embed it as before to attain $w_i$. However, if a word $x_i \in K$, we can embed this topic word as $t_i \in \mathbb{R}^d$, which is a row in the topic embedding matrix $D^{topic} \in \mathbb{R}^{K \times d}$, where $K$ is the size of the topic vocabulary. The embedding matrix $D^{topic}$ is normalized from the corresponding pre-trained topic distribution matrix, whose rows are proportional to the number of times that each word is assigned to each topic. In this case, the positional embedding vectors are also added to the encoder and decoder elements, respectively, to obtain the final topic embeddings $r = (r_1, \dots, r_m)$ and $s = (s_1, \dots, s_n)$.
Joint Attention
Again we take the decoder for illustration. Following the convolutional layer introduced before, we can obtain the convolution unit $i$ on the $l$-th layer of the topic-level decoder as $\tilde{h}^l_i \in \mathbb{R}^d$. Similar to (4), we have

$$\tilde{d}^l_i = \tilde{W}^l_d \tilde{h}^l_i + \tilde{b}^l_d + s_i. \qquad (7)$$
We then incorporate the topic information into the model through a joint attention mechanism. During decoding, the joint attention weight $\beta^l_{ij}$ is given by

$$\beta^l_{ij} = \frac{\exp(d^l_i \cdot z^{u_o}_j + \tilde{d}^l_i \cdot z^{u_t}_j)}{\sum_{t=1}^{m} \exp(d^l_i \cdot z^{u_o}_t + \tilde{d}^l_i \cdot z^{u_t}_t)}, \qquad (8)$$

where $z^{u_t}_j$ is the output of the last topic-level encoder block $u_t$. Then the conditional input $\tilde{c}^l_i \in \mathbb{R}^d$ is computed as

$$\tilde{c}^l_i = \sum_{j=1}^{m} \beta^l_{ij} \left(z^{u_t}_j + r_j\right). \qquad (9)$$
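The joint attention of Eqs. (8)-(9) differs from the word-level case only in that the two dot products are summed before the softmax; a sketch under the same illustrative conventions:

    import torch

    def joint_conditional_input(d_l, d_topic, z_word, z_topic, r):
        # all arguments: (batch, len, d); word and topic logits are summed, Eq. (8)
        logits = (torch.bmm(d_l, z_word.transpose(1, 2))
                  + torch.bmm(d_topic, z_topic.transpose(1, 2)))
        beta = torch.softmax(logits, dim=-1)
        return torch.bmm(beta, z_topic + r)  # Eq. (9)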
In the joint attention mechanism, both $\tilde{c}^l_i$ and $c^l_i$ are added to the output of the corresponding decoder layer $\tilde{h}^l_i$ and form a part of the input to $\tilde{h}^{l+1}_i$.

Biased Probability Generation

Finally, we compute a distribution over all possible next target elements $y_{i+1} \in \mathbb{R}^T$, namely

$$p_\theta(y_{i+1}) := p(y_{i+1} \mid y_1, \dots, y_i, x) \in \mathbb{R}^T, \qquad (10)$$

by transforming the top word-level decoder outputs $h^{L_o}$ and topic-level decoder outputs $\tilde{h}^{L_t}$ via a linear layer $\Psi(\cdot)$, which is computed by

$$\Psi(h) = W_o h + b_o, \qquad (11)$$

where $W_o \in \mathbb{R}^{T \times d}$ and $b_o \in \mathbb{R}^T$ are the parameters to be learned. Then the biased generation distribution is given as

$$p_\theta(y_{i+1}) = \frac{1}{Z}\left(\exp\big(\Psi(h^{L_o}_i)\big) + \exp\big(\Psi(\tilde{h}^{L_t}_i)\big) \otimes I_{\{w \in K\}}\right), \qquad (12)$$

where $Z$ is the normalizer, $h^{L_o}_i$ and $\tilde{h}^{L_t}_i$ denote the $i$-th top decoder outputs of word and topic, respectively, and $I$ is the one-hot indicator vector of each candidate word $w$ in $y_{i+1}$. When the candidate word $w$ is a topic word, we bias the generation distribution by the topic information. Otherwise, we ignore the topic part. To some extent, the complexity of the search space is reduced by introducing the topic bias, since important words are more likely to be generated directly.
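A sketch of Eq. (12) is given below (names are ours; the naive exponentials shown here would need log-space arithmetic in a real implementation):

    import torch

    def biased_generation(h_word, h_topic, W_o, b_o, topic_mask):
        # h_word, h_topic: (batch, d); W_o: (T, d); b_o: (T,)
        # topic_mask: (T,) holding 1.0 for topic-vocabulary words, 0.0 otherwise
        scores = (torch.exp(h_word @ W_o.t() + b_o)
                  + torch.exp(h_topic @ W_o.t() + b_o) * topic_mask)
        return scores / scores.sum(dim=-1, keepdim=True)  # the normalizer Z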
Reinforcement Learning
The teacher forcing algorithm [Williams and Zipser, 1989] aims to minimize the maximum-likelihood loss at each decoding step, namely,

$$L_{ml} = -\sum_{i=1}^{T} \log p_\theta(y^*_i \mid y^*_1, y^*_2, \dots, y^*_{i-1}, x), \qquad (13)$$

where $x$ refers to an input sequence and $y^* = (y^*_1, y^*_2, \dots, y^*_T)$ is the corresponding ground-truth output sequence.
Minimizing the objective in Eq. (13) often produces suboptimal results with respect to the evaluation metrics, such as ROUGE, which measures the sentence-level accuracy of the generated summaries. The sub-optimality is related to the problem called exposure bias [Ranzato et al., 2015], which is caused by only exposing a model to the distribution of the training data instead of its own distribution. During the training process, models are fed with ground-truth output sequences to predict the next word, whereas during inference they generate the next word given the predicted words as inputs. Therefore, in the test process, the error of each step accumulates and leads to a deterioration of performance.

The second reason for sub-optimality comes from the flexibility of summaries. The maximum-likelihood objective rewards models that can predict exactly the same summaries as references while penalizing those that produce different texts, even when they are semantically similar. Providing multiple reference summaries is helpful yet insufficient, since there are many ways to rephrase a given summary. Therefore, minimizing the objective in Eq. (13) neglects this intrinsic property of summarization. ROUGE, on the other hand, provides more flexible evaluation, encouraging models to focus more on semantic meanings than on word-level correspondences.
In order to address such issues, we utilize self-critical sequence training (SCST) [Rennie et al., 2016], a policy gradient algorithm for reinforcement learning, to directly maximize the non-differentiable ROUGE metric. During reinforcement learning, we generate two output sequences given the input sequence $x$. The first sequence, $\hat{y}$, is obtained by greedily selecting words that maximize the output probability distribution, and the other output sequence, $y^s$, is generated by sampling from the distribution. After obtaining the ROUGE scores of both sequences as our rewards, i.e., $r(y^s)$ and $r(\hat{y})$, we minimize the reinforcement loss

$$L_{rl} = -\big(r(y^s) - r(\hat{y})\big) \log p_\theta(y^s), \qquad (14)$$

and update the model parameters by gradient descent techniques. With SCST, we can directly optimize the discrete evaluation metric. In addition, the "self-critical" test-time estimate of the reward $r(\hat{y})$ provides a simple yet effective baseline.
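Assuming the per-sequence log-probability and the two ROUGE rewards have already been computed, the SCST update of Eq. (14) is a one-liner; the function and argument names below are ours:

    def scst_loss(log_prob_sample, reward_sample, reward_greedy):
        # reward_greedy = r(ŷ) serves as the self-critical baseline
        advantage = reward_sample - reward_greedy
        return -advantage * log_prob_sample  # Eq. (14)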
Topic Information
The classical LDA with Gibbs Sampling technique is used to pre-train the corpus for topic embedding initialization and provide candidates for the biased probability generation process. The topic embedding values are normalized to a distribution with mean zero and variance of 0.1 for adaption to the neural network structure. In this paper, we pick top N = 200 words with the highest probabilities in each topic to obtain the topic word set. Note that the universal words are filtered out during pre-training. Randomly selected examples of topic words of the Gigaword corpus are presented in Table 1.
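A possible rendering of this pre-training step with gensim is sketched below. Note that the paper uses Gibbs-sampled LDA while gensim's LdaModel is variational, so this is only an approximation; the function name and the stop-word filter standing in for "universal words" are our assumptions.

    from gensim import corpora, models

    def topic_vocabulary(tokenized_docs, num_topics=20, top_n=200, universal=frozenset()):
        dictionary = corpora.Dictionary(tokenized_docs)
        bow = [dictionary.doc2bow(doc) for doc in tokenized_docs]
        lda = models.LdaModel(bow, num_topics=num_topics, id2word=dictionary)
        vocab = set()
        for topic_id in range(num_topics):
            for word, _prob in lda.show_topic(topic_id, topn=top_n):
                if word not in universal:  # filter out universal words
                    vocab.add(word)
        return vocab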
Model Parameters and Optimization
Table 2: Accuracy on the Gigaword corpus in terms of the full-length ROUGE-1 (RG-1), ROUGE-2 (RG-2), and ROUGE-L (RG-L). Best performance on each score is displayed in boldface.

We employ six convolutional layers for both the encoder and decoder. All embeddings, including the initialized embedding and the output produced by the decoder before the final linear layer, have a dimensionality of 256. We also adopt the same dimensionality for the size of the linear layer mapping between hidden and embedding states. We use a learning rate of 0.25 and reduce it by a decay rate of 0.1 once the validation ROUGE score stops increasing after each epoch, until the learning rate falls below $10^{-5}$. We first train the basic topic-aware convolutional model with respect to a standard maximum-likelihood objective, and then switch to further minimize a mixed training objective [Paulus et al., 2017], incorporating the reinforcement learning objective $L_{rl}$ and the original maximum likelihood $L_{ml}$, which is given as
$$L_{mixed} = \lambda L_{rl} + (1 - \lambda) L_{ml}, \qquad (15)$$
where the scaling factor λ is set to be 0.99 in our experiments. Moreover, we choose the ROUGE-L metric as the reinforcement reward function.
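For completeness, the mixed objective of Eq. (15) in the same sketched style:

    def mixed_loss(l_rl, l_ml, lam=0.99):
        # lambda = 0.99 weights the objective heavily toward the RL term
        return lam * l_rl + (1.0 - lam) * l_ml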
Results and Analysis
We follow the existing work and adopt the ROUGE metric [Lin, 2004] for evaluation.
Gigaword Corpus
We demonstrate the effectiveness of our proposed model via a step-by-step justification. First, the basic ConvS2S structure with either the topic-aware model or reinforcement learning is tested. Then we combine the two to show the performance of our Reinforced-Topic-ConvS2S model. We report the full-length F-1 scores of the ROUGE-1 (RG-1), ROUGE-2 (RG-2), and ROUGE-L (RG-L) metrics and compare the results with various neural abstractive summarization methods, which are presented in Table 2. The ABS and ABS+ models are attention-based neural models for text summarization. The RAS-Elman model introduces a conditional RNN, in which the conditioner is provided by a convolutional attention-based encoder. The words-lvt5k-1sent model is also an RNN-based attention model, which implements a large-vocabulary trick. Besides, RNN+MRT employs the minimum risk training strategy, which directly optimizes model parameters at the sentence level with respect to the evaluation metrics. SEASS (beam) extends the sequence-to-sequence framework with a selective encoding model. The results demonstrate that both the topic-aware module and the reinforcement learning process can improve the accuracy of text summarization. Moreover, our proposed model exhibits the best scores of RG-1, RG-2 and RG-L. In addition, [Zhou et al., 2017] further selects 2000 pairs of summaries as an internal test set of Gigaword. We also evaluate our proposed model on this set and present the results in Table 3. Again, our proposed model achieves the best performance in terms of all three ROUGE scores.

Table 4: Examples of summaries.

D: the sri lankan government on wednesday announced the closure of government schools with immediate effect as a military campaign against tamil separatists escalated in the north of the country.
R: sri lanka closes schools as war escalates
OR: sri lanka closes schools with immediate effect
OT: sri lanka closes schools in wake of military attacks

D: a us citizen who spied for communist east germany was given a suspended jail sentence of ## months here friday.
R: us citizen who spied for east germans given suspended sentence
OR: us man gets suspended jail term for communist spying
OT: us man jailed for espionage

D: malaysian prime minister mahathir mohamad indicated he would soon relinquish control of the ruling party to his deputy anwar ibrahim.
R: mahathir wants leadership change to be smooth
OR: malaysia's mahathir to relinquish control of ruling party
OT: malaysia's mahathir to submit control of ruling party

D: a french crocodile farm said it had stepped up efforts to breed one of the world's most endangered species, the indian UNK, with the hope of ultimately returning animals to their habitat in south asia.
R: french farm offers hope for endangered asian crocs UNK picture
OR: french crocodile farm steps up efforts to breed endangered species
OT: french crocodile farm says steps up efforts to save endangered species
To further demonstrate the improvement in readability and diversity brought by the topic information, we also present some qualitative results by randomly extracting several summaries from the test set. We compare the reference summaries to the summaries generated by our proposed model with and without the topic-aware mechanism. The examples are presented in Table 4. We can observe that when the topic model is adopted, it can generate some accurately delivered topic words which are not in
the reference summaries or the original texts. We believe that joint learning with a pre-trained topic model can offer more insightful information and improve the diversity and readability of the summarization.
DUC-2004 Dataset
Since the DUC-2004 dataset is an evaluation-only dataset, we first train the models on the Gigaword corpus and then evaluate their performance on the DUC dataset. Following standard practice, we report the recall-based scores of the RG-1, RG-2, and RG-L metrics in this experiment, which are given in Table 5. From Table 5, we observe that the proposed Reinforced-Topic-ConvS2S model achieves the best RG-1 and RG-L scores and is comparable on the RG-2 score. Due to the similarity of the two datasets, we do not provide qualitative summarization examples in this experiment.
LCSTS Dataset
We now consider the abstractive summarization task on the LCSTS dataset. Since this is a large-scale Chinese dataset, suitable data preprocessing approaches must be chosen first. Basically, there are two approaches to preprocessing the Chinese dataset: character-based and word-based. The former takes each Chinese character as the input, while the latter splits an input sentence into Chinese words. [Hu et al., 2015] provides baseline results for both preprocessing approaches. [Shen et al., 2016] also conducts experiments on the LCSTS corpus based on character inputs. [Gu et al., 2016] proposes a neural model, the COPYNET, with both character-based and word-based preprocessing by incorporating the copying mechanism into the sequence-to-sequence framework. In this work, we adopt the word-based approach, as we believe that in the case of Chinese, words are more relevant to the latent knowledge of documents than characters are. Since the standard ROUGE package 2 is usually used to evaluate English summaries, directly employing the package to evaluate Chinese summaries would yield underrated results. In order to evaluate the summarization on the LCSTS dataset, we follow the suggestion of [Hu et al., 2015] by mapping Chinese words/characters to numerical IDs, on which we then perform the ROUGE evaluation. Since not all previous work explicitly mentions whether word-based or character-based ROUGE metrics were reported, we evaluate our proposed model with both metrics in order to obtain a comprehensive comparison. The results of both scores are presented in Table 6, displayed as word-based score/character-based score.
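The ID-mapping trick can be illustrated with a small sketch: each Chinese word (or character) is replaced by a stable numeric ID so that an English-oriented ROUGE implementation can score the summaries. The example tokens below are taken from Table 7, and the helper name is ours.

```python
# Map Chinese tokens to numeric-ID strings before ROUGE scoring (sketch).
def to_ids(tokens, vocab):
    """Replace each token with a stable numeric ID, rendered as a string."""
    for tok in tokens:
        vocab.setdefault(tok, str(len(vocab)))
    return " ".join(vocab[tok] for tok in tokens)

vocab = {}
candidate = to_ids(["成都", "打造", "西部", "硅谷"], vocab)
reference = to_ids(["成都", "倾力", "打造", "西部", "硅谷"], vocab)
# The resulting language-agnostic ID strings can be fed to any standard
# ROUGE scorer, e.g. rouge_l_f1(candidate, reference).
print(candidate, "|", reference)
```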
From the results shown in Table 6, we see that our proposed model always achieves higher ROUGE scores at the character level than at the Chinese word level. We can also observe that the character-based results of our Reinforced-Topic-ConvS2S model outperform every other method. Regarding word-based ROUGE scores, our model obtains the best performance in terms of the RG-1 and RG-L metrics. However, our best model does not achieve an RG-2 score as good as its RG-1 and RG-L scores. We suspect that this may be partly caused by the biased probability generation mechanism that influences word order, which requires further study.
In addition to ROUGE scores, we also present some randomly picked examples of generated summaries in Table 7.
The original examples (in Chinese) are shown, and all the texts have been carefully translated into English for ease of reading. The examples demonstrate that the topic-aware mechanism can also improve diversity in Chinese summarization tasks.
Conclusion and Future Work
In this work, we propose a topic-aware ConvS2S model with reinforcement learning for abstractive text summarization. It is demonstrated that the new topic-aware attention mechanism introduces high-level contextual information for summarization. The proposed model advances state-of-the-art methods on various benchmark datasets. In addition, our model can produce summaries with better informativeness, coherence, and diversity.
Note that the experiments in this work are mainly based on sentence summarization. In the future, we aim to evaluate our model on datasets where the source texts are long paragraphs or multiple documents. Moreover, we also note that how to evaluate the performance of Chinese summaries remains an open problem, and it is of great interest to study this subject in the future.
Table 7: Examples of generated summaries on the LCSTS dataset. D: source document, R: reference summary, OR: output of the Reinforced-ConvS2S model, OT: output of the Reinforced-Topic-ConvS2S model. The words marked in blue are topic words not in the reference summaries. The words marked in red are topic words neither in the reference summaries nor in the source documents. All the texts are carefully translated from Chinese.

D: 根据#### 年# 月# 日国家发改委等部门联合发布的《关于进一步做好新能源汽车推广应用工作的通知》,#### 年的补贴金额相比#### 年将降低##% 。(分享自@ 电动邦)
D: According to the notice On the further promotion and application of new energy vehicles, jointly released by the National Development and Reform Commission and other departments on ##/##/#### (date), the compensation of #### (year) will be reduced by ##% compared to #### (year). (reposted from @electric nation)
R: 补贴金额再缩水#### 年新能源车政策解读
R: The compensation has been reduced again: #### (year) policy analysis of new energy automobiles
OR: #### 年新能源汽车推广应用工作的通知
OR: #### (year) notice on the promotion and application of new energy vehicles
OT: 国家发改委 发文 进一步做好 新能源汽车 推广应用工作
OT: The National Development and Reform Commission issued a policy on further promotion and application of new energy vehicles

D: 成都市软件和信息技术服务业近年来一直保持快速增长势头,稳居中西部城市之首,已成为我国西部"硅谷"。《#### 年度成都市软件和信息技术服务产业发展报告》日前发布……详情请见: @ 成都日报 @ 成都发布
D: In recent years, the service industry of software and information technology in Chengdu has been growing rapidly, ranking first among the cities in Midwest China. Chengdu has become China's western "Silicon Valley". The #### (year) Annual Chengdu Software and Information Technology Service Industry Development Report has been released recently ... see details: @ Chengdu Daily @ Chengdu release
R: 成都倾力打造西部"硅谷"
R: Chengdu makes every effort to build the western "Silicon Valley"
OR: 成都软件 和信息技术服务业发展报告发布
OR: The report of Chengdu software and information technology service industry development has been released
OT: 成都软件 和信息技术服务业 跃居 西部"硅谷"
OT: The service industry of software and information technology in Chengdu rockets to make it the western "Silicon Valley"

D: 新疆独特的区位优势,使其成为"一带一路"战略重要一环。记者从新疆发改委获悉,库尔勒至格尔木铁路先期开工段已进入招投标阶段,计划#### 年## 月中旬正式开工建设。#### 年计划完成投资## 亿元。
D: Xinjiang's unique geographical advantages make it an important part of The Belt and Road strategy. The reporter learned from the Xinjiang Development and Reform Commission that the initial railway construction project from Korla to Golmud had been on tendering procedure. The project was scheduled to officially launch in mid ## (month) of #### (year) and attract the investment of ## billion yuan by #### (year).
R: "一带一路"战略惠及新疆<unk>, 铁路年底开建
R: The Belt and Road strategy benefits Xinjiang <unk> and the railway construction starts by the end of #### (year)
OR: 新疆<unk> 至格尔木铁路计划#### 年开建
OR: The railway from <unk> to Golmud is scheduled to start construction in #### (year)
OT: 库尔勒至格尔木铁路拟 ## 月开工建设
OT: The railway construction project from Korla to Golmud is planned to launch in ## (month)

D: 昨日,商报记者从代表国内婚尚产业"风向标"的上海国际婚纱摄影器材展览会上了解到,部分商家开始将婚庆布置、婚礼流程、形式交给新人决定以迎合## 后新人的需求。此次展览会的规模超过# 万平方米,吸引参展企业超过### 家。
D: The day before, the reporters of Commercial News learned from the Shanghai International Wedding Photographic Equipment Exhibition, which has been leading and defining the domestic wedding industry, that some companies began to cater for the requirements of ##s-generation newly married couples by self-decided wedding decoration, wedding process and forms. The venue of the exhibition is more than # tens of thousands square meters, attracting more than ### exhibitors.
R: 婚庆"私人定制"受## 后新人追捧
R: The personalized wedding is admired by ##s-generation newly married couples
OR: 上海 国际婚纱摄影 器材展览会举行
OR: Shanghai International Wedding Photographic Equipment Exhibition was held
OT: 上海 国际婚纱摄影 器材展览会昨 举行
OT: Shanghai International Wedding Photographic Equipment Exhibition was held yesterday
Table 1: Examples of topic words for the Gigaword corpus.

No.  Topic Words
1    prime, minister, talks, leader, elections, visit
2    bird, flu, officials, opens, poultry, die
3    trade, free, EU, army, urges, ban
4    Bush, world, talks, foreign, investment, markets
5    world, Malaysia, Thailand, meet, Vietnam, U.S.
and improves training/test time consistency. Since during learning we set the baseline of the REINFORCE algorithm as the reward obtained by the current model in the test-time inference, the SCST exposes the model to its own distribution and encourages it to produce the sequence output ŷ with a high ROUGE score, avoiding the exposure bias issue and thus improving the test performance.
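A minimal sketch of this self-critical loss is given below, assuming PyTorch and that the ROUGE-L rewards of the sampled and greedily decoded (test-time) summaries have already been computed; the names are illustrative, not from the paper's code.

```python
# SCST/REINFORCE sketch: the model's own greedy reward is the baseline.
import torch

def scst_loss(logp_sampled, reward_sampled, reward_greedy):
    """L_rl = -(r(y_sampled) - r(y_greedy)) * log p(y_sampled | x)."""
    advantage = reward_sampled - reward_greedy  # no gradient through rewards
    return -(advantage * logp_sampled).mean()

# Toy usage: only the log-probabilities carry gradients into the model.
logp = torch.randn(4, requires_grad=True)   # summed log-probs of samples
loss = scst_loss(logp, torch.rand(4), torch.rand(4))
loss.backward()
```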
4 Experimental Setup
4.1 Datasets
In this paper, we consider three datasets to evaluate the performance of different methods in the abstractive text summarization task. First, we consider the annotated Gigaword corpus [Graff and Cieri, 2003], preprocessed identically to [Rush et al., 2015], which leads to around 3.8M training samples, 190K validation samples, and 1951 test samples for evaluation. The input summary pairs consist of the headline and the first sentence of the source articles. We also evaluate various models on the DUC-2004 test set 1 [Over et al., 2007]. The dataset is a standard summarization evaluation set, which consists of 500 news articles. Unlike the Gigaword corpus, each article in DUC-2004 is paired with four human-generated reference summaries, which makes the evaluation more objective. The last dataset for evaluation is a large corpus of Chinese short text summarization (LCSTS) dataset [Hu et al., 2015], collected and constructed from the Chinese microblogging website Sina Weibo. Following the setting in the original paper, we use the first part of the LCSTS dataset for training, which contains 2.4M text-summary pairs, and choose 725 pairs from the last part with high annotation scores as our test set.
Table 3: Accuracy on the internal test set of the Gigaword corpus in terms of the full-length RG-1, RG-2, and RG-L. Best performance on each score is displayed in boldface.
Table 5: Accuracy on the DUC-2004 dataset in terms of the recall-only RG-1, RG-2, and RG-L. Best performance on each score is displayed in boldface.
Table 6: Accuracy on the LCSTS dataset in terms of the full-length RG-1, RG-2, and RG-L. In the last three rows, the word-level ROUGE scores are presented on the left and the character-level scores on the right.

                                 RG-1 (F)       RG-2 (F)       RG-L (F)
character-based preprocessing
RNN context [Hu et al., 2015]    29.90          17.40          27.20
COPYNET [Gu et al., 2016]        34.40          21.60          31.30
RNN+MLE [Shen et al., 2016]      34.90          23.30          32.70
RNN+MRT [Shen et al., 2016]      38.20          25.20          35.40
word-based preprocessing
RNN context [Hu et al., 2015]    26.80          16.10          24.10
COPYNET [Gu et al., 2016]        35.00          22.30          32.00
Topic-ConvS2S                    38.94/44.42    21.05/32.65    37.03/42.09
Reinforced-ConvS2S               36.68/42.61    18.69/29.79    34.85/40.03
Reinforced-Topic-ConvS2S         39.93/45.12    21.58/33.08    37.92/42.68
1 http://duc.nist.gov/data.html
2 http://www.berouge.com/Pages/default.aspx
Acknowledgements
Qiang Du is supported in part by the US NSF TRIPODs project through CCF-170483.
References
[Bahdanau et al., 2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473, 2014.
[Barzilay and McKeown, 2005] Regina Barzilay and Kathleen R McKeown. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3):297-328, 2005.
[Blei et al., 2003] David M Blei, Andrew Y Ng, and Michael I Jordan. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993-1022, 2003.
[Cho et al., 2014] Kyunghyun Cho, Bart Van Merriënboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078, 2014.
[Chopra et al., 2016] Sumit Chopra, Michael Auli, and Alexander M Rush. Abstractive sentence summarization with attentive recurrent neural networks. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 93-98, 2016.
[Dauphin et al., 2016] Yann N Dauphin, Angela Fan, Michael Auli, and David Grangier. Language modeling with gated convolutional networks. arXiv preprint arXiv:1612.08083, 2016.
[Gehring et al., 2017] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122, 2017.
[Graff and Cieri, 2003] David Graff and C Cieri. English Gigaword corpus. Linguistic Data Consortium, 2003.
[Gu et al., 2016] Jiatao Gu, Zhengdong Lu, Hang Li, and Victor OK Li. Incorporating copying mechanism in sequence-to-sequence learning. arXiv preprint arXiv:1603.06393, 2016.
[Hochreiter and Schmidhuber, 1997] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.
[Hu et al., 2015] Baotian Hu, Qingcai Chen, and Fangze Zhu. LCSTS: A large scale Chinese short text summarization dataset. arXiv preprint arXiv:1506.05865, 2015.
[Kraaij et al., 2002] Wessel Kraaij, Martijn Spitters, and Anette Hulth. Headline extraction based on a combination of uni- and multidocument summarization techniques. In Proceedings of the ACL Workshop on Automatic Summarization/Document Understanding Conference (DUC 2002). ACL, 2002.
[Lin, 2004] Chin-Yew Lin. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, volume 8. Barcelona, Spain, 2004.
[Nallapati et al., 2016a] Ramesh Nallapati, Bing Xiang, and Bowen Zhou. Sequence-to-sequence RNNs for text summarization. 2016.
[Nallapati et al., 2016b] Ramesh Nallapati, Bowen Zhou, Caglar Gulcehre, Bing Xiang, et al. Abstractive text summarization using sequence-to-sequence RNNs and beyond. arXiv preprint arXiv:1602.06023, 2016.
[Neto et al., 2002] Joel Neto, Alex Freitas, and Celso Kaestner. Automatic text summarization using a machine learning approach. Advances in Artificial Intelligence, pages 205-215, 2002.
[Over et al., 2007] Paul Over, Hoa Dang, and Donna Harman. DUC in context. Information Processing & Management, 43(6):1506-1520, 2007.
[Paszke et al., 2017] Adam Paszke, Sam Gross, and Soumith Chintala. PyTorch, 2017.
[Paulus et al., 2017] Romain Paulus, Caiming Xiong, and Richard Socher. A deep reinforced model for abstractive summarization. CoRR, abs/1705.04304, 2017.
[Ranzato et al., 2015] Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732, 2015.
[Rennie et al., 2016] Steven J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. Self-critical sequence training for image captioning. arXiv preprint arXiv:1612.00563, 2016.
[Rush et al., 2015] Alexander M Rush, Sumit Chopra, and Jason Weston. A neural attention model for abstractive sentence summarization. arXiv preprint arXiv:1509.00685, 2015.
[Shen et al., 2016] Shiqi Shen, Yu Zhao, Zhiyuan Liu, Maosong Sun, et al. Neural headline generation with sentence-wise optimization. arXiv preprint arXiv:1604.01904, 2016.
[Sutskever et al., 2013] Ilya Sutskever, James Martens, George Dahl, and Geoffrey Hinton. On the importance of initialization and momentum in deep learning. In International Conference on Machine Learning, pages 1139-1147, 2013.
[Sutskever et al., 2014] Ilya Sutskever, Oriol Vinyals, and Quoc V Le. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104-3112, 2014.
[Williams and Zipser, 1989] R. J. Williams and D. Zipser. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280, June 1989.
[Xing et al., 2017] Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. Topic aware neural response generation. In AAAI, pages 3351-3357, 2017.
[Zhou et al., 2017] Qingyu Zhou, Nan Yang, Furu Wei, and Ming Zhou. Selective encoding for abstractive sentence summarization. arXiv preprint arXiv:1704.07073, 2017.
| [] |
[
"KazakhTTS2: Extending the Open-Source Kazakh TTS Corpus With More Data, Speakers, and Topics",
"KazakhTTS2: Extending the Open-Source Kazakh TTS Corpus With More Data, Speakers, and Topics"
] | [
"Saida Mussakhojayeva saida.mussakhojayeva@nu.edu.kz \nInstitute of Smart Systems and Artificial Intelligence (ISSAI)\nNazarbayev University\nNur-SultanKazakhstan\n",
"Yerbolat Khassanov yerbolat.khassanov@nu.edu.kz \nInstitute of Smart Systems and Artificial Intelligence (ISSAI)\nNazarbayev University\nNur-SultanKazakhstan\n",
"Huseyin Atakan Varol \nInstitute of Smart Systems and Artificial Intelligence (ISSAI)\nNazarbayev University\nNur-SultanKazakhstan\n"
] | [
"Institute of Smart Systems and Artificial Intelligence (ISSAI)\nNazarbayev University\nNur-SultanKazakhstan",
"Institute of Smart Systems and Artificial Intelligence (ISSAI)\nNazarbayev University\nNur-SultanKazakhstan",
"Institute of Smart Systems and Artificial Intelligence (ISSAI)\nNazarbayev University\nNur-SultanKazakhstan"
] | [
"Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)"
] | We present an expanded version of our previously released Kazakh text-to-speech (KazakhTTS) synthesis corpus. In the new KazakhTTS2 corpus, the overall size has increased from 93 hours to 271 hours, the number of speakers has risen from two to five (three females and two males), and the topic coverage has been diversified with the help of new sources, including a book and Wikipedia articles. This corpus is necessary for building high-quality TTS systems for Kazakh, a Central Asian agglutinative language from the Turkic family, which presents several linguistic challenges. We describe the corpus construction process and provide the details of the training and evaluation procedures for the TTS system. Our experimental results indicate that the constructed corpus is sufficient to build robust TTS models for real-world applications, with a subjective mean opinion score ranging from 3.6 to 4.2 for all the five speakers. We believe that our corpus will facilitate speech and language research for Kazakh and other Turkic languages, which are widely considered to be low-resource due to the limited availability of free linguistic data. The constructed corpus, code, and pretrained models are publicly available in our GitHub repository. | null | [
"https://www.aclanthology.org/2022.lrec-1.578.pdf"
] | 246,016,049 | 2201.05771 | 8da4485b568b1b5757160c7e1829aec9939ef603 |
KazakhTTS2: Extending the Open-Source Kazakh TTS Corpus With More Data, Speakers, and Topics
June 2022
Saida Mussakhojayeva saida.mussakhojayeva@nu.edu.kz
Institute of Smart Systems and Artificial Intelligence (ISSAI)
Nazarbayev University
Nur-SultanKazakhstan
Yerbolat Khassanov yerbolat.khassanov@nu.edu.kz
Institute of Smart Systems and Artificial Intelligence (ISSAI)
Nazarbayev University
Nur-SultanKazakhstan
Huseyin Atakan Varol
Institute of Smart Systems and Artificial Intelligence (ISSAI)
Nazarbayev University
Nur-SultanKazakhstan
KazakhTTS2: Extending the Open-Source Kazakh TTS Corpus With More Data, Speakers, and Topics
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022)
Proceedings of the 13th Conference on Language Resources and Evaluation (LREC 2022), Marseille, June 2022. European Language Resources Association (ELRA), licensed under CC-BY-NC-4.0. Page 5404.
Keywords: text-to-speech, TTS, speech synthesis, speech corpus, open-source, Kazakh, Turkic, agglutinative
We present an expanded version of our previously released Kazakh text-to-speech (KazakhTTS) synthesis corpus. In the new KazakhTTS2 corpus, the overall size has increased from 93 hours to 271 hours, the number of speakers has risen from two to five (three females and two males), and the topic coverage has been diversified with the help of new sources, including a book and Wikipedia articles. This corpus is necessary for building high-quality TTS systems for Kazakh, a Central Asian agglutinative language from the Turkic family, which presents several linguistic challenges. We describe the corpus construction process and provide the details of the training and evaluation procedures for the TTS system. Our experimental results indicate that the constructed corpus is sufficient to build robust TTS models for real-world applications, with a subjective mean opinion score ranging from 3.6 to 4.2 for all the five speakers. We believe that our corpus will facilitate speech and language research for Kazakh and other Turkic languages, which are widely considered to be low-resource due to the limited availability of free linguistic data. The constructed corpus, code, and pretrained models are publicly available in our GitHub repository.
Introduction
Text-to-speech (TTS), also known as speech synthesis, is the automatic process of converting written text into speech (Taylor, 2009), which has wide application potential and a substantial social impact, including digital assistants and improved accessibility for people with reading disabilities or speech and vision impairments, to name a few. For visually impaired people, in particular, it enables voice-controlled access to Internet-of-things devices, on-demand access to books and websites, and access to other vocalized assistive technologies. In turn, these enhance the overall quality of life, consumption of information, and access to knowledge. In addition, TTS can complement other important language and vision technologies, such as speech recognition (Tjandra et al., 2017), speech-to-speech translation (Wahlster, 2013), face-to-face translation (Prajwal et al., 2019), and visual-to-sound (Zhou et al., 2018). Considering the aforementioned benefits, TTS is undoubtedly an essential speech processing technology for any language.

In recent years, TTS research has progressed remarkably thanks to neural network-based architectures (Tan et al., 2021), regularly organized challenges (Black and Tokuda, 2005; Dunbar et al., 2019), and open-source datasets (Ito and Johnson, 2017; Zen et al., 2019; Shi et al., 2020). Especially impressive results have been achieved for commercially viable languages, such as English and Mandarin. However, there is still a lack of research into the development of TTS technologies for low-resource languages.

To address this problem in regard to Kazakh, Mussakhojayeva et al. (2021a) recently developed the first open-source Kazakh text-to-speech (KazakhTTS) corpus, which contains 93 hours of manually transcribed audio from two professional speakers (one female and one male) reading news articles. The developed corpus has generated substantial interest and has been downloaded over 200 times in less than a year by academia and industry, both from local and global organizations. This demonstrates high demand for open-source and high-quality transcribed speech data in the Kazakh language.

Motivated by this, in this paper, we present a new version of the KazakhTTS corpus called KazakhTTS2, which adds more data, speakers, and topics to our corpus. Specifically, we have increased the data size from 93 hours to 271 hours. We have added three new professional speakers (two females and one male), with over 25 hours of transcribed data for each speaker. In addition to news, we have diversified the topic coverage with a book and Wikipedia articles. Like the first version, KazakhTTS2 is freely available to both academic researchers and industry practitioners in our GitHub repository 1 .

To validate the KazakhTTS2 corpus, we built a state-of-the-art TTS system based on the Tacotron 2 (Shen et al., 2018) architecture. The constructed TTS system was evaluated using the subjective mean opinion score (MOS) measure. The obtained MOSs for all the speakers ranged from 3.6 to 4.2, which indicates the utility of the KazakhTTS2 corpus for building robust TTS systems suitable for real-world applications. We believe that our corpus will further facilitate the rapid development of TTS systems in the Kazakh language and thus serve as an enabler for the wide range of applications mentioned above.
Additionally, our corpus can be employed to bootstrap speech technologies for other similar languages from the Turkic family, for example, by means of cross-lingual transfer learning and self-supervised pretraining (Baevski et al., 2020). To sum up, our main contributions are:
• We developed a text-to-speech synthesis corpus for the Kazakh language containing five speakers (three females and two males) comprising 271 hours of carefully transcribed data from various sources (news, book, and Wikipedia). • We validated the efficacy of the corpus, by training state-of-the-art neural TTS models, which achieved a sufficient subjective MOS for most practical applications. • The KazakhTTS2 corpus, code, and pretrained models were made publicly available 1 for both commercial and academic use. The rest of this paper is organized as follows: Section 2 reviews the work on Kazakh language corpus creation. In Section 3, we briefly summarize the previous release of the corpus and explain the changes made in Kaza-khTTS2, including the corpus structure and statistics. The experimental setup and evaluation results are described in Section 4. Section 5 discusses the challenges of Kazakh speech synthesis and future research directions. Section 6 concludes this work.
Related Work
Despite its under-resourced status, Kazakh language research is an evolving field with an increasing number of recently released open-source corpora. For example, Khassanov et al. (2021) developed the first large-scale publicly available corpus for automatic speech recognition (ASR). The corpus was collected by means of crowdsourcing, with over 2,000 people contributing around 330 hours of audio recordings. Similarly, Yeshpanov et al. (2021) developed an open-source Kazakh named entity recognition dataset consisting of over 100,000 sentences annotated for 25 entity classes. Linguistic corpora development has also been observed in neighboring countries with languages similar to Kazakh, such as Uzbek (Musaev et al., 2021). Additionally, there are other large-scale projects aimed at collecting open-source corpora for various languages, including Kazakh, such as Common Voice (Ardila et al., 2020). However, all these datasets are unsuitable for building robust Kazakh TTS systems, which require a large number of high-quality audio recordings of a single speaker.

The first attempt to collect a large-scale open-source TTS dataset for the Kazakh language was made by Mussakhojayeva et al. (2021a). The collected dataset was called KazakhTTS and consisted of 93 hours of carefully transcribed audio from two professional speakers. Specifically, the speakers were assigned to read local news articles. The recorded articles were manually segmented into sentences and then aligned with the corresponding text with the help of native Kazakh transcribers. The TTS systems developed using KazakhTTS achieved an MOS above 4.0, demonstrating the high quality of the collected data. This work further extends the KazakhTTS corpus, as described in the following sections.

The other existing corpora dedicated to Kazakh TTS are either proprietary or have been collected by leveraging unsupervised and semi-supervised approaches. For example, Black (2019) extracted readings of the Bible in hundreds of languages, including Kazakh. The extracted recordings were automatically segmented and aligned with the corresponding text. Although a TTS system built using this corpus is sufficient to deploy in some use cases, its overall quality is unsatisfactory for most real-world applications. Specifically, in the evaluation experiments, the Kazakh TTS system achieved a mel-cepstral distortion (MCD) score of more than 6, which is considered low quality 2 . In another work, Khomitsevich et al. (2015) developed a Kazakh TTS system using a female voice. However, the authors did not provide any information on their corpus, such as its size, how the recordings were acquired, and how to download it. Additionally, the authors did not describe the evaluation procedures performed to assess the developed TTS system.
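For context on the mel-cepstral distortion figure quoted above, the following is a minimal sketch of the standard per-frame MCD computation between aligned mel-cepstral coefficient vectors; the input vectors are toy values.

```python
# Standard per-frame mel-cepstral distortion (in dB), excluding the 0th
# (energy) coefficient; toy coefficient vectors for illustration.
import math

def mcd(mc_ref, mc_syn):
    assert len(mc_ref) == len(mc_syn)
    sq = sum((r - s) ** 2 for r, s in zip(mc_ref[1:], mc_syn[1:]))
    return (10.0 / math.log(10)) * math.sqrt(2.0 * sq)

print(mcd([0.1, 1.2, -0.3, 0.7], [0.0, 1.0, -0.1, 0.9]))
```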
KazakhTTS2 Corpus
In this section, we describe the curation procedures for the KazakhTTS2 corpus. The KazakhTTS2 corpus collection was approved by the Institutional Research Ethics Committee of Nazarbayev University. We first briefly summarize the previous version of the corpus (i.e., KazakhTTS) and then systematically explain the changes made to extend it.
KazakhTTS
KazakhTTS is the first version of our corpus, which contains around 93 hours of transcribed audio consisting of over 42,000 sentences. The audio was recorded by two professional speakers, both of whom had had over ten years of narration experience in local television and radio stations. The speakers were assigned to read news articles covering various topics, such as sports, business, politics, and so on. The recorded audio was manually segmented into sentences, with defective segments (e.g., mispronunciation and external noise) filtered out. The correspondence between audio and text was verified by native Kazakh transcribers. The statistics for the first and second versions of the corpus are provided in Table 1.
Text Collection
We began by collecting additional news articles from four local news websites. To further broaden the topic coverage, we added a book from the public domain and Wikipedia articles. From Wikipedia, we extracted articles on science, computer technology, countries, and history. All the articles were manually extracted to eliminate defects peculiar to web crawlers and saved in the DOC format for the professional speakers' convenience (i.e., font size, line spacing, and typeface could be adjusted to the preferences of the speakers). In total, over 2,500 additional news articles, one book, and 159 Wikipedia articles were extracted.
Recording Process
To narrate the collected text, we auditioned several candidates and, as a result, hired three professional speakers (two females and one male). Each speaker participated voluntarily and was informed of the protocols for data collection and use through an informed consent form. All the hired speakers were tasked to read news articles only. In addition, we rehired the male speaker (speaker M1) from the previous corpus creation process, because of his extensive experience in narrating documentaries. He was subsequently tasked to read the book and Wikipedia articles. The speaker specifications, including gender, age, professional experience as a narrator, and recording device information, are provided in Table 2. Speakers F1 and M1 were part of KazakhTTS, whereas F2, F3, and M2 are newly hired speakers.
Due to the COVID-19 pandemic, we could not invite the speakers to our laboratory for data collection. Therefore, the speakers were allowed to record audio in their makeshift studios that they had set up to work from home. The speakers were instructed to read the texts in a quiet indoor environment with neutral tone and pace. They were also asked to follow orthoepic rules, to maintain a constant distance between the microphone and lips, to pause at commas, and to intonate sentences ending with a question mark appropriately. In total, each newly hired speaker read around 1,400 news articles, and Speaker M1 read one book entitled Abai Zholy (The Path of Abai) and 159 Wikipedia articles.
Segmentation and Alignment
For audio segmentation and audio-to-text alignment, we employed the same approach as in the KazakhTTS corpus construction. We hired five native Kazakh transcribers with different backgrounds and thorough knowledge of Kazakh grammar rules. The transcribers manually segmented the recordings into sentence-level chunks and aligned them with the corresponding text using the Praat toolkit (Boersma, 2001). All the texts were represented using a Cyrillic script consisting of 42 letters 3 and other punctuation marks, such as period, comma, hyphen, question mark, and exclamation mark. The transcribers were instructed to remove segments with mispronunciation and background noise, to trim long pauses at the beginning and end of segments, and to convert numbers and special characters (e.g., '%', '$', '+', etc) into the written form. To ensure the uniform quality of work among the transcribers, we assigned a linguist to randomly check the completed tasks and to organize regular "go through errors" sessions.
To ensure the correctness of the audio-to-text alignment process, the segmented recordings were inspected using our internal ASR system trained on the KSC dataset. Specifically, the ASR system was used to generate segment transcriptions, which were then compared to the corresponding manually annotated transcripts. Segments with a high character error rate (CER) were regarded as incorrectly transcribed, and therefore rechecked by the linguist.
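To make the verification step concrete, here is a minimal sketch of the character error rate (CER) computation via Levenshtein edit distance that such a check could use to flag segments whose ASR hypothesis diverges from the manual transcript; the threshold and example strings are illustrative, not from the paper.

```python
# CER sketch: edit distance between hypothesis and manual transcript,
# normalized by the transcript length.
def edit_distance(ref, hyp):
    """Levenshtein distance between two character sequences."""
    m, n = len(ref), len(hyp)
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # deletion
                        dp[j - 1] + 1,                      # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[n]

def cer(reference, hypothesis):
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

# Segments above an (illustrative) threshold would be rechecked.
if cer("бүгін ауа райы жақсы", "бүгін ауа рай жаксы") > 0.05:
    print("recheck segment")
```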
Corpus Structure and Statistics
The file structure of the KazakhTTS2 corpus is shown in Figure 1. Collections of audio recordings and the corresponding transcriptions are stored in a separate folder for each speaker. Additionally, for Speaker M1, we split the data from different sources into separate folders (i.e., News, Wiki, and Book). All audio recordings were downsampled to 22.05 kHz and stored at 16 bits per sample in the WAV format. All transcripts are stored as TXT files in the UTF-8 encoding. The audio and the corresponding transcript filenames are identical except for the extension. The name of each file consists of the source name, document ID, and utterance ID (i.e., source_docID_uttID). Speaker information, including gender, age, professional experience, and recording device, is provided in the speaker_metadata.txt file.
The statistics for the KazakhTTS2 corpus are given in Table 3. The overall corpus size is around 271 hours, with each speaker having at least 25 hours of transcribed audio. The total number of sentences is around 136 thousand, and the total number of tokens is over 1.7 million, with unique token types per speaker ranging from 28.5 thousand to 80.7 thousand. Figure 2 presents the histograms of the distributions of sentence duration and length (in words) for each speaker in KazakhTTS2. For all speakers, the majority of sentence durations are between 3 and 6 seconds. The majority of sentence lengths are between 11 and 15 words for female speakers, and between 6 and 10 words for male speakers.
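As an illustration of working with the released layout, the sketch below walks a speaker folder, pairs each WAV file with its same-named TXT transcript, and accumulates duration and token statistics of the kind reported above; the directory path is a placeholder.

```python
# Corpus-statistics sketch for one speaker folder (path is a placeholder).
import pathlib
import wave

def corpus_stats(speaker_dir):
    """Return total audio hours, sentence count, token count, and type count."""
    hours, sentences, tokens, types = 0.0, 0, 0, set()
    for wav_path in pathlib.Path(speaker_dir).rglob("*.wav"):
        txt_path = wav_path.with_suffix(".txt")  # transcript shares the name
        with wave.open(str(wav_path)) as w:
            hours += w.getnframes() / w.getframerate() / 3600.0
        words = txt_path.read_text(encoding="utf-8").split()
        sentences += 1
        tokens += len(words)
        types.update(words)
    return hours, sentences, tokens, len(types)

print(corpus_stats("KazakhTTS2/M1"))
```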
Speech Synthesis Experiments
In this section, we describe the experiments conducted to validate the utility of the KazakhTTS2 corpus. We first describe the experimental setup, followed by our evaluation procedures and results.
Experimental Setup
We used the ESPnet-TTS toolkit (Hayashi et al., 2020) to build end-to-end TTS models based on the Tacotron 2 (Shen et al., 2018) architecture. Specifically, we followed the training recipe of LJ Speech (Ito and Johnson, 2017). All TTS models were trained using Tesla V100 GPUs running on NVIDIA DGX 2 machines. The input for each model is a sequence of characters consisting of 42 letters and 5 symbols ('.', ',', '-', '?', '!'), and the output is a sequence of acoustic features (80-dimensional log Mel-filterbank features). To transform these acoustic features into time-domain waveform samples, we employed Parallel WaveGAN vocoders (Yamamoto et al., 2020).
In the Tacotron 2 model, the encoder module was modeled as a single bidirectional LSTM layer with 512 units (256 units in each direction), and the decoder module was modeled as a stack of two unidirectional LSTM layers with 1,024 units. The parameters were optimized using the Adam algorithm (Kingma and Ba, 2015) with an initial learning rate of 10^{-3} for 200 epochs. To mitigate overfitting, we applied a dropout of 0.5. A separate Tacotron 2 model was trained for each speaker (i.e., a single-speaker model). More details on the model specifications and training procedures are provided in our GitHub repository 1 .
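The character-level input described above can be illustrated with a small encoding sketch that maps the 42 Cyrillic letters and five punctuation symbols to integer IDs; the alphabet string and the out-of-vocabulary handling here are our assumptions, since the actual token list is defined by the training recipe.

```python
# Character-to-ID encoding sketch for the Tacotron 2 input (illustrative).
LETTERS = "аәбвгғдеёжзийкқлмнңоөпрстуұүфхһцчшщъыіьэюя"  # 42 letters
SYMBOLS = ".,-?!"
CHAR2ID = {c: i for i, c in enumerate(LETTERS + SYMBOLS + " ")}

def encode(text):
    """Map a sentence to the ID sequence consumed by the encoder."""
    return [CHAR2ID[c] for c in text.lower() if c in CHAR2ID]

print(encode("Сәлем, әлем!"))
```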
Experimental Evaluation
Figure 2: Segment duration (a, b, c, d, e) and length (f, g, h, i, j) distributions for each speaker of KazakhTTS2

To assess the quality of the synthesized recordings, we performed a subjective evaluation using the MOS measure. We evaluated only the voices developed using the newly collected data 4 (i.e., F2, F3, M1 Wikipedia and Book, and M2), as the other data had already been evaluated in the previous work (Mussakhojayeva et al., 2021a). The evaluation procedure was similar to that of our previous work, except for the number of sentences selected as a test set. Specifically, in this work, we selected 25 sentences of varying lengths from each speaker, whereas, in the previous work, 50 sentences per speaker were selected. The reason for selecting a smaller number of sentences is based on our observation that raters become exhausted or bored after around 25 sentences and quit the evaluation session. The evaluation sentences were not used to train the models. The speakers were evaluated in separate sessions, and in each session we compared the ground truth (i.e., natural speech) recordings against the Tacotron 2 synthesized recordings. The ground truth sentences were manually checked to ensure that the speaker read them well (i.e., without disfluencies, mispronunciations, or background noise).

Evaluation sessions were conducted using the instant messaging platform Telegram (Telegram Messenger Inc., 2013), as it is difficult to find native Kazakh raters on other well-known platforms, such as Amazon Mechanical Turk (Amazon.com Inc., 2005). We developed a separate evaluation Telegram bot for each speaker. The bots first presented a welcome message with instructions and then started the evaluation process. During the evaluation, the bots sent a sentence recording 5 with the associated transcript to a rater and received the corresponding evaluation score. Recordings were rated using a five-point Likert scale: 5 for excellent, 4 for good, 3 for fair, 2 for poor, and 1 for bad. The raters were instructed to assess the overall quality through headphones in a quiet environment 6 . They were allowed to listen to the recordings several times, but they were not allowed to alter the ratings once submitted. Additionally, the Telegram bots kept track of the raters' IDs to prevent them from participating in the evaluation session more than once. The evaluation recordings were presented in the same order and one at a time. However, at each time step, the bots randomly decided which version of a recording to select (i.e., ground truth or synthesized). As a result, each rater heard only one of the versions of a recording, and both systems (i.e., ground truth and Tacotron 2) were presented to all the raters. Each recording was rated at least 24 times for all the three speakers. The numbers of raters were 57, 61, 116, 89, and 53 for speakers F2, F3, M1 Wikipedia, M1 Book, and M2, respectively 7 . At the end of the evaluation, the bots thanked the raters and invited them to fill in an optional questionnaire about their age, region (where a rater grew up and learned the Kazakh language), and gender. The questionnaire results showed that the raters varied in gender and region, but not in age (most of them were under 20). Specifically, the majority of raters were from the south and west of Kazakhstan, and females outnumbered males by a factor of 1.5.
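The MOS values with 95% confidence intervals reported in Table 4 can be computed from the collected Likert ratings as in the following sketch, assuming a normal approximation; the rating list is toy data.

```python
# MOS with a normal-approximation 95% confidence interval (sketch).
import math

def mos_with_ci(ratings, z=1.96):
    n = len(ratings)
    mean = sum(ratings) / n
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)  # sample variance
    return mean, z * math.sqrt(var / n)

ratings = [5, 4, 4, 3, 5, 4, 4, 5, 3, 4]  # toy 5-point Likert scores
mean, hw = mos_with_ci(ratings)
print(f"MOS = {mean:.2f} ± {hw:.2f}")
```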
Experiment Results
The subjective evaluation results are given in Table 4. As expected, the ground truth recordings received higher MOS scores than the Tacotron 2 synthesized ones. Nevertheless, all synthesized recordings except M1 Wikipedia scored above 4.0 on the MOS measure and were close to the ground truth (i.e., 8.7%, 3.1%, 18.1%, 11.1% and 5.2% relative MOS reductions for speakers F2, F3, M1 Wikipedia, M1 Book, and M2, respectively). These results demonstrate the utility of our KazakhTTS2 dataset for TTS applications. Overall, the highest MOS score among the synthesized recordings was achieved by Speaker F1, and the lowest score was achieved by M1 Wikipedia. Presumably, the reason for the poor performance of M1 Wikipedia is the wide variety of topics and the abundance of rare scientific terms (from chemistry, biology, information technology, etc.). We believe that the performance of M1 Wikipedia can be improved by exploiting other data from Speaker M1. For example, by pre-training a model on M1 News and Book data, followed by finetuning using M1 Wikipedia. In addition, we conducted an objective evaluation in which we manually analyzed the synthesized evaluation set recordings. Specifically, we counted the various error types made by the Tacotron 2 systems built using the newly collected data. The objective evaluation results are given in Table 5, which are consistent with the subjective evaluation, with Speaker M2 having the lowest number of errors, followed by F2 and M1 Book, and then F3 and M1 Wikipedia. The most common error types among all speakers are mispronunciation, incomplete words, and word skipping. This analysis indicates that there is still room for improvement and future work should focus on eliminating these errors.
Challenges and Future Work
The Kazakh language presents several challenges to the speech synthesis task. The first one is code-switching, as the majority of Kazakh speakers are bilingual in Kazakh and Russian. While the languages are not mixed in most formal situations (e.g., news, books, law, etc.), intra-sentential code-switching often occurs in informal conversations. Moreover, intra-word code-switching is also possible (e.g., Kazakh stem words with Russian suffixes or vice versa), which may further deteriorate TTS quality.
Additionally, Kazakh has a large number of loanwords from Russian, and these words usually retain the orthographic and phonological properties of the source language. This has especially important consequences for TTS applications, as Russian differs from Kazakh in many aspects. For example, in most Kazakh words, the stress is fixed on the final syllable, while in Russian, the stress can be on any syllable of a word (Jouravlev and Lupker, 2014). Furthermore, the spelling of Kazakh words closely matches their pronunciation, which is not the case with Russian words; for example, the letter "o" is sometimes pronounced as /a/. It is important to mention that due to globalization, the number of loanwords from other languages, especially English, is also increasing, which is likely to pose an additional challenge in the near future (Mussakhojayeva et al., 2021b). Another challenge is that Kazakh is an agglutinative language, with a very large vocabulary and many characters per word. It is also susceptible to morphophonemic changes arising during word formation. One of the solutions would be to increase the size of the Kazakh speech corpus to cover more word formation variants. We believe that overcoming these challenges for the Kazakh language will be an interesting direction for future research.
Conclusion
We have presented KazakhTTS2, a large-scale opensource Kazakh text-to-speech corpus, which further extends the previous work with more data, voices, and topics. The corpus consists of five voices (three female and two male), with over 270 hours of high-quality transcribed data. The corpus is publicly available, which permits both academic and commercial use. We validated the corpus by means of crowdsourced subjective evaluation, where all voices synthesized using the Tacotron 2 model achieved an MOS of above 3.6, making it suitable for practical deployment. To enable experiment reproducibility and facilitate future research, we shared our training recipes and pretrained models in our GitHub repository 1 . Although the corpus was designed with TTS application in mind, it can be used to complement other speech processing applications, such as speech recognition and speech translation. We hope the TTS corpus construction and evaluation procedures described in this paper will contribute to the burgeoning field of Kazakh speech and language research and help advance the state-of-the-art for other low-resource languages of the Turkic family.
Figure 1: The file structure of KazakhTTS2
Table 2: The KazakhTTS2 speaker information
Table 3: The KazakhTTS2 dataset specifications
Table 4: Mean opinion score (MOS) results with 95% confidence intervals
Table 5: Manual analysis of error types made by Tacotron 2
1 https://github.com/IS2AI/Kazakh_TTS
2 http://festvox.org/cmu_wilderness/index.html
3 Note that at the time of writing, the Cyrillic alphabet is the official alphabet used for the Kazakh language, though the transition process to the Latin alphabet has already begun.
4 For Speaker M1, we trained two separate models from scratch using the Wikipedia and Book data.
5 Note that in Telegram, to send audio recordings, we had to convert them into the MP3 format.
6 Due to the crowdsourced nature of the evaluation process, we cannot guarantee that all raters used headphones and sat in a quiet environment.
7 In fact, the number of raters was higher, but we excluded the ratings of those who did not go through the session to the end, or whose ratings were suspicious (e.g., all scores are "excellent" or all scores are "bad").
Acknowledgements
The authors would like to thank Aigerim Borambayeva, Almas Mirzakhmetov, Dias Bakhtiyarov, and Rustem Yeshpanov for their help in data collection, voice evaluation, and paper revision. The authors would also like to thank the speakers for their recordings and the anonymous raters for their evaluations.
References
Amazon.com Inc. (2005). Amazon Mechanical Turk (MTurk).
Ardila, R., Branson, M., Davis, K., Kohler, M., Meyer, J., Henretty, M., Morais, R., Saunders, L., Tyers, F. M., and Weber, G. (2020). Common Voice: A massively-multilingual speech corpus. In LREC, pages 4218-4222. ELRA.
Baevski, A., Zhou, Y., Mohamed, A., and Auli, M. (2020). wav2vec 2.0: A framework for self-supervised learning of speech representations. In Advances in Neural Information Processing Systems (NIPS), volume 33, pages 12449-12460.
Black, A. W. and Tokuda, K. (2005). The Blizzard Challenge - 2005: Evaluating corpus-based speech synthesis on common datasets. In European Conference on Speech Communication and Technology (Interspeech), pages 77-80. ISCA.
Black, A. W. (2019). CMU Wilderness multilingual speech dataset. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 5971-5975.
Boersma, P. (2001). Praat, a system for doing phonetics by computer. Glot International, 5(9):341-345.
Chen, Y., Tu, T., Yeh, C., and Lee, H. (2019). End-to-end text-to-speech for low-resource languages by cross-lingual transfer learning. In Interspeech, pages 2075-2079. ISCA.
Dunbar, E., Algayres, R., Karadayi, J., Bernard, M., Benjumea, J., Cao, X., Miskic, L., Dugrain, C., Ondel, L., Black, A. W., Besacier, L., Sakti, S., and Dupoux, E. (2019). The Zero Resource Speech Challenge 2019: TTS without T. In Interspeech, pages 1088-1092. ISCA.
Hayashi, T., Yamamoto, R., Inoue, K., Yoshimura, T., Watanabe, S., Toda, T., Takeda, K., Zhang, Y., and Tan, X. (2020). ESPnet-TTS: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 7654-7658.
Ito, K. and Johnson, L. (2017). The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/.
Jouravlev, O. and Lupker, S. J. (2014). Stress consistency and stress regularity effects in Russian. Language, Cognition and Neuroscience, 29(5):605-619.
Khassanov, Y., Mussakhojayeva, S., Mirzakhmetov, A., Adiyev, A., Nurpeiissov, M., and Varol, H. A. (2021). A crowdsourced open-source Kazakh speech corpus and initial speech recognition baseline. In European Chapter of the Association for Computational Linguistics (EACL), pages 697-706, Online, April. Association for Computational Linguistics.
Khomitsevich, O., Mendelev, V., Tomashenko, N. A., Rybin, S., Medennikov, I., and Kudubayeva, S. (2015). A bilingual Kazakh-Russian system for automatic speech recognition and synthesis. In Speech and Computer (SPECOM), volume 9319 of Lecture Notes in Computer Science, pages 25-33. Springer.
Kingma, D. P. and Ba, J. (2015). Adam: A method for stochastic optimization. In Proc. International Conference on Learning Representations (ICLR).
Musaev, M., Mussakhojayeva, S., Khujayorov, I., Khassanov, Y., Ochilov, M., and Varol, H. A. (2021). USC: An open-source Uzbek speech corpus and initial speech recognition experiments. In Speech and Computer (SPECOM), volume 12997 of Lecture Notes in Computer Science, pages 437-447. Springer.
Mussakhojayeva, S., Janaliyeva, A., Mirzakhmetov, A., Khassanov, Y., and Varol, H. A. (2021a). KazakhTTS: An open-source Kazakh text-to-speech synthesis dataset. In Proc. Interspeech 2021, pages 2786-2790.
Mussakhojayeva, S., Khassanov, Y., and Varol, H. A. (2021b). A study of multilingual end-to-end speech recognition for Kazakh, Russian, and English. In Speech and Computer (SPECOM), volume 12997 of Lecture Notes in Computer Science, pages 448-459. Springer.
Prajwal, K. R., Mukhopadhyay, R., Philip, J., Jha, A., Namboodiri, V., and Jawahar, C. V. (2019). Towards automatic face-to-face translation. In Proceedings of the International Conference on Multimedia (MM), pages 1428-1436. ACM.
Shen, J., Pang, R., Weiss, R. J., Schuster, M., Jaitly, N., Yang, Z., Chen, Z., Zhang, Y., Wang, Y., Ryan, R., Saurous, R. A., Agiomyrgiannakis, Y., and Wu, Y. (2018). Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4779-4783.
Shi, Y., Bu, H., Xu, X., Zhang, S., and Li, M. (2020). AISHELL-3: A multi-speaker Mandarin TTS corpus and the baselines. CoRR, abs/2010.11567.
Tan, X., Qin, T., Soong, F., and Liu, T.-Y. (2021). A survey on neural speech synthesis. arXiv preprint arXiv:2106.15561.
Taylor, P. (2009). Text-to-Speech Synthesis. Cambridge University Press.
Telegram Messenger Inc. (2013). Telegram.
Tjandra, A., Sakti, S., and Nakamura, S. (2017). Listening while speaking: Speech chain by deep learning. In IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), pages 301-308.
Wahlster, W. (2013). Verbmobil: Foundations of Speech-to-Speech Translation. Springer Science & Business Media.
Yamamoto, R., Song, E., and Kim, J. (2020). Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram. In IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 6199-6203.
Yeshpanov, R., Khassanov, Y., and Varol, H. A. (2021). KazNERD: Kazakh named entity recognition dataset. arXiv preprint arXiv:2111.13419.
Zen, H., Dang, V., Clark, R., Zhang, Y., Weiss, R. J., Jia, Y., Chen, Z., and Wu, Y. (2019). LibriTTS: A corpus derived from LibriSpeech for text-to-speech. In Interspeech, pages 1526-1530. ISCA.
Zhou, Y., Wang, Z., Fang, C., Bui, T., and Berg, T. L. (2018). Visual to sound: Generating natural sound for videos in the wild. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
| [
"https://github.com/IS2AI/Kazakh_TTS"
] |
[
"NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task",
"NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task"
] | [
"Rui Wang wangrui@nict.go.jp \nNational Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan\n",
"Benjamin Marie bmarie@nict.go.jp \nNational Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan\n",
"Masao Utiyama mutiyama@nict.go.jp \nNational Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan\n",
"Eiichiro Sumita eiichiro.sumita@nict.go.jp \nNational Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan\n"
] | [
"National Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan",
"National Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan",
"National Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan",
"National Institute of Information and Communications Technology\n3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan"
] | [] | This paper presents the NICT's participation in the WMT18 shared parallel corpus filtering task. The organizers provided 1 billion words German-English corpus crawled from the web as part of the Paracrawl project. This corpus is too noisy to build an acceptable neural machine translation (NMT) system. Using the clean data of the WMT18 shared news translation task, we designed several features and trained a classifier to score each sentence pairs in the noisy data. Finally, we sampled 100 million and 10 million words and built corresponding NMT systems. Empirical results show that our NMT systems trained on sampled data achieve promising performance. | 10.18653/v1/w18-6489 | [
"https://arxiv.org/pdf/1809.07043v1.pdf"
] | 52,305,516 | 1809.07043 | 9253df04e209c28e25700fe32f3a2c0344cb3335 |
NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task
19 Sep 2018
Rui Wang wangrui@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan
Benjamin Marie bmarie@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan
Masao Utiyama mutiyama@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan
Eiichiro Sumita eiichiro.sumita@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Souraku-gun619-0289KyotoJapan
NICT's Corpus Filtering Systems for the WMT18 Parallel Corpus Filtering Task
19 Sep 2018
This paper presents the NICT's participation in the WMT18 shared parallel corpus filtering task. The organizers provided 1 billion words German-English corpus crawled from the web as part of the Paracrawl project. This corpus is too noisy to build an acceptable neural machine translation (NMT) system. Using the clean data of the WMT18 shared news translation task, we designed several features and trained a classifier to score each sentence pairs in the noisy data. Finally, we sampled 100 million and 10 million words and built corresponding NMT systems. Empirical results show that our NMT systems trained on sampled data achieve promising performance.
Introduction
This paper describes the corpus filtering system built for the participation of the National Institute of Information and Communications Technology (NICT) to the WMT18 shared parallel corpus filtering task.
NMT has shown large gains in quality over statistical machine translation (SMT) and has set several new benchmarks (Bojar et al., 2017). However, NMT is much more sensitive to domain (Wang et al., 2017) and noise. The reason is that NMT is a single neural network structure, which is affected by every instance during the training procedure (Wang et al., 2017). In comparison, SMT is a combination of distributed models, such as a phrase-table and a language model. Even if some instances in the phrase-table or the language model are noisy, they can only affect part of the models and would not affect the entire system as much. To the best of our knowledge, there are only a few works investigating the impact of the noise problem in NMT (Xu and Koehn, 2017; Belinkov and Bisk, 2017).
* The first two authors have equal contributions.
In this paper, we focus on the performance of NMT trained on noisy parallel data. We adopt the clean data of the WMT18 News Translation Task to train a classifier and compute informative features. Using this classifier, we score each sentence in the noisy data and sample the top-ranked sentences to construct a pseudo-clean dataset. This new pseudo-clean data is used to train a robust NMT system.
The remainder of this paper is organized as follows. In Section 2, we introduce the task and data. In Section 3, we introduce the features that we designed to score sentences in the noisy corpus. We use these features to train a classifier and the sentences in the noisy corpus are scored by this classifier. Empirical results produced with our systems are showed and analyzed in Section 4, and Section 5 concludes this paper.
Task Description
WMT18 shared parallel corpus filtering task 1 provides a very noisy 1 billion words (English word count) German-English (De-En) corpus crawled from the web as a part of the Paracrawl project. Participants are asked to provide a quality score for each sentence pair in the corpus. Computed scores are then evaluated given the performance of SMT and NMT systems trained on 100M and 10M words sampled from data using the quality scores computed by the participants. newstest2016 is used as the development data and the test data include newstest2018, iwslt2017, Acquis, EMEA, Global Voices, and KDE. 2 The statistics of the noisy data to filter are shown in Table 1. The participants may use the WMT18 News Translation Task data 3 for German-English (without the Paracrawl parallel corpus) to train components of their method. In addition, to participate in the shared task, participants have to submit a file with quality scores, one score per line, corresponding to the sentence pairs. The scores do not have to be meaningful, except that higher scores indicate better quality.
Sentence Pairs Scoring
The task requires giving a score to each sentence pair in the corpus to filter. We first performed an aggressive filtering (Section 3.1) to avoid scoring sentence pairs that are clearly too noisy to be used during the training of MT systems. We then computed informative features (Section 3.2) for each one of the remaining sentence pairs. Finally, according to the feature scores, a classifier computes a global score for each sentence pair that can be used to rank them.
Aggressive Filtering
After a quick observation of the data, we first decided to perform an aggressive filtering, since it appeared that many of the sentence pairs are obviously too noisy to be used to train MT systems. For instance, many sentences in the corpus are made of long sequences of numbers or punctuation marks. We decided to give a score of 0.0 to all the sentence pairs that contain a sentence whose tokens are, for more than 25% of them, numbers or punctuation marks. We also had to take into account the sentence length: very short source sentences are more likely to be paired with a good translation in the corpus, and our classifier may give such pairs very high scores. Then, in order to avoid a filtering that keeps mostly very short and redundant sentences, which are not very useful to train NMT systems, we also give a score of 0.0 to all sentence pairs that contain a source or a target sentence with fewer than four tokens. We also give a score of 0.0 to all the sentence pairs that contain a sentence longer than 80 tokens, since the default parameters of the SMT system used for evaluation filter out sentences longer than that.
3 http://www.statmt.org/wmt18/translation-task.html
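For concreteness, these hard rules can be sketched as follows. This is a minimal illustration of the scoring logic, not our exact implementation; the function name and the token-level test for numbers and punctuation are our own approximations.

import string

def aggressive_filter_score(src_tokens, tgt_tokens):
    # Return 0.0 for pairs violating the hard rules; None means "pass to the classifier".
    def mostly_junk(tokens):
        junk = sum(1 for t in tokens
                   if t.replace('.', '', 1).replace(',', '', 1).isdigit()
                   or all(c in string.punctuation for c in t))
        return junk / len(tokens) > 0.25  # more than 25% numbers/punctuation
    for tokens in (src_tokens, tgt_tokens):
        if len(tokens) < 4 or len(tokens) > 80:  # length constraints
            return 0.0
        if mostly_junk(tokens):
            return 0.0
    return None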
This aggressive filtering excluded 69% of the sentence pairs, leaving us a much reduced quantity of sentence pairs to be scored by our classifier.
Features
We scored each of the remaining sentence pairs with four NMT transformer models, trained with Marian (Junczys-Dowmunt et al., 2018) 4 , on all the parallel data provided for the shared news translation task (excluding the "paracrawl" corpus). We trained left-to-right and right-to-left models for German-to-English and English-to-German translation directions. We used these four model scores as features in our classifier.
We also trained lexical translation probabilities with Moses and used them to compute a sentence-level translation probability, for both translation directions, as proposed by Marie and Fujita (2017).
To evaluate the semantic similarity between the source and target sentence, we compute a feature based on bilingual word embeddings as follows. First, we trained monolingual word embeddings with FastText (Bojanowski et al., 2017) 5 on the monolingual English and German data provided by the WMT organizers. Then, we aligned English and German monolingual word embedding spaces in a bilingual space using the unsupervised method proposed by Artetxe et al. (2018). 6 Given the bilingual word embeddings, we computed embeddings for the source and target sentence by doing the element-wise addition of the bilingual embedding of the words they contain. Finally, we computed the cosine similarity between the embeddings of source and target sentence for each sentence pair, and used it as a feature.
Other features are computed to take into account the sentence length: the number of tokens in the source and target sentences, and the difference, and its absolute value, between them. We summarize the features that we used in Table 2.
4 https://marian-nmt.github.io/
5 We used the default parameters for skipgram, with 512 dimensions.
6 We used the implementation provided by the authors, with default parameters, at: https://github.com/artetxem/vecmap.
Table 2: Set of features used by our classifier.
Feature | Description
L2R (2) | Scores given by the left-to-right German-to-English and English-to-German NMT models
R2L (2) | Scores given by the right-to-left German-to-English and English-to-German NMT models
LEX (4) | Lexical translation probabilities, for both translation directions
WE (1) | Bilingual sentence embedding similarity
LEN (4) | Length-based features
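As an illustration of the bilingual embedding similarity feature (WE) described above, a minimal sketch follows. It assumes src_vecs and tgt_vecs are dictionaries mapping words to vectors in the shared bilingual space produced by vecmap; the function names are ours.

import numpy as np

def sentence_embedding(tokens, word_vecs, dim=512):
    # Element-wise addition of the bilingual embeddings of the words in the sentence.
    vec = np.zeros(dim)
    for t in tokens:
        if t in word_vecs:
            vec += word_vecs[t]
    return vec

def embedding_similarity(src_tokens, tgt_tokens, src_vecs, tgt_vecs):
    s = sentence_embedding(src_tokens, src_vecs)
    t = sentence_embedding(tgt_tokens, tgt_vecs)
    denom = np.linalg.norm(s) * np.linalg.norm(t)
    return float(s @ t / denom) if denom > 0 else 0.0  # cosine similarity feature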
Classifier
We chose a logistic regression classifier to compute a score for each sentence pair using the features presented in Section 3.2. We trained our classifier on Newstest2014, which we used as positive examples of good sentence pairs, and created the same number of negative examples using the following procedure. We created three types of negative examples, each accounting for one third of the number of sentences in Newstest2014 (a minimal sketch of this procedure follows the list):
• Misaligned: the target sentences are wrongly aligned to the previous or following source sentences.
• Wrong translation: some words in a sentence are replaced by random words from the vocabulary.
• Misordered words: we shuffled the words in a sentence.
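The following sketch illustrates the three corruption schemes; the fraction of words replaced in the "wrong translation" case is our own assumption, as it is not fixed above.

import random

def make_negatives(src_sents, tgt_sents, vocab, seed=0):
    random.seed(seed)
    n = len(src_sents)
    negatives = []
    for i in range(0, n, 3):  # misaligned: pair with a neighboring target sentence
        j = i + 1 if i + 1 < n else i - 1
        negatives.append((src_sents[i], tgt_sents[j]))
    for i in range(1, n, 3):  # wrong translation: swap in random vocabulary words
        tokens = tgt_sents[i].split()
        for k in random.sample(range(len(tokens)), max(1, len(tokens) // 4)):
            tokens[k] = random.choice(vocab)
        negatives.append((src_sents[i], " ".join(tokens)))
    for i in range(2, n, 3):  # misordered words: shuffle the target sentence
        tokens = tgt_sents[i].split()
        random.shuffle(tokens)
        negatives.append((src_sents[i], " ".join(tokens)))
    return negatives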
We used the same procedure to create training data with Newstest2015, and used it to tune the regularization parameter of our classifier. The classifier accuracy is 78.9% on Newstest2015.
We used the probability returned by the classifier for each sentence pair as the score to be used to perform filtering.
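Put together, a minimal, self-contained sketch of the scoring step might look as follows. The feature matrix here is random stand-in data; the 13-dimensional layout only mirrors the feature counts in Table 2.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 13))        # stand-in features: 2 L2R + 2 R2L + 4 LEX + 1 WE + 4 LEN
y_train = rng.integers(0, 2, size=1000)      # 1 = clean pair, 0 = synthetic negative

clf = LogisticRegression(C=1.0, max_iter=1000)  # C would be tuned on the Newstest2015 split
clf.fit(X_train, y_train)

X_pairs = rng.normal(size=(5, 13))
scores = clf.predict_proba(X_pairs)[:, 1]    # probability of the positive class = quality score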
NMT Systems and Results
For this task, we did not conduct experiments with a state-of-the-art NMT system, because the organizers fixed the data and systems settings for a fair comparison.
NMT Systems
For the data preprocessing, we strictly followed the data preparation (including tokenization, truecasing, and byte pair encoding) provided by the organizers. To train NMT systems, we used the provided official settings of Marian, which can be found at the WMT official website (http://www.statmt.org/wmt18/parallel-corpus-filtering-data/dev-tools.tgz) and in Appendix A. All our NMT systems were trained on four Nvidia Tesla P100 GPUs.
Our settings were the same for all of the NMT systems. For each method, we used its scores to select the top-ranked sentence pairs, up to 100M and 10M words, and trained the corresponding NMT systems. In Table 4, "Original" means the original corpus without any filtering. "Aggressive Filtering" is the method that we introduced in Section 3.1. "Hunalign" indicates the baseline corpus filtering method (Varga et al., 2007) 8 given by the organizers. "Classifier" indicates the classifier that we proposed in Section 3.3. "Classifier + LangID" indicates that we also use a language identification tool, LangID (Lui and Baldwin, 2012) 9 , to filter out the sentence pairs containing sentences that are not German or English. The results were evaluated on the development data newstest2016.
NMT Performance
From the results in Table 4, we have the following observations:
• The proposed "Aggressive Filtering" reduced 69% sentences and improved 1.5 BLEU compared to using the original corpus. This indicates that most of the noisy data can be filtered by the aggressive filter.
• The baseline "Hunalign" did not perform very well, the performance decreased to 3.6/0.03 by selecting 100/10M sentences. Especially when selecting 10M sentences, the NMT system nearly did not work.
• The proposed "Classifier" significantly improved NMT performance by more than 20 BLEU. This indicates that the proposed classifier can rank sentence by a proper order and the more useful sentences are selected.
• The "Classifier + LangID" achieved further approximately 2∼5 BLEU improve- ment. This indicates there are several sentences which are not proper languages and they can be detected by the LangID.
• For the proposed method, the systems built from 100M sentences performed much better than the ones built from 10M sentences. This indicates that filtering too many sentences will harm the NMT performance.
Training Efficiency
Besides the NMT performance, we also report the training efficiency in Table 5. The results in Table 5 show:
• The training times when using 1.6B, 584M, and 100M words were very close.
• The training time when using 10M words was much shorter than the others. Together with the performance results in Table 4, this shows that these 10M words contain most of the useful information in the entire corpus and can accelerate NMT training significantly.
Official Results
We report the official results of our submitted system "Classifier + LangID" in Table 3. In the official results, both SMT and NMT results were reported. From the results in Table 3, we have the following observations: • The NMT systems performed much better than the corresponding SMT systems. This indicates that the proposed method can help NMT overcome the noise problem.
• The systems built from 100M sentences performed much better than the ones built from 10M sentences. This is consistent with the results obtained on the development data.
• Compared with other teams, our SMT systems ranked better than our NMT systems. The reason may be that we used several features from SMT. We ranked first in the KDE SMT-10M task.
Conclusion and Future Work
In this paper, we investigated the noisy data problem in NMT. We designed a classification system to filter the noisy data for the WMT18 shared parallel corpus filtering task and built NMT systems using the selected data.
The empirical results showed that most of the sentence pairs in the corpus are noisy. By removing these sentence pairs, the training corpus can be reduced to as little as 1% of the original while training a significantly better NMT system than one trained on all the data. In our future work, we would like to investigate the impact of each type of noise and the effect of each feature used by our classifier.
In this paper, we focused on supervised classification methods. That is, we used clean data as a gold standard. In our future work, we would like to investigate this task using unsupervised methods. That is, we would only use the noisy data and let NMT itself detect noisy sentence pairs.
A Marian Settings
To train NMT systems, we used the provided settings of Marian:
--sync-sgd -T --devices 0 1 2 3 --mini-batch-fit -w 3000
--dim-vocabs 50000 50000 --layer-normalization
--dropout-rnn 0.2 --dropout-src 0.1 --dropout-trg 0.1
--learn-rate 0.0001 --after-epochs 0 --early-stopping 5
--max-length 80 --valid-freq 20000 --save-freq 20000
--disp-freq 2000 --valid-mini-batch 8
--valid-metrics cross-entropy perplexity translation
--seed 1111 --exponential-smoothing --normalize=1
--beam-size=12 --quiet-translation
Table 1: Statistics of the noisy data to filter. "#words" indicates the word count before tokenization.
Table 3: WMT official results.

Table 4: Results on the development data.
Methods | #tokens (En) | #lines | BLEU
Original | 1.6B | 104.0M | 7.4
Aggressive Filtering | 584M | 31.9M | 8.8
Hunalign | 100M | 8.7M | 3.6
Classifier | 100M | 9.1M | 26.1
Classifier + LangID | 100M | 6.7M | 31.6
Hunalign | 10M | 2.6M | 0.03
Classifier | 10M | 1.2M | 25.6
Classifier + LangID | 10M | 0.9M | 27.8

Table 5: Training efficiency.
Methods | #tokens (En) | Time
Original | 1.6B | 43 hours
Aggressive Filtering | 584M | 47 hours
Classifier + LangID | 100M | 55 hours
Classifier + LangID | 10M | 11 hours
1 http://www.statmt.org/wmt18/parallel-corpus-filteri
2 Note that, except for newstest2018, all testsets remained unknown to the participants until the submission deadline.
8 http://mokk.bme.hu/resources/hunalign/
9 https://github.com/saffsd/langid.py
Acknowledgments
This work is partially supported by the program "Promotion of Global Communications Plan: Research, Development, and Social Demonstration of Multilingual Speech Translation Technology" of MIC, Japan.
Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018. A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 789-798. Association for Computational Linguistics.
Yonatan Belinkov and Yonatan Bisk. 2017. Synthetic and natural noise both break neural machine translation. CoRR, abs/1711.02173.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Association for Computational Linguistics, 5:135-146.
Ondřej Bojar, Christian Buck, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, Antonio Jimeno Yepes, Philipp Koehn, and Julia Kreutzer. 2017. Proceedings of the Second Conference on Machine Translation. Association for Computational Linguistics.
Marcin Junczys-Dowmunt, Roman Grundkiewicz, Tomasz Dwojak, Hieu Hoang, Kenneth Heafield, Tom Neckermann, Frank Seide, Ulrich Germann, Alham Fikri Aji, Nikolay Bogoychev, André F. T. Martins, and Alexandra Birch. 2018. Marian: Fast neural machine translation in C++. In Proceedings of ACL 2018, System Demonstrations, pages 116-121. Association for Computational Linguistics.
Huda Khayrallah and Philipp Koehn. 2018. On the impact of various types of noise on neural machine translation. In Proceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 74-83. Association for Computational Linguistics.
Philipp Koehn, Huda Khayrallah, Kenneth Heafield, and Mikel Forcada. 2018. Findings of the WMT 2018 shared task on parallel corpus filtering. In Proceedings of the Third Conference on Machine Translation, Volume 2: Shared Task Papers, Brussels, Belgium. Association for Computational Linguistics.
Marco Lui and Timothy Baldwin. 2012. langid.py: An off-the-shelf language identification tool. In Proceedings of the ACL 2012 System Demonstrations, pages 25-30, Jeju Island, Korea. Association for Computational Linguistics.
Benjamin Marie and Atsushi Fujita. 2017. Efficient extraction of pseudo-parallel sentences from raw monolingual data using word embeddings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 392-398. Association for Computational Linguistics.
Dániel Varga, Péter Halácsy, András Kornai, Viktor Nagy, László Németh, and Viktor Trón. 2007. Parallel corpora for medium density languages. Amsterdam Studies in the Theory and History of Linguistic Science Series 4, 292:247.
Rui Wang, Masao Utiyama, Lemao Liu, Kehai Chen, and Eiichiro Sumita. 2017. Instance weighting for neural machine translation domain adaptation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1482-1488, Copenhagen, Denmark. Association for Computational Linguistics.
Hainan Xu and Philipp Koehn. 2017. Zipporah: a fast and scalable data cleaning system for noisy web-crawled parallel corpora. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2945-2950. Association for Computational Linguistics.
| [
"https://github.com/artetxem/vecmap.",
"https://github.com/saffsd/langid.py"
] |
[
"T 3 : Domain-Agnostic Neural Time-series Narration",
"T 3 : Domain-Agnostic Neural Time-series Narration"
] | [
"Mandar Sharma mandarsharma@vt.edu \nComputer Science Virginia Tech\nBoston Children's Hospital Harvard Medical School\nComputer Science Virginia Tech\n\n",
"John S Brownstein john.brownstein@childrens.harvard.edu \nComputer Science Virginia Tech\nBoston Children's Hospital Harvard Medical School\nComputer Science Virginia Tech\n\n",
"Naren Ramakrishnan \nComputer Science Virginia Tech\nBoston Children's Hospital Harvard Medical School\nComputer Science Virginia Tech\n\n"
] | [
"Computer Science Virginia Tech\nBoston Children's Hospital Harvard Medical School\nComputer Science Virginia Tech\n",
"Computer Science Virginia Tech\nBoston Children's Hospital Harvard Medical School\nComputer Science Virginia Tech\n",
"Computer Science Virginia Tech\nBoston Children's Hospital Harvard Medical School\nComputer Science Virginia Tech\n"
] | [] | The task of generating rich and fluent narratives that aptly describe the characteristics, trends, and anomalies of time-series data is invaluable to the sciences (geology, meteorology, epidemiology) or finance (trades, stocks, or sales and inventory). The efforts for time-series narration hitherto are domain-specific and use predefined templates that offer consistency but lead to mechanical narratives. We present T 3 (Time-series-To-Text), a domain-agnostic neural framework for time-series narration, that couples the representation of essential time-series elements in the form of a dense knowledge graph and the translation of said knowledge graph into rich and fluent narratives through the transfer learning capabilities of PLMs (Pre-trained Language Models). T 3 's design primarily addresses the challenge that lies in building a neural framework in the complete paucity of annotated training data for time-series. The design incorporates knowledge graphs as an intermediary for the representation of essential time-series elements which can be linearized for textual translation. To the best of our knowledge, T 3 is the first investigation of the use of neural strategies for timeseries narration. Through extensive evaluations, we show that T 3 can improve the lexical diversity of the generated narratives by up to 65.38% while still maintaining grammatical integrity. The practicality and deployability of T 3 is further validated through an expert review (n = 21) where 76.2% of participating experts wary of auto-generated narratives favored T 3 as a deployable system for time-series narration due to its richer narratives. Our code-base, models, and datasets, with detailed instructions for reproducibility is publicly hosted 1 . | null | [
"https://arxiv.org/pdf/2110.05633v1.pdf"
] | 238,634,417 | 2110.05633 | 6b8ef926e486079c0eed877dfc863cbc896f8914 |
T 3 : Domain-Agnostic Neural Time-series Narration
Mandar Sharma mandarsharma@vt.edu
Computer Science Virginia Tech
Boston Children's Hospital Harvard Medical School
Computer Science Virginia Tech
John S Brownstein john.brownstein@childrens.harvard.edu
Computer Science Virginia Tech
Boston Children's Hospital Harvard Medical School
Computer Science Virginia Tech
Naren Ramakrishnan
Computer Science Virginia Tech
Boston Children's Hospital Harvard Medical School
Computer Science Virginia Tech
T 3 : Domain-Agnostic Neural Time-series Narration
Index Terms—time-series analysis, time-series-to-text, data-to-text, pre-trained language models, natural language generation
The task of generating rich and fluent narratives that aptly describe the characteristics, trends, and anomalies of time-series data is invaluable to the sciences (geology, meteorology, epidemiology) or finance (trades, stocks, or sales and inventory). The efforts for time-series narration hitherto are domain-specific and use predefined templates that offer consistency but lead to mechanical narratives. We present T 3 (Time-series-To-Text), a domain-agnostic neural framework for time-series narration, that couples the representation of essential time-series elements in the form of a dense knowledge graph and the translation of said knowledge graph into rich and fluent narratives through the transfer learning capabilities of PLMs (Pre-trained Language Models). T 3 's design primarily addresses the challenge that lies in building a neural framework in the complete paucity of annotated training data for time-series. The design incorporates knowledge graphs as an intermediary for the representation of essential time-series elements which can be linearized for textual translation. To the best of our knowledge, T 3 is the first investigation of the use of neural strategies for timeseries narration. Through extensive evaluations, we show that T 3 can improve the lexical diversity of the generated narratives by up to 65.38% while still maintaining grammatical integrity. The practicality and deployability of T 3 is further validated through an expert review (n = 21) where 76.2% of participating experts wary of auto-generated narratives favored T 3 as a deployable system for time-series narration due to its richer narratives. Our code-base, models, and datasets, with detailed instructions for reproducibility is publicly hosted 1 .
I. INTRODUCTION
Real-world data is often temporal in nature. From the global outbreaks of infectious diseases to the prices of stocks, all chronologically recorded data takes the form of a time-series. Thus, its mining and analysis has been of significant interest to the scientific community [1]. Time-series narration aims to portray the discerning characteristics of a time-series obtained from such analysis through a textual narrative. The efficacy of narratives as an aid to data comprehension has been validated through studies in digital libraries [2] as well as causal networks [3]. Petre, in his advocacy for the importance of textual representations of data [4], humorously notes, "A picture is worth a thousand words -isn't it? And hence graphical representation is by its nature universally superior to text -isn't it? Why then isn't the anecdote itself expressed graphically?".
1 https://github.com/Mandar-Sharma/TCube
Time-series narration falls under the umbrella of data-to-text, a sub-field of NLG (Natural Language Generation) that aims to produce meaningful and coherent textual descriptions of non-linguistic data [5], [6]. Although data-to-text has garnered significant interest over the years, recent efforts for textual description of data have been focused on either tabular data [7]-[9] or graph data [10], [11]. The attention that these data types have garnered simultaneously highlights the two key challenges for time-series data: the first lies in the design and training of such a system in the paucity of "gold" datasets, and the second in its evaluation standards.
• End-to-end models for data-to-text generation showcase learning a direct input-output mapping from data to text [12], [13] through the use of annotated datasets such as WikiBio [12] and E2E [14] for tabular data and WebNLG and DART [15], [16] for RDF (Resource Description Framework) triples [17]. In both tabular data and RDF triples, the information to be presented in the narrative is present in the data itself and is copied to the output token -making end-to-end learning possible. In contrast, time-series requires further processing for the discovery of underlying patterns to be narrated. Due to the inherent numerical and continuous nature of time-series, one needs to consider a time-series as a whole rather than as a sum of its individual constituents. Thus, one would have to either follow the traditional modular pipeline architecture [5], where non-linguistic data is transformed into text through several intermediate steps, or formulate a novel approach suited to time-series data altogether.
• The "gold" narratives in the aforementioned datasets offer a common ground for automated evaluation of competing frameworks on the basis of word-based metrics such as BLEU [18] and its variants [19]-[21]. Thus, there are domain-familiar metrics present to showcase how one framework can perform better than another. For time-series data, without human annotations corresponding to the data, automated evaluation through said word-based metrics is not possible.
As will be discussed in the related work section, there have been several previous efforts for time-series narration. Although these pioneering efforts have laid significant groundwork for this field, the recent work in time-series narration falls short in two crucial areas. First, it is domain-specific, modeled specifically for use in fields such as meteorology, intensive care, health monitoring, and so on. Second, the proposed systems have not actualized the recent advances in language processing, relying instead on the traditional pipeline architecture. Graefe et al. [22] note that "news consumers get more pleasure out of reading human-written as opposed to computer-written content". Thus, these template-based narratives can be met with a dismissive response by their users due to their seemingly mechanical nature -we further elaborate on this in our expert review section.
To address these challenges, we present T 3 : Time-series-To-Text, which stands out from previous forays in this task through a) its domain-agnostic nature and b) its coupling of a dense knowledge graph based representation of essential time-series elements and the translation of said knowledge graph into rich and fluent narratives through the transfer-learning capabilities of large PLMs (Pre-trained Language Models) fine-tuned to this specific task -tackling the paucity of annotated data. Figure 1 highlights the diversity in the narratives generated by T 3 along with the automatic extrapolations and abbreviations deduced by the language models. The terms 'United Kingdom', 'United States', and 'Carbon Monoxide' are automatically abbreviated to 'UK', 'US', and 'CO' respectively. Similarly, the system extrapolates information such as adding 'as a measurement of air quality' when mentioning carbon monoxide values, adding 'the state of' to Kansas, and introducing the term 'trade volume' when describing export values. Our contributions are summarized as follows:
• To the best of our knowledge, T 3 is the first foray into neural time-series narration. Our rigorous evaluations across multi-domain datasets showcase that T 3 consistently produces up to 65.38% more diverse narratives with the same grammatical integrity as the existing baselines.
• Through an expert review (n = 21), we validate the performance, practicality, and linguistic superiority of T 3 . 76.2% of participating experts who were wary of auto-generated narratives favored T 3 as a deployable system as compared to existing baselines.
• We benchmark the performance of several time-series segmentation and regime-shift detection algorithms as well as prominent PLMs for outlining the best approach to a domain-agnostic time-series narration framework.
• Our code-base, pre-trained models, and the datasets used, along with a detailed notebook guide for reproducibility, are made public 1 .
II. NARRATIVES: GOOD, BAD, AND BORING
Textual narratives are swiftly becoming important components of visualization systems, either as a way to generate data insights to accompany visualizations [23] or to structure visualizations for better communication [24]. Research into what makes an effective narrative is still in its infancy and is necessarily tied to the underlying analytical task and domain. For temporal data, we identify the following crucial facets:
Level of detail: Should the narrative capture an executive summary or provide in-depth access to the underlying data?
Language diversity: Greater diversity in language prevents monotony but could detract from conveying key messages and conclusions. Lower diversity, on the other hand, supports comparison of different narratives, but leads to "glossing over" by analysts -defeating the very purpose of these narratives.
Verbalizing numbers: The verbalization of quantitative or probabilistic data (using Kent's words of estimative probability [25] or the NIC/Mercyhurst standardization) and trends is considered important in specific domains (such as intelligence analysis [26]); however, other applications argue for direct access to the original numeric information.
Human performance aspects: Understanding the characteristics of narratives that lead to improved human performance is an ongoing research problem [27]. Narratives provide increased comprehension, interest, and engagement and are known to contribute "distinct cognitive pathways of comprehension" with increased recall, ease of comprehension, and shorter reading times [28]. Conversely, the challenge of the written word implies slowness and error-prone behavior due to short-term memory limits.
In essence, successful narrative research requires a standardization of both the generation and evaluation space, and an understanding of how a narrative fits into the larger comprehension process of the analyst. As an example, a "bad" narrative for a fictional monthly sales-volume dataset, of the form "The sales numbers for January 2019 were 1500 while the sales numbers for February 2019 were 2000. Similarly, the sales numbers for ...", fails to meet all the above criteria: it is lexically repetitive, portrays no information about the data that would have been difficult to discern visually, and presents the numbers as-is with no verbalization.
III. RELATED WORK
While some of the earliest work on time-series narration can be traced back to 1994 with the Forecast Generator (FOG) [29], a framework for generating bilingual (English/French) textual summaries of weather forecasts, in the recent decades, Ehud Reiter's research group has laid significant groundwork for this domain. Their SUMTIME-MOUSAM project [30] generates short textual summaries of weather forecasts and SUMTIME-TURBINE [31] generates the same for sensor readings from a gas turbine. The design of these SUMTIME systems highlights the importance of domain expertise in relaying the information embedded in a raw time-series in a manner relevant to the end user. Following this, their SUMTIME project was extended to SUMTIME-NEONATE [32], which generates textual summaries of time-series data intended to aid medical professionals in monitoring infants in neonatal intensive care units. In 2003 [33], the authors highlight the use of Gricean maxims of cooperative communication [34] for the selection of the most crucial information to be relayed to the end user. The authors further investigate the impact of word choice in textual summarization by avoiding words specific to one idiolect and words whose meanings varied in different idiolects [35].
Kacprzyk et al. [36] propose the use of Zadeh's calculus of linguistically quantified propositions with varying t-norms to summarize time-series segmented with piece-wise linear approximations. Castillo-Ortega et al. [37] propose linguistic summarization of time-series based on the hierarchical structure of time; the multiple candidate summaries are evaluated with a multi-objective evolutionary algorithm. In the physiological domain, Banaee et al. [38] propose a system to summarize the data streams from health monitoring systems in a clinician- and patient-centric manner. Dubey et al. [39] propose the use of case-based reasoning from records of previous summaries to summarize weather reports.
Thus, there has been significant investigation into this domain. However, the research emphasis has heavily been on the identification of the information to relay to the end user rather than on relaying the information in a manner engaging to the end user -having the narratives themselves be rich and fluent. The textual output of the above mentioned systems follows the traditional modular pipeline architecture of Reiter and Dale [5]. Commercial services such as The Automatic Statistician 2 and Narrative Science 3 offer data summarization through visualization and narratives. Although their technology and code are proprietary, a perusal of offered samples 4 for time-series summarization hints towards templated generation where variables from analysis are plugged into preset templates.
IV. PRELIMINARIES
In this section we outline some necessary background in time-series segmentation, detecting shifting regimes, and PLMs, as a foundation for T 3 's architecture.
A. Segmentation
Given a time-series T of length n, a segmentation of T contains a set of distinct temporal cut-points S = {c_1, c_2, ..., c_k} corresponding to k straight lines, where k << n [40]. The segmentation approach can be limited by the number of segments k produced, or by a predefined threshold for segment-wise or cumulative error. As time-series of varying types and lengths need to be approximated with varying numbers of segments, we evaluate the following candidate segmentation algorithms based on a preset error threshold to promote domain-agnosticism.
Sliding Windows: The data points from a time-series are added to a sliding window until the maximum approximation error is met and a segment is formed. This process repeats with the window starting from the next data point (a minimal sketch follows below).
Bottom-Up: The algorithm starts with the finest approximation such that a time-series of length n is approximated by n/2 segments. The algorithm iteratively merges the lowest-cost adjacent segments until the stopping criterion is met.
SWAB: An acronym for the integration of Sliding Windows and Bottom-Up, SWAB [41] first defines an initial buffer w on which Bottom-Up is performed. The first segment from w is reported and the corresponding data points are removed from it. Remaining points from the series are read into w till the linear fit on it reaches an error threshold. This process is repeated until the buffer w reaches the end of the time-series.
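The sliding-window variant under our error criterion (per-segment SSE of a linear fit) can be sketched as follows; this is illustrative rather than our exact implementation.

import numpy as np

def sliding_window_segments(series, max_error=2.75):
    # Greedy left-to-right segmentation with a per-segment linear-fit SSE threshold.
    segments, start = [], 0
    for end in range(2, len(series) + 1):
        x = np.arange(start, end)
        coeffs = np.polyfit(x, series[start:end], 1)
        sse = float(np.sum((np.polyval(coeffs, x) - series[start:end]) ** 2))
        if sse > max_error:
            segments.append((start, end - 1))  # close the segment before the error blew up
            start = end - 1
    segments.append((start, len(series) - 1))  # tail segment
    return segments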
B. Regime-shifts
Regime shift or switching refers to changes in the state or structure of a time-series. For domain-agnosticism, we require the shift-detection algorithms to be unsupervised, universal approximators, and input length invariant. Thus, based on these criterion, we evaluate the following candidates: Rrepresentation Learning: Franceschi et. al.'s [42] unsupervised representation learning algorithm, hereby noted as "RL", learns representations of time-series elements using an encoder architecture based on causal dilated convolutions with a triplet loss arrangement that employs time-based negative sampling. Matrix Profile: The Matrix Profile [43], [44] is a multipurpose annotation (profile) of a time-series T where the i th location on the profile records the distance of the sub-sequence in T at the i th location to its nearest neighbor.
C. Pre-trained Language Models
Transfer learning in language processing has been democratized and made universal with the advent of PLMs [45], which share the multi-headed attention core architecture of transformers [46]. Transfer learning, in the context of PLMs, is essentially the adaptation of these massive language models to downstream tasks such as data-to-text, question answering, summarization, and much more via a fine-tuning process on task-specific data. Through the efforts of Thomas Wolf et al. [47], second-generation seq2seq PLMs such as Google's T5 [48] and Facebook's BART [49] and auto-regressive PLMs such as Open-AI's GPT-2 [50] and many more have been made accessible to the larger community.
The motivation behind using PLMs for this task not only stems from the fact that they lead the benchmark for a multitude of downstream language processing tasks [51] but also due to the evidence that PLMs, due to their apparent acquisition of worldly knowledge [52], in some cases refuse to generate false outputs even when the input to the system is corrupted [11]. As Open AI's GPT-3 [53] has not been released for public access at the time of publication of this paper, we have not been able to incorporate it into our experiments.
D. Decoding Strategies
The PLMs we intend to investigate-viz. Open-AI's GPT-2, Facebook's BART, and Google's T5-though differing in their architectures and training strategies, share an auto-regressive decoder. Auto-regressive language generation is based on the assumption that the probability distribution of a sequence of words can be decomposed into the product of conditional next-word distributions. If W_0 is the initial context word sequence and T is the length of the sequence to be generated, then the probability distribution can be defined as:
P(w_{1:T} | W_0) = \prod_{t=1}^{T} P(w_t | w_{1:t-1}, W_0)    (1)
Basic Sampling: This strategy is based on randomly picking a word w_t based on its conditional probability distribution w_t ~ P(w | w_{1:t-1}). Thus, the next word in the sequence is chosen based on its conditional probability of occurrence.
Top-K Sampling: In top-K sampling [54], the K words most likely to occur next in the sequence are chosen and the probability mass is redistributed among these K words. This leads to a more "human-like" text generation.
Top-p Sampling: Top-p sampling [55], also known as nucleus sampling, addresses a core issue in top-K sampling. Since top-K re-distributes the probability mass among the top K chosen words, it has the potential to break down in particularly sharp or flat distributions. If a distribution is sharp, the limit on the selection of just K words can lead to insensible text generation. On the other hand, for flat distributions, the limit prevents the generation from being diverse. Thus, instead of limiting the sampling space to K words, top-p samples from the smallest possible set of words whose cumulative probability exceeds a predefined probability p.
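A minimal NumPy sketch of the combined top-K/top-p filtering follows (with our settings k = 50 and p = 0.92 as defaults); production decoders such as HuggingFace's generate() implement the same idea, and the function names here are ours.

import numpy as np

def filter_logits(logits, top_k=50, top_p=0.92):
    # Keep only tokens within the top-k set AND inside the top-p probability mass.
    order = np.argsort(logits)[::-1]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    cumulative = np.cumsum(probs[order])
    keep = np.zeros_like(logits, dtype=bool)
    for rank, idx in enumerate(order):
        if rank < top_k and (rank == 0 or cumulative[rank - 1] < top_p):
            keep[idx] = True
    return np.where(keep, logits, -np.inf)

def sample_next(logits, rng=None):
    # Renormalize the surviving logits and sample the next-word index.
    rng = rng or np.random.default_rng()
    f = filter_logits(logits)
    p = np.exp(f - f[np.isfinite(f)].max())
    p[~np.isfinite(f)] = 0.0
    p /= p.sum()
    return int(rng.choice(len(p), p=p))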
V. T 3 FRAMEWORK
A. The Architecture
The two-stage design of T 3 , as illustrated in Figure 2, is motivated by the need to produce rich and fluent narratives of time-series data with the least possible human intervention. Subsections VI-A and VI-B present thorough experimentation that motivates the specific choices for the segmentation and regime-shift detection algorithms for T 3 , while subsections VI-C and VI-D do the same for our choice of PLMs.
Stage I: The time-series is first log-transformed to approximately conform the data to normality before information extraction. This log-transformed series is segmented into k linear segments, where the individual slopes of these k segments indicate the trends followed by the data in their respective intervals. Simultaneously, sequential data-points with similar properties are clustered together based on their learned representations. These clusters represent changing regimes in the dataset. The above time-series characteristics are encoded into an RDF-based knowledge graph. Figure 3 illustrates a sample knowledge graph (curtailed) as extracted from T 3 's first stage for the United States COVID19 time-series.
Fig. 2. The two stage T 3 framework: In Stage I, the system extracts trends, regimes, and peaks from the input time-series which is formulated into a knowledge graph. In Stage II, a PLM fine-tuned for graph-to-text generation generates the narrative from the input graph.
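To make the knowledge-graph encoding concrete, a minimal sketch of how a consolidated trend segment could be turned into RDF-style triples is shown below; the relation names and identifiers are illustrative, not the exact vocabulary used by T 3 .

def build_trend_triples(series_name, segments, slopes, dates):
    # Encode each consolidated segment as (head, relation, tail) triples.
    triples = []
    for i, ((start, end), slope) in enumerate(zip(segments, slopes)):
        seg_id = f"{series_name}_trend_{i}"
        direction = "increased" if slope > 0 else "decreased" if slope < 0 else "stayed flat"
        triples.append((seg_id, "trend", direction))
        triples.append((seg_id, "from", str(dates[start])))
        triples.append((seg_id, "to", str(dates[end])))
    return triples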
Stage II: Anterior to T 3 's execution, the PLMs are fine-tuned with both the WebNLG and DART datasets for graph-to-text translation. The knowledge graph from Stage I is thus translated into a rich and descriptive narrative by these PLMs, using sampling techniques for strategic language generation.
B. Datasets
Time-series: To promote domain-agnosticism, the datasets used for evaluating T 3 are drawn from five different fields -COVID19 5 , Direction of Trade Statistics 6 , Carbon Monoxide Pollution 7 , World Population 4 , and Climate Change 4 . Based on the amount and consistency of the data, we consider the same ten countries (United States, India, Brazil, Russia, United Kingdom, France, Spain, Italy, Turkey, and Germany) across these datasets. The CO (Carbon Monoxide) units, however, are extracted for the U.S. states with EPA state codes 1 through 10. Table I provides a brief statistical summary of these datasets.
Fine-tuning: The RDF-based datasets WebNLG v3.0 and DART v1.1 are used for fine-tuning the PLMs in T 3 . Table I briefly summarizes the statistics of these datasets, where N_x represents the number of samples for x ∈ {train, dev, test} and V, WSR, and SSR represent the vocabulary size, words per SR (Surface Realization), and sentences per SR respectively.
C. Fine-tuning, Training, and Decoding Specifications
Tokens <X> where X ∈ {H, R, T} are appended to the start of the Head (subject), Relationship (predicate), and Tail (object) entities of each RDF triple. The Adam optimizer [56] with a linearly decreasing learning rate is used to fine-tune the PLMs, with learning rates initially set to 3e-5 for T5 and BART and 5e-4 for GPT-2. For uniformity, the maximum token lengths for all PLMs are set to their default maximum (512) with a batch size of 4. For strategic decoding, based on the average length (∼100 words) and the average number of unique words (∼50) present in the generated narratives, we set k as 50. Similarly, based on popular practice, we set p as 92%.
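As a minimal sketch of the input format described above, the linearization of a triple set into the tagged sequence fed to the PLMs might look as follows; the example triples are made up, and the exact linearization order is an assumption.

def linearize(triples):
    # Flatten RDF triples into the <H>/<R>/<T>-tagged sequence fed to the PLM.
    return " ".join(f"<H> {h} <R> {r} <T> {t}" for h, r, t in triples)

triples = [("US_COVID19", "peaked_on", "2021-01-08"),
           ("US_COVID19", "overall_trend", "increased")]
model_input = linearize(triples)
# '<H> US_COVID19 <R> peaked_on <T> 2021-01-08 <H> US_COVID19 <R> overall_trend <T> increased'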
A. Trend Detection
In order to evaluate our candidate segmentation algorithms, we must first determine the right value of allowable maximum linear-fit error appropriate for our datasets. The evaluation of the total SSE (Sum of Squared Errors) of residuals vs k (the number of segments produced), as a function of the maximum linear-fit error, hints at 2.75 as a potential error "sweet spot". The figure below presents this analysis for the U.S. COVID19 dataset -the left marker indicates the tradeoff point between the total SSE and k while the right marker indicates the point where both total SSE and k stabilize. Table II outlines the performance of the selected segmentation algorithms across our datasets with the maximum linear-fit threshold set to 2.75. We observe that SWAB consistently performs the best in terms of both the r^2 goodness-of-fit and SSE, making it the segmentation algorithm of choice for T 3 .
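The threshold sweep behind this analysis can be sketched as follows; any of the candidate segmenters (e.g., the sliding-window sketch shown earlier) can be passed in as segmenter, and the function name is ours.

import numpy as np

def sweep_error_thresholds(series, thresholds, segmenter):
    # Record (max_error, total SSE, number of segments) for each candidate threshold.
    results = []
    for max_error in thresholds:
        segs = segmenter(series, max_error)
        sse = 0.0
        for start, end in segs:
            if end == start:
                continue  # a single point fits exactly
            x = np.arange(start, end + 1)
            coeffs = np.polyfit(x, series[start:end + 1], 1)
            sse += float(np.sum((np.polyval(coeffs, x) - series[start:end + 1]) ** 2))
        results.append((max_error, sse, len(segs)))
    return results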
Out of the k segments produced for each time-series, if the slope of the (i−1)-th segment follows that of the i-th segment, we rearrange them as a single segment for continuity. This is illustrated in the figure above for the U.S. COVID19 time-series, where the original k segments are consolidated, based on their slopes, into 6 long segments (k > 6) that indicate the core trends followed by the time-series over significant time-spans.
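A minimal sketch of this consolidation step is shown below; recomputing the slope of each merged segment is omitted for brevity, and treating a zero slope as negative is a simplification of ours.

def consolidate(segments, slopes):
    # Merge adjacent segments whose slopes share the same sign (same trend direction).
    merged_segs, last_slope = [segments[0]], slopes[0]
    for seg, slope in zip(segments[1:], slopes[1:]):
        if (slope > 0) == (last_slope > 0):
            merged_segs[-1] = (merged_segs[-1][0], seg[1])  # extend the previous segment
        else:
            merged_segs.append(seg)
        last_slope = slope
    return merged_segs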
B. Regime Shift Detection
For the evaluation of our candidate regime-shift detection algorithms, we force these algorithms to produce a known number of regime shifts validated through visual interpretation of the data -regime shifts in COVID19 cases should correspond to waves of outbreak, as illustrated in the figure below, whereas those in DOTS Exports should correlate to inflation or deflation in the economy. Table III outlines the performance of Matrix Profile and RL across our datasets based on the standard deviations (σ) of the formed regimes. Our evaluations lead us to conclude that the performance of Matrix Profile and RL is on par and varies based on the individual dataset. In our implementation, an RL instance trained on the COVID19 dataset showcases high cross-domain transferability when applied to other series in our catalog. The Matrix Profile, however, requires a window-size definition prior to its execution, which varies based on the input time-series. The tendency of RL to favor automation makes it the regime-shift detection algorithm of choice for T 3 .
C. Graph-to-text Translation
The task of translating a graph to text is predominantly a machine translation task. Thus, the PLM architectures of preference are seq2seq models such as Google's T5 and Facebook's BART. However, for completeness, we also include an auto-regressive model -OpenAI's GPT-2 -in our evaluation. The performance of these models is benchmarked across three dataset configurations: WebNLG, DART, and a combination of the two.
D. T 3 Evaluation
To evaluate the performance of T 3 , we measure it with respect to our baseline -a templated generation framework. The templated generation takes in the data from Stage I of T 3 ; however, instead of passing it to Stage II, it feeds it to a template designed for the desired domain. The narratives generated by these systems are evaluated based on three core dimensions of linguistic quality:
• The Flesch RE (Reading Ease) score [57] measures the readability of a text based on the average length of its sentences and the average number of syllables of its words 8 . Ranging from 0 to 100, increasing scores represent increasing levels of readability.
• The TTR (Type Token Ratio) 9 is a measure of text diversity, where tokens refers to the total number of words in a given text while types refers to the number of non-repeating unique words. Simply calculated as TTR = Types / Tokens, the closer the TTR is to 1, the more lexical variety there is in a given text.
• The G (grammar score) 10 represents the grammatical integrity of the text. Similar to TTR, the closer G is to 1, the better the grammar of the text:
G = 1 − (number of grammatical errors in a sentence) / (number of words in a sentence)
8 https://pypi.org/project/textstat/
9 https://pypi.org/project/lexical-diversity/
10 https://pypi.org/project/language-tool-python/
For each of our five datasets described in Section V-B, the RE score, TTR, and grammar score (G) are averaged out over the aforementioned ten countries/states. The performance of T 3 is evaluated with three decoding strategies: T 3 with PLM_top-K represents the use of the top-K sampling scheme, T 3 with PLM_top-p represents the use of the top-p sampling scheme, and T 3 with simply PLM refers to the default sampling scheme where words are sampled from the base conditional probability distribution without the use of top-K or top-p strategies. Table V illustrates the comparative performance of T 3 against templated generation. From this, we make four key observations:
1) T 3 significantly outperforms templated generation in lexical diversity. The highest increase in lexical diversity was observed in the COVID19 dataset, where T 3 increases the TTR by 65.38%, while the lowest observed increase was in the DOTS Exports dataset, where T 3 increases the TTR by 13.33%.
2) T 3 remains closely competitive with templated generation in maintaining grammatical integrity. As templated generation uses pre-defined sentence planning, its grammar is expected to be perfect (G = 1). While T 3 achieves perfect grammatical integrity in the DOTS Exports, U.S. CO Pollution, and World Population datasets, the highest observed loss in grammatical integrity was 7.9% in the Global Temperature dataset.
3) T 3 consistently outperforms templated generation in terms of readability, although not significantly. We attribute this to the distinct sentences formed when each element of the knowledge graph is translated to text.
4) In terms of PLM selection, we observe that T5 tends to lean more towards grammatical integrity while BART tends to produce more linguistically diverse text. Similar observations are made for the sampling strategies: top-p sampling leads to more grammatically consistent texts while top-K sampling promotes linguistic diversity.
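A minimal sketch of how the three metrics above could be computed with the off-the-shelf packages footnoted earlier follows; computing G over the whole narrative rather than per sentence is a simplification of ours.

import textstat                 # pip install textstat
import language_tool_python     # pip install language-tool-python

def evaluate_narrative(text):
    tokens = text.lower().split()
    ttr = len(set(tokens)) / len(tokens)           # Type Token Ratio
    re_score = textstat.flesch_reading_ease(text)  # Flesch Reading Ease
    tool = language_tool_python.LanguageTool('en-US')
    g = 1 - len(tool.check(text)) / len(tokens)    # grammar score
    return re_score, ttr, g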
VII. EXPERT REVIEW
We conduct an expert review (n = 21) [58] to validate the practicality of T^3; the review simultaneously serves as a human evaluation of T^3's narratives. Of the recruited experts, 85.7% had expertise in data science, 76.2% in data visualization, and 66.7% in NLP. When asked to rate their trust in machine-generated narratives on a 1-to-5 Likert scale, the experts' responses resembled a right-skewed bell curve, with 42.9% of the experts choosing a rating of 3 (neither complete trust nor complete distrust in machine-generated narratives). In agreement with [22], 61.9% of the recruited experts acknowledged being dismissive of machine-generated narratives, while the remainder claimed to treat machine-generated and human-generated narratives equally. Each expert was presented with 2 time-series datasets, where each time series was accompanied by 4 narratives: a baseline templated narrative, 2 narratives randomly sampled from T^3, and finally, a sub-par T^3 narrative (generated by repeatedly sampling from T^3 until a sub-par narrative was produced). For each of these narratives, the experts were asked to rate its coherence, linguistic diversity, grammatical integrity, and data fidelity (does the model tend to hallucinate?) on a 1-to-5 Likert scale. Figure 4 presents an overview of the findings: T^3 and templated generation were rated comparably in terms of coherence, grammatical integrity, and data fidelity; however, T^3 was rated considerably higher in linguistic diversity, in alignment with our experimental findings. In their concluding remarks, 76.2% of the experts chose T^3 over templated narratives for deployable systems. The remaining 23.8%, who chose templated narratives, cited the need for mission-critical data fidelity.
VIII. CONCLUSION AND FUTURE WORK
We have presented T^3, a domain-agnostic neural framework for time-series narration. Through our experiments, we outline a strategy forward for universal time-series narration. There remain numerous avenues to pursue in augmenting the space of time-series narration: from the analysis of time-series data to the realization of natural language summaries, work in each of these spaces will bring us closer to better data-to-text systems. With a dataset of time-series and narrative pairs, a promising direction for future exploration lies in learning direct mappings from numbers to text, extending beyond just time series.
IX. ACKNOWLEDGEMENTS
This work was partially supported by DARPA (Defense Advanced Research Projects Agency) under contract number FA8650-17-C-7720. The views, opinions and/or findings expressed in this publication are solely those of the author(s).
Fig. 1. Sample narratives generated by T^3 for the United Kingdom COVID19, Kansas CO pollution, and United States merchandise exports datasets.

Fig. 3. Sample T^3 knowledge graph (curtailed) for U.S. COVID19 cases.

Fig. 4. Histogram of Likert ratings based on narrative type.
TABLE I: Dataset statistics.

Time-series datasets:
  Dataset                   | N    | µ      | σ
  COVID19 Cases             | 351  | 1.75e4 | 1.95e4
  DOTS Exports (MM)         | 254  | 1.35e4 | 6.27e3
  U.S. CO Pollution (Units) | 4722 | 0.39   | 0.22
  World Population          | 22   | 8.02e7 | 4.82e7
  Global Temperature (°C)   | 3166 | 8.15   | 6.91

Graph-to-text datasets:
  Dataset | N_train | N_dev | N_test | V     | WSR  | SSR
  WebNLG  | 35426   | 4464  | 5150   | 8000  | 22.5 | 1.4
  DART    | 62659   | 6980  | 12551  | 33200 | 21.6 | 1.5
VI. EXPERIMENTS

Through our experimentation, we seek to address three core questions regarding the design of, and need for, a domain-agnostic time-series narration framework:
• For the design of a domain-agnostic narration framework, how do we choose among the prominent time-series analysis tools at our disposal? (Sections A and B)
• How do state-of-the-art language models stack up against each other for the task of translating knowledge graphs to natural language? (Section C)
• Does T^3 deliver richer and more diverse narratives as compared to traditional approaches? Does T^3 hallucinate? Would domain experts find this favorable and/or practical? (Sections D and VII)
TABLE II: Comparison of time-series segmentation algorithms based on total SSE of residuals and r^2 fit across datasets.

  Algorithm      | COVID19 Cases | DOTS Exports | U.S. CO Pollution | World Population | Global Temperature
                 | SSE      r^2  | SSE    r^2   | SSE     r^2       | SSE    r^2       | SSE      r^2
  Sliding Window | 300.61   0.12 | 6.91   0.08  | 73.91   0.15      | 0.92   0.61      | 838.35   0.16
  Bottom-Up      | 27.07    0.13 | 4.98   0.07  | 67.53   0.16      | 0.75   0.64      | 67.36    0.16
  SWAB           | 27.16    0.14 | 4.90   0.08  | 67.11   0.16      | 0.11   0.37      | 65.21    0.17
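As a concrete illustration of the bottom-up algorithm compared in Table II, the sketch below merges adjacent segments by the SSE of a least-squares line fit (after Keogh et al. [41]); the stopping criterion (a target number of segments) and the merge-cost definition (SSE of the merged segment) are simplifying assumptions of this sketch.

```python
# Minimal sketch of bottom-up piecewise-linear time-series segmentation.
import numpy as np

def fit_sse(y: np.ndarray) -> float:
    """SSE of the residuals of a least-squares line fit to one segment."""
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, deg=1)
    return float(np.sum((y - (slope * x + intercept)) ** 2))

def bottom_up_segment(y: np.ndarray, n_segments: int):
    # Start from the finest segmentation (length-2 pieces) ...
    bounds = list(range(0, len(y) - 1, 2)) + [len(y)]
    segments = [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]
    # ... and repeatedly merge the adjacent pair whose merged fit is cheapest.
    while len(segments) > n_segments:
        costs = [fit_sse(y[segments[i][0]:segments[i + 1][1]])
                 for i in range(len(segments) - 1)]
        i = int(np.argmin(costs))
        segments[i:i + 2] = [(segments[i][0], segments[i + 1][1])]
    return segments  # list of (start, end) index pairs
```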
TABLE III: Comparison of regime-shift detection algorithms based on σ.

  Dataset            | Matrix Profile (σ) | RL (σ)
  COVID19 Cases      | 7.29               | 9.43
  DOTS Exports       | 8.68               | 8.98
  U.S. CO Pollution  | 2.35               | 2.41
  World Population   | -                  | 15.26
  Global Temperature | 2.20               | 2.19
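For intuition on the matrix-profile column of Table III, below is a naive O(n^2) matrix profile: each window's z-normalized Euclidean distance to its nearest non-overlapping neighbor. Production systems would use an optimized library such as the MPA package [44]; the regime-shift scoring applied on top of the profile is omitted here.

```python
# Naive matrix profile sketch; optimized libraries (e.g., MPA [44]) should be
# preferred in practice.
import numpy as np

def znorm(w: np.ndarray) -> np.ndarray:
    std = w.std()
    return (w - w.mean()) / std if std > 0 else w - w.mean()

def matrix_profile(ts: np.ndarray, m: int) -> np.ndarray:
    n = len(ts) - m + 1
    windows = np.stack([znorm(ts[i:i + m]) for i in range(n)])
    profile = np.empty(n)
    for i in range(n):
        dists = np.linalg.norm(windows - windows[i], axis=1)
        lo, hi = max(0, i - m // 2), min(n, i + m // 2 + 1)
        dists[lo:hi] = np.inf  # exclude trivial (overlapping) matches
        profile[i] = dists.min()
    return profile
```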
TABLE IV: Evaluation of PLMs on graph-to-text translation on the WebNLG dataset, the DART dataset, and their combination. Values are BLEU / ROUGE / METEOR / chrF++.

  Model      | WebNLG                        | DART                          | WebNLG + DART
  GPT-2      | 14.2 / 4.28 / 20.22 / 37.13   | 15.56 / 5.23 / 21.79 / 37.68  | 18.65 / 7.54 / 23.61 / 39.22
  BART-Base  | 32.13 / 51.81 / 33.49 / 59.08 | 33.77 / 54.27 / 35.86 / 61.15 | 37.89 / 58.22 / 37.80 / 64.52
  BART-Large | 32.04 / 51.10 / 34.68 / 59.98 | 34.75 / 55.32 / 36.47 / 61.75 | 38.36 / 58.18 / 38.15 / 64.82
  T5-Small   | 33.94 / 56.46 / 35.4 / 61.56  | 34.52 / 55.96 / 36.33 / 61.74 | 38.52 / 59.05 / 38.21 / 65.06
  T5-Base    | 36.75 / 57.76 / 37.25 / 64.17 | 36.40 / 57.00 / 37.44 / 63.23 | 39.88 / 59.71 / 38.91 / 65.95
TABLE V: Comparison of the performance of T^3 with that of templated generation based on language evaluation metrics. Values are RE / TTR / G.

  System                | COVID19 Cases       | DOTS Exports        | U.S. CO Pollution   | World Population    | Global Temperature
  Templated Generation  | 17.79 / 0.26 / 1    | 54.73 / 0.45 / 1    | 64.34 / 0.22 / 1    | 66.28 / 0.46 / 1    | 55.24 / 0.37 / 1
  T^3 with T5           | 64.48 / 0.31 / 0.99 | 67.54 / 0.47 / 1    | 69.22 / 0.28 / 0.99 | 69.56 / 0.49 / 1    | 67.45 / 0.39 / 0.99
  T^3 with T5 (top-K)   | 65.67 / 0.38 / 0.98 | 67.57 / 0.51 / 0.98 | 64.43 / 0.33 / 0.99 | 74.19 / 0.56 / 1    | 66.06 / 0.46 / 1
  T^3 with T5 (top-p)   | 68.02 / 0.37 / 0.99 | 66.27 / 0.48 / 1    | 65.15 / 0.32 / 1    | 71.82 / 0.54 / 1    | 66.98 / 0.45 / 1
  T^3 with BART         | 70.71 / 0.42 / 0.94 | 67.30 / 0.46 / 0.97 | 68.16 / 0.33 / 0.99 | 75.04 / 0.55 / 0.99 | 63.95 / 0.42 / 0.92
  T^3 with BART (top-K) | 69.60 / 0.43 / 0.94 | 69.47 / 0.47 / 0.96 | 72.10 / 0.32 / 0.99 | 76.81 / 0.56 / 0.99 | 64.58 / 0.40 / 0.94
  T^3 with BART (top-p) | 67.53 / 0.40 / 0.94 | 68.36 / 0.47 / 0.97 | 67.35 / 0.32 / 0.99 | 76.58 / 0.57 / 0.99 | 65.46 / 0.41 / 0.93
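Table V distinguishes the default, top-K, and top-p decoding schemes. Below is a minimal sketch of the two filtering steps, following top-K sampling (Fan et al. [54]) and nucleus sampling (Holtzman et al. [55]); `logits` stands for a PLM's next-token scores.

```python
import torch

def top_k_filter(logits: torch.Tensor, k: int) -> torch.Tensor:
    # Keep only the k highest-scoring tokens; mask the rest to -inf.
    kth_best = torch.topk(logits, k).values[..., -1, None]
    return logits.masked_fill(logits < kth_best, float("-inf"))

def top_p_filter(logits: torch.Tensor, p: float) -> torch.Tensor:
    # Keep the smallest set of tokens whose cumulative probability exceeds p.
    sorted_logits, sorted_idx = torch.sort(logits, descending=True)
    probs = torch.softmax(sorted_logits, dim=-1)
    cum = probs.cumsum(dim=-1)
    remove = cum - probs > p  # probability mass already covered before token
    sorted_logits[remove] = float("-inf")
    out = torch.full_like(logits, float("-inf"))
    return out.scatter(-1, sorted_idx, sorted_logits)

# Sampling the next token from a filtered distribution:
# next_id = torch.multinomial(torch.softmax(top_p_filter(logits, 0.9), -1), 1)
```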
Table IV shows our evaluation results for these PLMs on automated word-based metrics. From this table, there are three key takeaways. First, for every model, performance improves with the third dataset configuration (the combination of WebNLG and DART). Second, the T5-Base model significantly outperforms its competitors, while GPT-2 falls short across all benchmarks. Finally, although T5-Small outperforms BART-Large, their performance is nearly on par. From these observations, T5-Base and BART-Large, trained with the third dataset configuration, are T^3's preferred language models.
Footnote URLs (2-7): https://automaticstatistician.com/ | https://narrativescience.com/ | https://automaticstatistician.com/examples/ | https://ourworldindata.org/ | https://data.imf.org/ | https://data.world/data-society/
REFERENCES

[1] T.-C. Fu, "A review on time series data mining," Engineering Applications of Artificial Intelligence, vol. 24, no. 1, pp. 164-181, 2011.
[2] S. Latif and F. Beck, "Vis author profiles: Interactive descriptions of publication records combining text and visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, pp. 152-161, 2018.
[3] A. C. et al., "Once upon a time in visualization: Understanding the use of textual narratives for causality," IEEE Transactions on Visualization and Computer Graphics, pp. 1-1, 2020.
[4] M. Petre, "Why looking isn't always seeing: readership skills and graphical programming," Communications of the ACM, vol. 38, no. 6, pp. 33-44, 1995.
[5] E. Reiter and R. Dale, Building Natural Language Generation Systems, ser. Studies in Natural Language Processing. Cambridge University Press, 2000.
[6] A. Gatt and E. Krahmer, "Survey of the state of the art in natural language generation: Core tasks, applications and evaluation," J. Artif. Int. Res., vol. 61, no. 1, pp. 65-170, Jan. 2018.
[7] T. Liu, K. Wang, L. Sha, B. Chang, and Z. Sui, "Table-to-text generation by structure-aware seq2seq learning," in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI Press, 2018, pp. 4881-4888.
[8] R. Puduppully, L. Dong, and M. Lapata, "Data-to-text generation with content selection and planning," in The Thirty-Third AAAI Conference on Artificial Intelligence. AAAI Press, 2019, pp. 6908-6915.
[9] C. Rebuffel, L. Soulier, G. Scoutheeten, and P. Gallinari, "A hierarchical model for data-to-text generation," in Advances in Information Retrieval. Cham: Springer International Publishing, 2020, pp. 65-80.
[10] Y. Z. et al., "Triple-to-text: Converting RDF triples into high-quality natural languages via optimizing an inverse KL divergence," in Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 2019, pp. 455-464.
[11] L. F. R. Ribeiro, M. Schmitt, H. Schutze, and I. Gurevych, "Investigating pretrained language models for graph-to-text generation," ArXiv, vol. abs/2007.08426, 2020.
[12] R. Lebret, D. Grangier, and M. Auli, "Neural text generation from structured data with application to the biography domain," in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas: Association for Computational Linguistics, Nov. 2016, pp. 1203-1213.
[13] S. Gehrmann, F. Dai, H. Elder, and A. Rush, "End-to-end content and plan selection for data-to-text generation," in Proceedings of the 11th International Conference on Natural Language Generation. Tilburg University, The Netherlands: Association for Computational Linguistics, Nov. 2018, pp. 46-56.
[14] J. Novikova, O. Dušek, and V. Rieser, "The E2E dataset: New challenges for end-to-end generation," in Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue. Association for Computational Linguistics, 2017, pp. 201-206.
[15] C. Gardent, A. Shimorina, S. Narayan, and L. Perez-Beltrachini, "Creating training corpora for NLG micro-planners," in Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vancouver, Canada: Association for Computational Linguistics, Jul. 2017, pp. 179-188.
[16] D. R. et al., "DART: Open-domain structured data record to text generation," ArXiv, vol. abs/2007.02871, 2020.
[17] G. Klyne and J. Carroll, "Resource description framework (RDF): Concepts and abstract syntax," 2003.
[18] K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, "BLEU: a method for automatic evaluation of machine translation," in Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia, Pennsylvania, USA: Association for Computational Linguistics, Jul. 2002, pp. 311-318.
[19] C.-Y. Lin, "ROUGE: A package for automatic evaluation of summaries," in Text Summarization Branches Out. Barcelona, Spain: Association for Computational Linguistics, Jul. 2004, pp. 74-81.
[20] A. Lavie and A. Agarwal, "METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments," in Proceedings of the Second Workshop on Statistical Machine Translation, 2007.
[21] M. Popović, "chrF: character n-gram F-score for automatic MT evaluation," in Proceedings of the Tenth Workshop on Statistical Machine Translation. Lisbon, Portugal: Association for Computational Linguistics, Sep. 2015, pp. 392-395.
[22] A. Graefe, M. Haim, B. Haarmann, and H.-B. Brosius, "Readers' perception of computer-generated news: Credibility, expertise, and readability," Journalism, vol. 19, no. 5, pp. 595-610, 2018.
[23] A. Srinivasan, S. M. Drucker, A. Endert, and J. Stasko, "Augmenting visualizations with interactive data facts to facilitate interpretation and communication," 2018.
[24] A. Srinivasan, H. Park, A. Endert, and R. C. Basole, "Graphiti: Interactive specification of attribute-based edges for network modeling and visualization," vol. 24, no. 1, pp. 226-235, 2017.
[25] S. Kent, Words of Estimative Probability, 1964.
[26] R. J. Heuer Jr, "Analysis of competing hypotheses," Psychology of Intelligence Analysis, pp. 95-110, 1999.
[27] R. Metoyer, Q. Zhi, B. Janczuk, and W. Scheirer, "Coupling story to visualization: Using textual analysis as a bridge between data and interpretation," in Proceedings of the ACM Conference on Intelligent User Interfaces, 2018, pp. 503-507.
[28] M. F. Dahlstrom, "Using narratives and storytelling to communicate science with nonexpert audiences," vol. 111, no. Supplement 4. National Acad Sciences, 2014, pp. 13614-13620.
[29] E. Goldberg, N. Driedger, and R. Kittredge, "Using natural-language processing to produce weather forecasts," IEEE Expert, vol. 9, pp. 45-53, 1994.
[30] S. G. Sripada, E. Reiter, J. Hunter, J. Yu, and I. P. Davy, "Modelling the task of summarising time series data using KA techniques," in Applications and Innovations in Intelligent Systems IX. Springer, 2002, pp. 183-196.
[31] J. Yu, E. Reiter, J. Hunter, and S. Sripada, "SumTime-Turbine: a knowledge-based system to communicate gas turbine time-series data," in International Conference on Industrial, Engineering and Other Applications of Applied Intelligent Systems. Springer, 2003, pp. 379-384.
[32] S. G. Sripada, E. Reiter, J. Hunter, and J. Yu, "Summarizing neonatal time series data," in 10th Conference of the European Chapter of the Association for Computational Linguistics. Budapest, Hungary: Association for Computational Linguistics, Apr. 2003.
[33] S. Sripada, E. Reiter, J. Hunter, and J. Yu, "Generating english summaries of time series data using the gricean maxims," in KDD '03, 2003.
[34] H. Grice, "Logic and conversation," Syntax and Semantics, vol. 3, pp. 41-58, 1975.
[35] E. Reiter, S. Sripada, J. Hunter, J. Yu, and I. Davy, "Choosing words in computer-generated weather forecasts," Artificial Intelligence, vol. 167, no. 1-2, pp. 137-169, 2005.
[36] J. Kacprzyk, A. Wilbik, and S. Zadrożny, "Linguistic summarization of time series using a fuzzy quantifier driven aggregation," Fuzzy Sets and Systems, vol. 159, no. 12, pp. 1485-1499, 2008.
[37] R. Castillo Ortega, N. Marín, D. Sánchez, and A. G. Tettamanzi, "Linguistic summarization of time series data using genetic algorithms," in EUSFLAT, vol. 1, no. 1. Atlantis Press, 2011, pp. 416-423.
[38] H. Banaee, M. U. Ahmed, and A. Loutfi, "A framework for automatic text generation of trends in physiological time series data," in 2013 IEEE International Conference on Systems, Man, and Cybernetics. IEEE, 2013, pp. 3876-3881.
[39] N. Dubey, S. Chakraborti, and D. Khemani, "Textual summarization of time series using case-based reasoning: a case study," in Workshop on Reasoning about Time in CBR-RATIC, 2018, pp. 164-174.
[40] N. Muralidhar et al., "Cut-n-reveal: Time series segmentations with explanations," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 11, no. 5, pp. 1-26, 2020.
[41] E. J. Keogh, S. Chu, D. Hart, and M. Pazzani, "Segmenting time series: A survey and novel approach," 2002.
[42] J. Franceschi, A. Dieuleveut, and M. Jaggi, "Unsupervised scalable representation learning for multivariate time series," in Advances in Neural Information Processing Systems, 2019, pp. 4652-4663.
[43] C.-C. Yeh et al., "Matrix profile I: all pairs similarity joins for time series: a unifying view that includes motifs, discords and shapelets," in 2016 IEEE 16th International Conference on Data Mining (ICDM). IEEE, 2016, pp. 1317-1322.
[44] A. V. Benschoten, A. Ouyang, F. Bischoff, and T. Marrs, "MPA: a novel cross-language API for time series analysis," Journal of Open Source Software, vol. 5, no. 49, p. 2179, 2020.
[45] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, "Pre-trained models for natural language processing: A survey," 2020.
[46] A. Vaswani et al., "Attention is all you need," in Advances in Neural Information Processing Systems, I. Guyon, U. von Luxburg, S. Bengio, H. M. Wallach, R. Fergus, S. V. N. Vishwanathan, and R. Garnett, Eds., 2017, pp. 5998-6008.
[47] T. Wolf et al., "Transformers: State-of-the-art natural language processing," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, 2020, pp. 38-45.
[48] C. Raffel et al., "Exploring the limits of transfer learning with a unified text-to-text transformer," Journal of Machine Learning Research, vol. 21, no. 140, pp. 1-67, 2020.
[49] M. Lewis et al., "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Jul. 2020, pp. 7871-7880.
[50] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, "Language models are unsupervised multitask learners," 2019.
[51] M. Kale and A. Rastogi, "Text-to-text pre-training for data-to-text tasks," in Proceedings of the 13th International Conference on Natural Language Generation. Dublin, Ireland: Association for Computational Linguistics, Dec. 2020, pp. 97-102.
[52] C. Wang, X. Liu, and D. Song, "Language models are open knowledge graphs," ArXiv, vol. abs/2010.11967, 2020.
[53] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," arXiv preprint arXiv:2005.14165, 2020.
[54] A. Fan, M. Lewis, and Y. Dauphin, "Hierarchical neural story generation," in Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Melbourne, Australia: Association for Computational Linguistics, Jul. 2018, pp. 889-898.
[55] A. Holtzman, J. Buys, L. Du, M. Forbes, and Y. Choi, "The curious case of neural text degeneration," in 8th International Conference on Learning Representations, Addis Ababa, Ethiopia, 2020.
[56] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, Y. Bengio and Y. LeCun, Eds., 2015.
[57] R. Flesch, "A new readability yardstick," The Journal of Applied Psychology, vol. 32, no. 3, pp. 221-233, 1948.
[58] M. Tory and T. Moller, "Evaluating visualizations: do expert reviews work?" IEEE Computer Graphics and Applications, vol. 25, no. 5, pp. 8-11, 2005.
Code: https://github.com/Mandar-Sharma/TCube
Using Interactive Feedback to Improve the Accuracy and Explainability of Question Answering Systems Post-Deployment

Zichao Li (Mila, McGill University; zichao.li@mila.quebec), Prakhar Sharma (University of California, Los Angeles), Xing Han Lu (Mila, McGill University), Jackie C. K. Cheung (Mila, McGill University), and Siva Reddy (Mila, McGill University)

Findings of the Association for Computational Linguistics: ACL 2022, May 22-27, 2022.
DOI: 10.18653/v1/2022.findings-acl.75 | arXiv: 2204.03025 | PDF: https://www.aclanthology.org/2022.findings-acl.75.pdf

Abstract. Most research on question answering focuses on the pre-deployment stage; i.e., building an accurate model for deployment. In this paper, we ask the question: Can we improve QA systems further post-deployment based on user interactions? We focus on two kinds of improvements: 1) improving the QA system's performance itself, and 2) providing the model with the ability to explain the correctness or incorrectness of an answer. We collect a retrieval-based QA dataset, FEEDBACKQA, which contains interactive feedback from users. We collect this dataset by deploying a base QA system to crowdworkers who then engage with the system and provide feedback on the quality of its answers. The feedback contains both structured ratings and unstructured natural language explanations. We train a neural model with this feedback data that can generate explanations and re-score answer candidates. We show that feedback data not only improves the accuracy of the deployed QA system but also other stronger non-deployed systems. The generated explanations also help users make informed decisions about the correctness of answers. (Project page: https://mcgill-nlp.github.io/feedbackqa/)
Introduction
Much of the recent excitement in question answering (QA) is in building high-performing models with carefully curated training datasets. Datasets like SQuAD (Rajpurkar et al., 2016), NaturalQuestions (Kwiatkowski et al., 2019) and CoQA (Reddy et al., 2019) have enabled rapid progress in this area. Most existing work focuses on the pre-deployment stage; i.e., training the best QA model before it is released to users. However, this stage is only one stage in the potential lifecycle of a QA system.
In particular, an untapped resource is the large amount of user interaction data produced after the initial deployment of the system. Gathering this data should in practice be relatively cheap, since users genuinely engage with QA systems (such as Google) for information needs and may provide feedback to improve their results. Exploiting this kind of user interaction data presents new research challenges, since it typically consists of a variety of weak signals. For example, user clicks could indicate answer usefulness (Joachims, 2002), users could give structured feedback in the form of ratings to indicate usefulness (Stiennon et al., 2020), or they could give unstructured feedback in the form of natural language explanations of why an answer is correct or incorrect. User clicks have been widely studied in the field of information retrieval (Joachims, 2002). Here we study the usefulness of interactive feedback in the form of ratings and natural language explanations.
Whilst there are different variants of QA tasks, this paper focuses primarily on retrieval-based QA (RQA; Chen et al., 2017). Given a question and a set of candidate answer passages, a model is trained to rank the correct answer passage the highest. In practice, when such a system is deployed, a user may engage with the system and provide feedback about the quality of the answers. Such feedback is called interactive feedback. Due to the lack of a dataset containing interactive feedback for RQA, we create FEEDBACKQA.
FEEDBACKQA is a large-scale English QA dataset containing interactive feedback in two forms: user ratings (structured) and natural language explanations (unstructured) about the correctness of an answer. Figure 1 shows an example from FEEDBACKQA. The dataset construction has two stages: we first train a RQA model on the questions and passages, then deploy it on a crowdsourcing platform. Next, crowdworkers engage with this system and provide interactive feedback. To make our dataset practically useful, we focus on question answering about public health agencies during the Covid-19 pandemic. The base model for FEEDBACKQA is built on 28k questions and 3k passages from various agencies. We collect 9k interactive feedback samples for the base model.
We investigate the usefulness of the feedback for improving the RQA system in terms of two aspects: answer accuracy and explainability. Specifically, we are motivated by two questions: 1) Can we improve the answer accuracy of RQA models by learning from the interactive feedback? and 2) Can we learn to generate explanations that help humans to discern correct and incorrect answers?
To address these questions, we use feedback data to train models that rerank the original answers as well as provide an explanation for the answers. Our experiments show that this approach not only improves the accuracy of the base QA model for which feedback is collected but also other strong models for which feedback data is not collected. Moreover, we conduct human evaluations to verify the usefulness of explanations and find that the generated natural language explanations help users make informed and accurate decisions on accepting or rejecting answer candidates.
Our contributions are as follows:
1. We create the first retrieval-based QA dataset containing interactive feedback.
2. We demonstrate a simple method of using the feedback data to increase the accuracy and explainability of RQA systems.
3. We show that the feedback data not only improve the deployed model but also a stronger non-deployed model.
FEEDBACKQA Dataset
Recently, there have been efforts to collect feedback data in the form of explanations for natural language understanding tasks (Camburu et al., 2018; Rajani et al., 2019, inter alia). These contain explanations only for ground-truth predictions on inputs sampled from the training data, without any user-system interaction. Instead, we collect user feedback after deploying a RQA system, thereby obtaining feedback for both correct and incorrect predictions. Table 1 presents a comprehensive comparison of FEEDBACKQA and existing natural language understanding (NLU) datasets with explanation data.
Dataset collection
In order to collect post-deployment feedback as in a real-world setting, we divide the data collection into two stages: pre-deployment (of a RQA model) and post-deployment.

[Table 1 (fragment recovered from extraction): explanation types of related datasets, including a fact-checking dataset with free-form explanations, QED (Lamm et al., 2021) for reading comprehension with structured explanations, and NExT (Wang et al., 2019) for text classification with structured explanations.]

Stage 2: Post-deployment of a QA system. Since each domain has several hundred passages (Table 2), it is hard for a crowdworker to ask questions that cover a range of topics in each source. We thus collect questions for individual passages beforehand, similar to Stage 1, and use these as interactive questions. The question and the model's top-2 predictions are shown to the user, who gives feedback for each question-answer pair. The collected feedback consists of a rating, selected from excellent, good, could be improved, and bad, and a natural language explanation elaborating on the strengths and/or weaknesses of the answer. For each QA pair, we elicit feedback from three different workers. We adopted additional strategies to ensure the quality of the feedback data, the details of which are available in Appendix B. The resulting dataset statistics are shown in Table 2. In order to test whether interactive feedback also helps in out-of-distribution settings, we did not collect feedback for one of the domains (Canada).

FEEDBACKQA analysis

Table 3 shows examples of the feedback data, including both ratings and explanations. We find that explanations typically contain review-style text indicating the quality of the answer, or statements summarizing which parts are correct and why. We therefore analyze a sample of explanations using the following schema. Review: several explanations start with a generic review such as "This directly answers the question" or "It is irrelevant to the question"; users sometimes also highlight aspects of the answer that are good or can be improved (for instance, "... could improve grammatically ..." suggests that the answer could be improved in terms of writing). Summary of useful content refers to the part of the answer that actually answers the question; Summary of irrelevant content points to information that is not useful for the answer, such as off-topic content or content addressing incorrect aspects; Summary of missing content points to information the answer fails to cover. We randomly sample 100 explanations and annotate them. Figure 2 shows the distribution of the types present in explanations for each rating label. All explanations usually contain some review-type information, whereas explanations for answers labeled excellent or acceptable predominantly indicate the parts of the answer that are useful. Explanations for answers that could be improved indicate parts that are useful, wrong, or missing, while bad answers, as expected, often receive explanations that highlight incorrect or missing parts.
Experimental Setup
FEEDBACKQA contains two types of data. One is the pre-deployment data D_pre = (Q, A+, A), where Q is a question paired with its gold-standard answer passage A+ from the domain corpus A. The other is the post-deployment feedback data D_feed = (Q, A, Y, E), where Q is a question paired with a candidate answer A ∈ A together with the corresponding feedback for that answer; the feedback consists of a rating Y and an explanation E.

  Rating label      | Explanation
  Excellent         | This answers the question directly. This answer provides information and recommendation on how people and adolescent can protect themselves when going online during the Covid-19 pandemic.
  Acceptable        | This answer, while adequate, could give more information as this is a sparse answer for a bigger question of what one can do for elderly people during the pandemic.
  Could be improved | The answer relates and answers the question, but could improve grammatically and omit the "yes"
  Could be improved | The answer is about some of the online risks but not about how to protect against them.
  Bad               | This does not answer the question. This information is about applying visa to work in critical sector. It does not provide any information on applying for Covid-19 pandemic visa event as asked in the question.

Table 3: Examples of explanations and their associated rating labels. (In the original paper, spans are color-coded by component type: generic and aspect review; summary of useful content; summary of irrelevant content; summary of missing content.)

We build two kinds of models on the pre- and post-deployment data: RQA models on the pre-deployment data, which retrieve candidate answers for a given question, and feedback-enhanced RQA models on the post-deployment data, which rate an answer for a given question and also generate an explanation for it. We use this rating to rerank the answer candidates; therefore, in our setting, a feedback-enhanced RQA model is essentially a reranker. Keeping in mind that real-world QA systems evolve quickly, we decouple the reranker from the RQA model by giving the reranker its own parameters, independent of the RQA model, and train it on the feedback data. This allows the reranker to be reused across many RQA models. We leave other ways of enhancing RQA models with feedback data for future work. Below, we describe the architectures of the RQA models and the feedback-based rerankers.
RQA Models (Pre-deployment)
We use dense passage retrievers (Karpukhin et al., 2020) to build the RQA models, where the similarity between the question embedding and the passage embedding is used to rank candidates. We use two variants of pre-trained models to obtain these embeddings (BERT and BART, the models evaluated in Section 4), and adopt the poly-encoder architecture (Humeau et al., 2020) to build question-sensitive document representations. In a poly-encoder, each passage is first represented as multiple encodings independent of the question; a simple attention between the question and passage embeddings is then used to compute a question-sensitive passage representation, which is later used to compute the relevance of the passage for a given query. Humeau et al. show that the poly-encoder architecture is superior to alternatives like the bi-encoder (Karpukhin et al., 2020) without much sacrifice in computational efficiency. Given pre-deployment training data D_pre = (Q, A+, A), the RQA model, parameterized by θ, is trained to maximize the log-likelihood of the correct answer:
$$J_\theta = \log P_\theta(A^+ \mid Q, \mathcal{A}), \qquad P_\theta(A_i \mid Q, \mathcal{A}) = \frac{\exp\big(S(Q, A_i)\big)}{\sum_{A \in \mathcal{A}} \exp\big(S(Q, A)\big)} \tag{1}$$
Here S(Q, A) denotes the dot-product similarity between the question and passage embeddings. As it is inefficient to compute the denominator over all passages during training, we adopt an in-batch negative sampling technique (Humeau et al., 2020), merging all of the A+ in the same minibatch into a set of candidates.
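The following is a minimal sketch of Eq. (1) with in-batch negatives, with the encoders abstracted away and tensor names chosen for illustration:

```python
# Minimal sketch of the retrieval objective in Eq. (1): within a minibatch,
# every gold passage serves as a negative for the other questions.
import torch
import torch.nn.functional as F

def in_batch_retrieval_loss(q_emb: torch.Tensor, a_emb: torch.Tensor) -> torch.Tensor:
    """q_emb, a_emb: [batch, dim]; row i of a_emb embeds the gold answer A+ of question i."""
    scores = q_emb @ a_emb.T               # S(Q_i, A_j) for all pairs in the batch
    targets = torch.arange(q_emb.size(0))  # gold answers sit on the diagonal
    return F.cross_entropy(scores, targets)  # = -log P_theta(A+ | Q, A)
```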
Feedback-enhanced RQA models (Post-deployment)
On the post-deployment data D_feed = (Q, A, Y, E), we train a reranker that assigns a rating to an answer and also generates an explanation. We use BART, parameterized by φ, as the base of EXPLAINRATE because it is easy to adapt to both explanation generation and rating classification. The encoder of the BART model takes as input the concatenation [Q; SEP; A], and the decoder generates an explanation E; an additional fully-connected network then predicts the rating Y given the last hidden states of the decoder. The rating is used to score QA pairs, whereas the generated explanation is passed to humans so that they can make an informed decision about accepting the answer. We also implement a variant in which the model directly produces a rating without generating an explanation.
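A minimal sketch of this architecture is given below: a BART backbone whose decoder both generates the explanation and feeds a small rating head. The checkpoint name and the use of the final decoder position for classification are assumptions of the sketch, not necessarily the paper's exact configuration.

```python
# Minimal sketch of a BART-based rate-and-explain reranker (assumptions
# noted in the text above).
import torch.nn as nn
from transformers import BartForConditionalGeneration

class ExplainRate(nn.Module):
    def __init__(self, num_ratings: int = 4):
        super().__init__()
        self.bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
        self.rating_head = nn.Linear(self.bart.config.d_model, num_ratings)

    def forward(self, input_ids, attention_mask, decoder_input_ids):
        # Encoder input: "question </s> answer"; decoder target: explanation.
        out = self.bart(input_ids=input_ids,
                        attention_mask=attention_mask,
                        decoder_input_ids=decoder_input_ids,
                        output_hidden_states=True)
        explanation_logits = out.logits              # token logits for E
        last_hidden = out.decoder_hidden_states[-1]  # [batch, tgt_len, d_model]
        rating_logits = self.rating_head(last_hidden[:, -1, :])
        return explanation_logits, rating_logits
```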
Since each candidate answer is annotated by different annotators, an answer can have multiple rating labels. To account for this, we minimize the KL-divergence between the target label distribution and the predicted distribution:
$$J_\phi = -D_{\mathrm{KL}}\big(P(Y \mid Q, A) \,\|\, P_\phi(Y \mid Q, A)\big), \qquad P(Y_i = y \mid Q_i, A_i) = \frac{C_{y,i}}{\sum_{y'} C_{y',i}} \tag{2}$$
where C_{y,i} is the count of the rating label y for the i-th feedback sample.
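A sketch of this objective, with the target distribution built from the annotator vote counts C_{y,i} (tensor shapes are illustrative):

```python
# Minimal sketch of the rating objective in Eq. (2).
import torch
import torch.nn.functional as F

def rating_kl_loss(rating_logits: torch.Tensor, label_counts: torch.Tensor) -> torch.Tensor:
    """rating_logits: [batch, 4]; label_counts[i, y] = C_{y,i} annotator votes."""
    target = label_counts / label_counts.sum(dim=-1, keepdim=True)
    log_pred = F.log_softmax(rating_logits, dim=-1)
    # F.kl_div(log_pred, target) computes KL(target || predicted).
    return F.kl_div(log_pred, target, reduction="batchmean")
```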
In order to enhance an RQA model with the reranker, we first select the top-k candidates according to the RQA model (in practice we set k = 5). The reranker then takes as input the concatenation of the question and each candidate, and produces a rating for each answer. We simply sum the scores from the RQA model and the reranker. In practice, we found that using the reranker's probability of the excellent label worked better than using the normalized expectation of the rating score (from 0 for the label bad to 3 for excellent). We thus score the candidate answers as follows:
$$S(A \mid \mathcal{A}, Q) = P_\theta(A = A^+ \mid \mathcal{A}, Q) + P_\phi(y = \mathrm{excellent} \mid A, Q) \tag{3}$$
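A sketch of this rescoring step follows; the position of the excellent class in the rating head is an assumption of the sketch.

```python
# Minimal sketch of Eq. (3): combine the retriever probability with the
# reranker's probability of the "excellent" rating.
import torch

EXCELLENT = 0  # assumed index of the "excellent" label in the rating head

def rerank(retriever_probs: torch.Tensor, rating_logits: torch.Tensor) -> torch.Tensor:
    """retriever_probs: [k] over the top-k candidates; rating_logits: [k, 4].
    Returns candidate indices sorted by the combined score S(A | A, Q)."""
    p_excellent = torch.softmax(rating_logits, dim=-1)[:, EXCELLENT]
    scores = retriever_probs + p_excellent
    return torch.argsort(scores, descending=True)
```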
Experiments and Results
We organize the experiments based on the following research questions:
• RQ1: Does feedback data improve the base RQA model's accuracy?
• RQ2: Does feedback data improve the accuracy of RQA models that are stronger than the base model?
• RQ3: Do explanations aid humans in discerning between correct and incorrect answers?
We answer these questions by comparing the RQA models with the feedback-enhanced RQA models. The implementation and hyper-parameter details of each model are included in Appendix D.
4.1 RQ1: Does feedback data improve the base RQA model?
Model details. Our base model is a BERT RQA model which we deployed to collect feedback data to train the other models (Section 3.1).
For the feedback-enhanced RQA model, we use the BART-based reranker described in Section 3.2, training one single model for all domains; we call this FEEDBACKRERANKER. We compare two variants of FEEDBACKRERANKER on the validation set: one directly predicts the rating, while the other first generates an explanation and then the rating. We found the first performs slightly better (Appendix Table 10); we conjecture that learning an explanation-based rating model from the limited feedback data is a harder problem than directly learning a rating model. Therefore, for this experiment, we only use the rating-prediction model (but note that the explanation-based rating model is already superior to the base RQA model).
To eliminate the confounding factor of the larger number of model parameters introduced by the reranker, we train another reranker, VANILLARERANKER, on the pre-deployment data and compare it against the reranker trained on the feedback data. To convert the pre-deployment data into the reranker's expected format, we label a correct answer as excellent and randomly sampled answer candidates as bad. Note that this dataset is much larger than the feedback data.
Finally, we combine the training data of FEEDBACKRERANKER and VANILLARERANKER and train a third reranker, COMBINEDRERANKER.
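The conversion of pre-deployment data into VANILLARERANKER examples described above can be sketched as follows (field names are illustrative):

```python
# Minimal sketch: gold answers become "excellent", sampled candidates "bad".
import random

def make_vanilla_examples(question: str, gold: str, corpus: list, n_neg: int = 1):
    examples = [{"question": question, "answer": gold, "rating": "excellent"}]
    negatives = random.sample([a for a in corpus if a != gold], n_neg)
    examples += [{"question": question, "answer": a, "rating": "bad"}
                 for a in negatives]
    return examples
```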
To measure retrieval accuracy, we adopt Precision@1 (P@1) as our main metric. Results. As shown in Table 4, the feedback-enhanced RQA model is significantly better than the base RQA model, by 1.84 points (statistical significance assessed following Berg-Kirkpatrick et al., 2012). Although VANILLARERANKER improves upon the base model, it is weaker than FEEDBACKRERANKER, and COMBINEDRERANKER is a much stronger model than any of the others, indicating that the learning signals present in the feedback data and the pre-deployment data are complementary. Moreover, we also see improved performance on the Canada domain, although feedback data was not collected for that domain.
From these experiments, we conclude that feedback data can improve the accuracy of the base RQA model, not only for the domains for which feedback data is available but also for unseen domains (Canada).
4.2 RQ2: Does feedback data improve the accuracy of RQA models that are stronger than the base model?
If feedback data were only useful for the base RQA model, its usefulness would be questionable, since the RQA development cycle is continuous and the base RQA model will eventually be replaced with a better one. Indeed, we find that a BART-based dense retriever is superior to the BERT RQA model: Table 9 in Appendix E shows results on the validation set indicating that the BART RQA model's overall performance is nearly 4 points better than the BERT RQA model's.
To answer RQ2, we use the same FEEDBACKRERANKER and VANILLARERANKER to rescore the BART RQA model's predictions, even though feedback data was not collected for this model. We observe in Table 5 that the resulting model outperforms the BART RQA model, indicating that the feedback data is still useful. Again, FEEDBACKRERANKER is superior to VANILLARERANKER, although the feedback data has fewer samples than the pre-deployment data, and COMBINEDRERANKER performs best.
These results suggest that the feedback data is useful not only for the base RQA model but also for other, stronger RQA models.
4.3 RQ3: Do explanations aid humans in discerning between correct and incorrect answers?
We conduct a human evaluation to investigate whether explanations are useful from the perspective of users. Unfortunately, rigorous definitions and automatic metrics of explainability remain open research problems. In this work, we simulate a real-world scenario in which the user is presented with an answer returned by the system, together with an explanation for the answer, and is asked to determine whether the answer is acceptable or not. Jacovi and Goldberg (2020) advocate utility metrics as proxies for the usefulness of explanations instead of evaluating explanations directly, since plausible explanations do not necessarily increase the utility of the resulting system. Inspired by their findings, we measure whether explanations can: 1) help users make accurate decisions when judging an answer (with respect to a ground truth), and 2) improve the agreement among users in accepting or rejecting an answer candidate. The former measures the utility of an explanation, and the latter measures whether explanations invoke the same behavioral pattern across different users, irrespective of the explanation's utility. Note that agreement and utility are not tightly coupled; for example, agreement can be higher even when the utility of an explanation is lower, if the explanation misleads end users into consistently selecting a wrong answer (González et al., 2021).

We sample 60 feedback samples from the hidden split of the feedback data D_feed = (Q, A, Y, E) for evaluation purposes. (For simplicity, we merge the answer feedback labels good and could be improved into one label called partially correct; the final set of answer labels is correct, for the original label excellent, partially correct, and incorrect, for the original label bad.) We evaluate four experimental setups on these samples, which vary in the type of explanation shown to the end users: 1) no explanation; 2) human-written explanations; 3) explanations generated by the BART model trained on the feedback data (Section 3.2); and 4) a summary of the answer candidate generated by a strong fine-tuned BART-based summarization model (https://huggingface.co/facebook/bart-large-xsum). The last setting is inspired by the observation in Section 2.2 that a large portion of explanations contain summaries of questions and answers; we investigate whether a conventional summary of an answer is as useful as an explanation. For each of these setups, two crowdworkers assign a rating label to each answer candidate indicating the quality of the answer. Each setup has its own set of workers in order to avoid information leakage across setups (this simulates the A/B testing often used by production systems).
We measure the workers' accuracy (averaged over the two workers) in determining the correctness of an answer with respect to the original annotation in FEEDBACKQA, and we compute the agreement of workers with each other using Spearman correlation. Table 6 presents the results. All explanation types improve accuracy compared to the setup with no explanations; this could be because any explanation forces the worker to think more carefully about an answer. The human-written explanations have the highest utility and also lead to the highest agreement. Both the human-written explanations and the explanations generated by the BART feedback model have more utility and higher agreement than the BART summarization model; in fact, the summarization model leads to lower agreement.

Table 7: Examples of questions and answer candidates with model-generated explanations, human-written explanations, and generated summaries (crowdworker and model text reproduced verbatim).

Question: What are the restrictions to travelers landing in the US?
Answer: ... Under current federal regulations, pilots must report all illnesses and deaths to CDC before arriving to a US destination. According to CDC disease protocols, if a sick traveler is considered to be a public health risk, CDC works with local and state health departments and international public health agencies to contact passengers and crew exposed to that sick traveler. Be sure to give the airline your current contact information when booking your ticket so you can be notified if you are exposed to a sick traveler on a flight. ...
Generated explanation: This answer is irrelevant to the question. The question is asking about restrictions on travelers landing in the US and this answer provides information on what to do if there is a sick passenger on an international or domestic flight.
Human explanation: This information does not answer the question. The question is asking for information on the restrictions to travelers landing in the US and this answer contains information on pilots reports on illnesses and deaths before arriving to a US destinations and airline notification when passengers are exposed to sick travelers on a flight.
Generated summary: The Centers for Disease Control and Prevention (CDC) is working with airlines to help prevent illness and death on flights.

Question: Should I wear gloves when I go out in public to prevent Covid-19?
Answer: No, WHO does not recommend the regular use of gloves by people in the community. The wearing of gloves may increase risks, since it can lead to self-infection or transmission to others through touching contaminated surfaces which is not followed by removal of gloves and hand hygiene. Therefore, in public places such as supermarkets, in addition to physical distancing...
Generated explanation: This answer is excellent because it unequivocally states that gloves are not recommended. Therefore, it answers the question directly. It goes on to explain why gloves aren't recommended. This is a great answer.
Human explanation: The question is answered directly. The answer states not to wear gloves in public.
Generated summary: The World Health Organization (WHO) has issued a guidance on the use of gloves in public places.

Question: Can students work in aged care?
Answer: ...The Australian Government has announced that residential aged care facilities should take extra precautions to keep residents safe from COVID-19. Both individuals and management need to take responsibility for the health of visitors and staff at facilities to protect our most vulnerable community members. Aged care facilities will: closely monitor the health of staff...
Generated explanation: This answer does not address whether or not students are allowed to work in aged care. This answer only provides information on precautions to be taken by aged care facilities to keep residents safe.
Human explanation: The information here give explanation on guideline that aged care facility staffs should put in place and did not say anything about student working in aged care facility.
Generated summary: Residents in aged care facilities across Australia are being urged to take extra precautions to prevent the spread of a deadly virus.
These results indicate that explanations based on feedback data are useful for end users in discerning correct and incorrect answers, and they also improve the agreement across users. Table 7 shows some examples of explanation that helps the users make more informed and accurate decision. In the first example, the model-generated explanation points out the gap between the question and the answer candidate, though there are a large number of overlapping keywords. Meanwhile, human explanations are generally more abstractive and shorter in nature (e.g., see the second example).
Related work
Retrieval-based question answering has been widely studied, from early work on rule-based systems (Kwok et al., 2001) to recently proposed neural models (Yang et al., 2019; Karpukhin et al., 2020). Most existing work focuses on improving accuracy and efficacy by modifying the neural architecture (Karpukhin et al., 2020; Humeau et al., 2020), incorporating external knowledge (Ferrucci et al., 2010), or refining the retrieval strategy (Kratzwald and Feuerriegel, 2018). These methods focus on the pre-deployment stage of RQA models.
By contrast, we investigate methods to improve an RQA model post-deployment with interactive feedback. The proposed methods are agnostic to the architecture design and training methods of the base RQA model.
Learning from user feedback has been a long-standing problem in natural language processing. Whilst earlier work proposes methods for using implicit feedback, for instance click-through data for document ranking (Joachims, 2002), recent work has explored explicit feedback such as explanations of incorrect responses by chatbots (Li et al., 2016; Weston, 2016) and correctness labels in conversational question answering and text classification (Campos et al., 2020). However, the feedback in these studies is automatically generated using heuristics, whereas our feedback data is collected from human users. Hancock et al. (2019) collect suggested responses from users to improve a chatbot, while we investigate the effect of natural feedback on RQA models.
Explainability and interpretability have received increasing attention in the NLP community recently. This paper can be aligned with recent efforts in collecting and harnessing explanation data for language understanding and reasoning tasks, such as natural language inference (Camburu et al., 2018; Kumar and Talukdar, 2020), commonsense question answering (Rajani et al., 2019), document classification (Srivastava et al., 2017), relation classification (Murty et al., 2020), reading comprehension (Lamm et al., 2021), and fact checking (Alhindi et al., 2018). The type of feedback in FEEDBACKQA differs from the existing work in several aspects: 1) FEEDBACKQA has feedback data for both positive and negative examples, while most other datasets only contain explanations of positive ones; 2) FEEDBACKQA has both structured and unstructured feedback, while previous work mainly focuses on one of them; 3) the feedback in FEEDBACKQA is collected post-deployment; 4) while previous work aims to help users interpret model decisions, we investigate whether feedback-based explanations increase the utility of the deployed system.
Conclusion
In this work, we investigate the usefulness of feedback data in retrieval-based question answering. We collect a new dataset, FEEDBACKQA, which contains interactive feedback in the form of ratings and natural language explanations. We propose a method to improve the RQA model with the feedback data, training a reranker to select an answer candidate as well as generate the explanation. We find that this approach not only increases the accuracy of the deployed model but also that of other, stronger models for which feedback data was not collected. Moreover, our human evaluation results show that both human-written and model-generated explanations help users make informed and accurate decisions about whether to accept an answer.
Limitations and Ethical Considerations
The training and inference of a reranker with feedback data increases the usage of computational resources. We note that our feedback collection setup is a simulation of a deployed model. The feedback in real-world systems may contain sensitive information that should be handled with care. Moreover, real-world feedback could be noisy and is prone to adversarial attacks.
A Details of Data Collection
Passage curating After scraping the websites, we collect the questions and answers from the Frequently-Asked-Questions pages directly. For those pages without explicit questions and answers, we extract the text content as passages and proceed to question collection.
Question collection We hire crowd-source workers from English-speaking countries on the Amazon MTurk platform to write questions conditioned on the extracted passages. The workers are instructed not to ask overly generic questions or to copy and paste directly from the passages.
A qualification test with two sections is used to select the best-performing workers. In the first section, the workers are asked to distinguish good questions from bad ones for given passages. The correct and incorrect questions were carefully designed to test various aspects of low-quality submissions we had received in the demo run. The second section requires writing a question given a passage. We manually review and score the questions. We paid $0.20 to workers for each question.
B Details of Feedback Collection
We asked the workers to provide a rating and natural language feedback for question-answer pairs. For the qualification test, we labeled the ratings for multiple pairs of questions and answers. The workers were selected based on the accuracy of their rating labels. We paid $0.40 to workers for each piece of feedback.
C Details of Human Evaluation
The worker assignment ensures that a worker rates the same question-answer pair only once. Otherwise there is a risk that workers blindly give the same judgement for a certain QA pair.
We adopt a qualification test similar to the one for feedback collection. We also include some dummy QA pairs, whose answer candidates were randomly sampled from the corpora, and we filter out the workers who fail to recognize them. We paid $0.30 to workers for each QA pair.
D Implementation Details
Throughout the experiments, we used four 32-GB Nvidia Tesla V100 GPUs. The hyperparameter (learning rate, dropout rate) optimisation is performed for the RQA models only, and the standard fine-tuning hyperparameters of BART are used for building the FEEDBACKRERANKER model. We set the batch size to 16. We truncate the questions and passages to 50 and 512 tokens, respectively. The models are trained for 40 epochs. For our hyperparameter search, we used 5 trials, and for the final results the best hyperparameter variant's performance was averaged across 3 different runs. All experiment runs finished within 20 hours.

Table 8: Hyper-parameter settings of the different variants of the QA models as well as EXPLAINRATE and RATEONLY. There is no pooling operation in the latter two models.

Model                 lr        Dropout
BERT (Bi-encoder)     5.0e-05   0.1
BERT (Poly-encoder)   5.0e-05   0.1
BART (Bi-encoder)     9.53e-05  0.01026
BART (Poly-encoder)   4.34e-05  0.1859
FEEDBACKRERANKER      5.0e-05   0.1
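As a minimal sketch of the truncation setup (the checkpoint name and inputs are placeholders, not necessarily the exact configuration used in the experiments), the stated budgets can be applied with a Hugging Face tokenizer:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")  # placeholder checkpoint

question = "Can students work in aged care?"
passage = "The Australian Government has announced that residential aged care facilities..."

# Truncate questions to 50 tokens and passages to 512 tokens, as stated above.
q_ids = tokenizer(question, truncation=True, max_length=50)["input_ids"]
p_ids = tokenizer(passage, truncation=True, max_length=512)["input_ids"]
print(len(q_ids), len(p_ids))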
E Validation performance
In addition to the Poly-encoders, we also explore a Bi-encoder, and we find that its performance is consistently worse (see Table 9).
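For intuition about the two encoders: in a bi-encoder, the question and each passage are embedded independently and compared with a single dot product, while a poly-encoder additionally attends over several learned context codes before the final comparison. The sketch below shows only the bi-encoder scoring step, with random vectors standing in for real embeddings; it is not our model code.

import numpy as np

def bi_encoder_scores(question_emb: np.ndarray, passage_embs: np.ndarray) -> np.ndarray:
    """Dot-product relevance of one question against all candidate passages."""
    return passage_embs @ question_emb

rng = np.random.default_rng(0)
q = rng.normal(size=768)        # e.g., a pooled BERT question embedding
P = rng.normal(size=(5, 768))   # five candidate passage embeddings
best = int(np.argmax(bi_encoder_scores(q, P)))
print(f"top-ranked passage index: {best}")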
Figure 1: Users interact with the deployed QA model and give feedback. Feedback contains a rating (bad, good, could be improved, excellent) and a natural language explanation.

Figure 2: Distribution of the number of components in 100 natural language feedback items for different rating labels.
Table 1: Comparison of FEEDBACKQA with existing NLU datasets containing feedback in the form of structured representations (according to a schema) or natural language explanations (free-form).

Table 2: Number of samples in different domains of FEEDBACKQA. We split the data into train/validation/test sets in the ratio of 0.7 : 0.1 : 0.2.

           #Passages  #Questions  #Feedback
Australia  584        1783        2264
Canada     587        8844        /
UK         956        2874        3668
US         598        13533       2628
WHO        226        688         874
Overall    2951       27722       9434
… (Lewis et al., 2020) combined with Poly-encoder (Miller et al., 2017) (more details are in Section 3.1). We generate embeddings with two pretrained models: 1) BERT (Devlin et al., 2019), a pretrained Transformer encoder; and 2) BART (Lewis et al., 2020), a pretrained Transformer encoder-decoder. For BERT, we use average pooling of token representations as the embedding, whereas for BART we use the decoder's final state. While Karpukhin et al. use question-agnostic passage representations, we use a poly-encoder.
Table 4: Accuracy of the BERT RQA model, i.e., the deployed model, and its enhanced variants on the test set. FEEDBACKRERANKER is trained on the post-deployment feedback data, VANILLARERANKER is trained on the pre-deployment data, and COMBINEDRERANKER is trained on both. The column Beats indicates that the model significantly outperforms (p-value < 0.05) the competing methods. All of the results are averaged across 3 runs.

Table 5: Accuracy of the BART RQA model and its enhanced variants on the test set. Results are averaged across 3 runs.

Methods               Australia  US     Canada  UK     WHO    All    Beats
BART RQA model        52.88      68.47  82.49   51.29  81.97  67.42  None
+ FEEDBACKRERANKER    54.78      70.45  84.38   53.47  82.51  69.12
+ VANILLARERANKER     53.09      70.40  82.76   53.08  82.33  68.33
+ COMBINEDRERANKER    55.27      71.45  85.35   54.83  83.61  70.10
Table 6: Human evaluation results of the usefulness of explanations. Accuracy measures the utility of explanations in selecting the correct rating label for an answer, whereas agreement measures whether explanations invoke the same behaviour pattern across users.

Table 7: Examples of different explanation types: model-generated and human-written explanations and model-generated summaries.
Table 9 presents the performance of the base QA models with different pretrained Transformer models and encoding methods on the validation set.

Table 9: The accuracy of different RQA models on the validation set. All of the results are averaged across 3 runs.

Methods               Australia  US     Canada  UK     WHO    All
BERT (Bi-encoder)     44.57      64.24  81.12   50.55  81.85  64.47
BERT (Poly-encoder)   47.25      65.30  81.49   48.50  81.19  64.75
BART (Bi-encoder)     47.13      67.62  86.01   55.06  85.48  68.26
BART (Poly-encoder)   49.17      66.98  85.75   54.27  87.46  68.73
Table 10: Accuracy of PIPELINE models using different feedback data to train the re-ranker on the validation set. All of the results are averaged across 3 runs.

Methods                                           Australia  US     Canada  UK     WHO    All
BART RQA model                                    49.17      66.98  85.75   54.27  87.46  68.73
+ FEEDBACKRERANKER with explanation-based rating  51.34      69.09  84.20   56.87  87.79  69.86
+ FEEDBACKRERANKER with rating only               51.09      68.57  86.84   58.21  88.78  70.70
BERT RQA model                                    47.25      65.30  81.49   48.50  81.19  64.75
+ FEEDBACKRERANKER with explanation-based rating  51.34      70.15  83.72   53.71  84.49  68.68
+ FEEDBACKRERANKER with rating only               51.09      68.46  84.18   55.69  85.15  68.91
Google and Bing collect such data through a "Feedback" button located at the bottom of search results.
We focus on the Province of Quebec.
The performance results of the poly-encoder and bi-encoder for our task are shown in Table 9.
We also tried using the top predictions from the base QA model, but found this approach leads to slightly worse performance than negative sampling.
Acknowledgements

We would like to thank Andreas Madsen, Nathan Schucher, Nick Meade and Makesh Narsimhan for their discussion and feedback on our manuscript. We would also like to thank the Mila Applied Research team, especially Joumana Ghosn, Mirko Bronzi, Jeremy Pinto, and Cem Subakan, whose initial work on the Covid-19 chatbot inspired this work. This work is funded by Samsung Electronics. JC and SR acknowledge the support of the NSERC Discovery Grant program and the Canada CIFAR AI Chair program. The computational resources for this project are partly supported by Compute Canada.
References

Tariq Alhindi, Savvas Petridis, and Smaranda Muresan. 2018. Where is your evidence: Improving fact-checking by justification modeling. In Proceedings of the First Workshop on Fact Extraction and VERification (FEVER), pages 85-90. Association for Computational Linguistics.

Gagan Bansal, Tongshuang Wu, Joyce Zhou, Raymond Fok, Besmira Nushi, Ece Kamar, Marco Tulio Ribeiro, and Daniel Weld. 2021. Does the whole exceed its parts? The effect of AI explanations on complementary team performance. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1-16.

Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 995-1005.

Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, and Phil Blunsom. 2018. e-SNLI: Natural language inference with natural language explanations. In Advances in Neural Information Processing Systems 31, pages 9539-9549.

Jon Ander Campos, Kyunghyun Cho, Arantxa Otegi, Aitor Soroa, Eneko Agirre, and Gorka Azkune. 2020. Improving conversational question answering systems after deployment using feedback-weighted learning. In Proceedings of the 28th International Conference on Computational Linguistics, pages 2561-2571.

Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870-1879.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.

David Ferrucci, Eric Brown, Jennifer Chu-Carroll, James Fan, David Gondek, Aditya A. Kalyanpur, Adam Lally, J. William Murdock, Eric Nyberg, John Prager, et al. 2010. Building Watson: An overview of the DeepQA project. AI Magazine, 31(3):59-79.

Ana Valeria González, Gagan Bansal, Angela Fan, Yashar Mehdad, Robin Jia, and Srinivasan Iyer. 2021. Do explanations help users detect errors in open-domain QA? An evaluation of spoken vs. visual explanations. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 1103-1116.

Braden Hancock, Antoine Bordes, Pierre-Emmanuel Mazare, and Jason Weston. 2019. Learning from dialogue after deployment: Feed yourself, chatbot! In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3667-3684.

Samuel Humeau, Kurt Shuster, Marie-Anne Lachaux, and Jason Weston. 2020. Poly-encoders: Transformer architectures and pre-training strategies for fast and accurate multi-sentence scoring. arXiv:1905.01969 [cs].

Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205.

Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In SIGKDD. Association for Computing Machinery.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781.

Bernhard Kratzwald and Stefan Feuerriegel. 2018. Adaptive document retrieval for deep question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 576-581.

Sawan Kumar and Partha Talukdar. 2020. NILE: Natural language inference with faithful natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8730-8742.

Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. 2019. Natural Questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466.

Cody C. T. Kwok, Oren Etzioni, and Daniel S. Weld. 2001. Scaling question answering to the web. In Proceedings of the 10th International Conference on World Wide Web, pages 150-161.

Matthew Lamm, Jennimaria Palomaki, Chris Alberti, Daniel Andor, Eunsol Choi, Livio Baldini Soares, and Michael Collins. 2021. QED: A framework and dataset for explanations in question answering. Transactions of the Association for Computational Linguistics, 9:790-806.

Kenton Lee, Ming-Wei Chang, and Kristina Toutanova. 2019. Latent retrieval for weakly supervised open domain question answering. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6086-6096.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7871-7880.

Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, and Jason Weston. 2016. Dialogue learning with human-in-the-loop. arXiv preprint arXiv:1611.09823.

Alexander H. Miller, Will Feng, Dhruv Batra, Antoine Bordes, Adam Fisch, Jiasen Lu, Devi Parikh, and Jason Weston. 2017. ParlAI: A dialog research software platform. In EMNLP (System Demonstrations).

Shikhar Murty, Pang Wei Koh, and Percy Liang. 2020. ExpBERT: Representation engineering with natural language explanations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2106-2113.

Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Explain yourself! Leveraging language models for commonsense reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4932-4942, Florence, Italy. Association for Computational Linguistics.

Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.

Siva Reddy, Danqi Chen, and Christopher D. Manning. 2019. CoQA: A conversational question answering challenge. Transactions of the Association for Computational Linguistics, 7:249-266.

Shashank Srivastava, Igor Labutov, and Tom Mitchell. 2017. Joint concept learning and semantic parsing from natural language explanations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 1527-1536.

Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F. Christiano. 2020. Learning to summarize with human feedback. In Advances in Neural Information Processing Systems, volume 33, pages 3008-3021.

Ziqi Wang, Yujia Qin, Wenxuan Zhou, Jun Yan, Qinyuan Ye, Leonardo Neves, Zhiyuan Liu, and Xiang Ren. 2019. Learning from explanations with neural execution tree. In International Conference on Learning Representations.

Jason E. Weston. 2016. Dialog-based language learning. Advances in Neural Information Processing Systems, 29:829-837.

Wei Yang, Yuqing Xie, Aileen Lin, Xingyu Li, Luchen Tan, Kun Xiong, Ming Li, and Jimmy Lin. 2019. End-to-end open-domain question answering with BERTserini. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), pages 72-77.
Towards a Benchmark of Natural Language Arguments

Elena Cabrio
INRIA Sophia Antipolis, France

Serena Villata
INRIA Sophia Antipolis, France

Abstract

The connections between natural language processing and argumentation theory have become stronger in recent years, with a growing amount of work going in this direction, in different scenarios and applying heterogeneous techniques. In this paper, we present two datasets we built to cope with the combination of the Textual Entailment framework and bipolar abstract argumentation. In our approach, such datasets are used to automatically identify, through a Textual Entailment system, the relations among the arguments (i.e., attack, support); the resulting bipolar argumentation graphs are then analyzed to compute the accepted arguments.
Introduction
Until recent years, the idea of "argumentation" as the process of creating arguments for and against competing claims was a subject of interest to philosophers and lawyers. In recent years, however, there has been a growth of interest in the subject from formal and technical perspectives in Artificial Intelligence, and a wide use of argumentation technologies in practical applications. However, such applications are always constrained by the fact that natural language arguments cannot be automatically processed by such argumentation technologies. Arguments are usually presented as the abstract nodes of a directed graph whose edges represent the relations of attack and support (in abstract argumentation theory (Dung 1995) and in bipolar argumentation (Cayrol and Lagasquie-Schiex 2005), respectively).
Natural language arguments are usually used in the argumentation literature to provide ad-hoc examples that help the reader understand the rationale behind the formal approach which is then introduced, but the need to find automatic ways to process natural language arguments is becoming more and more important. On the one hand, when dealing with natural language processing techniques, the first step consists in finding the data on which the system is trained and evaluated. On the other hand, in argumentation theory there is a growing need to define benchmarks for argumentation to test implemented systems and proposed theories. In this paper, we address the following research question: how to build a dataset of natural language arguments?
The definition of a dataset of natural language arguments is not a straightforward task: first, there is the need to identify the kind of natural language arguments to be collected (e.g., online debates, newspaper articles, blogs and forums, etc.), and second, there is the need to annotate the data according to the addressed task from the natural language processing point of view (e.g., classification, textual entailment (Dagan et al. 2009), etc.).
Our goal is to analyze natural language debates in order to understand, given a huge debate, what are the winning arguments (through acceptability semantics) and who proposed them. In order to achieve this goal, we have identified two different scenarios from which to extract our data: (i) online debate platforms like Debatepedia and ProCon present a set of topics to be discussed, and participants argue about the issue the platform proposes on a selected topic, highlighting whether their "arguments" are in favor of or against the central issue, or with respect to the other participants' arguments; and (ii) the screenplay of a movie titled "Twelve Angry Men", where the jurors of a trial discuss in order to decide whether a young boy is guilty or not, and before the end of each act they vote to verify whether they all agree about his guiltiness. These two scenarios lead to two different resources: the online debates resource collects the arguments in favor of or against the main issue or the other arguments into small bipolar argumentation graphs, while the "Twelve Angry Men" resource collects again pro and con arguments, but they compose three bipolar argumentation graphs whose complexity is higher than that of the debate graphs. Note that the first resource consists of an integration of the dataset of natural language arguments we presented in earlier work with new data extracted from the ProCon debate platform.
These two resources represent a first step towards the construction of a benchmark of natural language arguments, to be exploited by existing argumentation systems as data-driven examples of argumentation frameworks. In our datasets, arguments are cast into pairs where the two arguments composing the pair are linked by a positive relation (a support relation in argumentation) or a negative relation (an attack relation in argumentation). From these pairs, the argumentation graphs are constructed.
The remainder of the paper is organized as follows: the next section presents the two datasets from Debatepedia/ProCon and Twelve Angry Men and how they have been extracted and annotated, then some conclusions are drawn.
Natural Language Arguments: datasets
As introduced before, the rationale underlying the datasets of natural language arguments we created was to support the task of understanding, given a huge debate, what are the winning arguments, and who proposed them. In an application framework, we can divide this task into two consecutive subtasks, namely i) the recognition of the semantic relations between couples of arguments in a debate (i.e. whether one statement is supporting or attacking another claim), and ii) given all the arguments that are part of a debate and an acceptability semantics, reasoning over the graph of arguments with the aim of deciding which are the accepted ones.
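To make subtask ii) concrete, the sketch below computes the accepted arguments of a small attack-only graph under grounded semantics. It is only an illustration: how the support relation is folded into the reasoning (e.g., via the complex attacks discussed later) is a separate design choice, and this is not the reasoner used in our experiments.

def grounded_extension(arguments, attacks):
    """Least-fixpoint computation: repeatedly accept arguments whose
    attackers are all defeated, then defeat everything they attack."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in sorted(arguments - accepted - defeated):
            if a in defeated:  # may have been defeated earlier in this pass
                continue
            live_attackers = {x for (x, y) in attacks if y == a} - defeated
            if not live_attackers:
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

arguments = {"a", "b", "c", "d"}
attacks = {("d", "a"), ("b", "a"), ("c", "b")}  # (x, y) means x attacks y
print(grounded_extension(arguments, attacks))   # {'c', 'd'}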
To reflect this separation into two subtasks, each dataset that we will describe in detail in the following subsections is therefore composed of two layers. Given a set of arguments linked among them (e.g in a debate):
1. we couple each argument with the argument to which it is related (i.e. that it attacks or supports). The first layer of the dataset is therefore composed of couples of arguments (each one labeled with a unique ID), annotated with the semantic relations linking them (i.e. attack or support);

2. starting from the pairs of arguments in the first layer of the dataset, we then build a bipolar entailment graph for each of the topics in the dataset. In the second layer of the dataset, we therefore find graphs of arguments, where the arguments are the nodes of the graph, and the relations among the arguments correspond to the edges of the graphs.
To create the data set of argument pairs, we follow the criteria defined and used by the organizers of the Recognizing Textual Entailment (RTE) challenge. To test the progress of TE systems in a comparable setting, the participants in the RTE challenge are provided with data sets composed of T-H pairs involving various levels of entailment reasoning (e.g. lexical, syntactic), and TE systems are required to produce a correct judgment on the given pairs (i.e. to say if the meaning of one text snippet can be inferred from the other). Two kinds of judgments are allowed: two-way (yes or no entailment) or three-way (entailment, contradiction, unknown). To perform the latter, in case there is no entailment between T and H, systems must be able to distinguish whether the truth of H is contradicted by T, or remains unknown on the basis of the information contained in T. To correctly judge each single pair inside the RTE data sets, systems are expected to cope both with the different linguistic phenomena involved in TE, and with the complex ways in which they interact. The data available for the RTE challenges are not suitable for our goal, since the pairs are extracted from news and are not linked to each other (i.e. they do not report opinions on a certain topic). However, the task of recognizing semantic relations among pairs of textual fragments is very close to ours, and therefore we follow the guidelines provided by the organizers of RTE for the creation of their datasets. For instance, we experiment with the application of a TE system (Dagan et al. 2009) to automatically identify the arguments in the text and to specify which kind of relation links each couple of arguments.
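As an illustration only, a yes/no entailment judgment of this kind can nowadays be approximated with an off-the-shelf NLI model; the checkpoint below is a placeholder, and this is not the TE system used in our experiments.

from transformers import pipeline

# roberta-large-mnli is one publicly available NLI checkpoint (placeholder choice).
nli = pipeline("text-classification", model="roberta-large-mnli")

premise = ("The 28th ECDD report concluded that the coca leaf is appropriately "
           "scheduled as a narcotic, since cocaine is readily extractable from the leaf.")
hypothesis = "Coca can be classified as a narcotic."

# The pipeline accepts premise/hypothesis pairs via text/text_pair.
result = nli([{"text": premise, "text_pair": hypothesis}])[0]
print(result["label"], round(result["score"], 3))  # e.g. ENTAILMENT with some score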
Debatepedia dataset
To build our first benchmark of natural language arguments, we selected Debatepedia and ProCon, two encyclopedias of pro and con arguments on critical issues. To fill in the first layer of the dataset, we manually selected a set of topics (Table 2 column Topics) of Debatepedia/ProCon debates, and for each topic we apply the following procedure:
1. the main issue (i.e., the title of the debate in its affirmative form) is considered as the starting argument;
2. each user opinion is extracted and considered as an argument;
3. since attack and support are binary relations, the arguments are coupled with:
(a) the starting argument, or (b) other arguments in the same discussion to which the most recent argument refers (i.e., when a user opinion supports or attacks an argument previously expressed by another user, we couple the former with the latter), following the chronological order to maintain the dialogue structure;
4. the resulting pairs of arguments are then tagged with the appropriate relation, i.e., attack or support (a sketch of this procedure is given below).
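A minimal sketch of this procedure, with hypothetical data structures (this is not the script actually used to build the dataset):

from dataclasses import dataclass

@dataclass
class Opinion:
    arg_id: str      # e.g. "b", "c", "d"
    text: str
    stance: str      # "pro" or "con" with respect to the replied-to argument
    replies_to: str  # id of the argument it refers to ("a" = starting argument)

def build_pairs(starting_argument: str, opinions: list[Opinion]):
    """Return (argument, related argument, relation) triples for layer one."""
    texts = {"a": starting_argument, **{o.arg_id: o.text for o in opinions}}
    pairs = []
    for o in opinions:  # chronological order preserves the dialogue structure
        relation = "support" if o.stance == "pro" else "attack"
        pairs.append((o.text, texts[o.replies_to], relation))
    return pairs

debate = [
    Opinion("b", "Cocaine is readily extractable from the leaf...", "pro", "a"),
    Opinion("c", "Coca in its natural state is not a narcotic...", "con", "b"),
    Opinion("d", "Coca is distinct from cocaine...", "con", "a"),
]
for arg, target, rel in build_pairs("Coca can be classified as a narcotic.", debate):
    print(rel, "|", arg[:35], "->", target[:35])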
Using Debatepedia/ProCon as a case study provides us with already annotated arguments (pro ⇒ entailment, and con ⇒ contradiction), and casts our task as a yes/no entailment task. To show a step-by-step application of the procedure, let us consider the debated issue Can coca be classified as a narcotic?. At step 1, we transform its title into the affirmative form, and we consider it as the starting argument (a). Then, at step 2, we extract all the users' opinions concerning this issue (both pro and con), e.g., (b), (c) and (d):
Example 1. (a) Coca can be classified as a narcotic.
(b) In 1992 the World Health Organization's Expert Committee on Drug Dependence (ECDD) undertook a "prereview" of coca leaf at its 28th meeting. The 28th ECDD report concluded that, "the coca leaf is appropriately scheduled as a narcotic under the Single Convention on Narcotic Drugs, 1961, since cocaine is readily extractable from the leaf." This ease of extraction makes coca and cocaine inextricably linked. Therefore, because cocaine is defined as a narcotic, coca must also be defined in this way.
(c) Coca in its natural state is not a narcotic. What is absurd about the 1961 convention is that it considers the coca leaf in its natural, unaltered state to be a narcotic. The paste or the concentrate that is extracted from the coca leaf, commonly known as cocaine, is indeed a narcotic, but the plant itself is not.
(d) Coca is not cocaine. Coca is distinct from cocaine. Coca is a natural leaf with very mild effects when chewed. Cocaine is a highly processed and concentrated drug using derivatives from coca, and therefore should not be considered as a narcotic.
At step 3a we couple the arguments (b) and (d) with the starting issue since they are directly linked with it, and at step 3b we couple argument (c) with argument (b), and argument (d) with argument (c), since they follow one another in the discussion (i.e. the user expressing argument (c) answers back to the user expressing argument (b), so the arguments are concatenated; the same holds for arguments (d) and (c)). At step 4, the resulting pairs of arguments are then tagged with the appropriate relation:
(b) supports (a), (d) attacks (a), (c) attacks (b) and (d) supports (c).
We have collected 260 T-H pairs (Table 2), 160 to train and 100 to test the TE system. The training set is composed of 85 entailment and 75 contradiction pairs, while the test set of 55 entailment and 45 contradiction pairs. The pairs considered for the test set concern completely new topics.
Based on the TE definition, an annotator with skills in linguistics has carried out a first phase of manual annotation of the Debatepedia data set. Then, to assess the validity of the annotation task and the reliability of the obtained data set, the same annotation task has been independently carried out by a second annotator, so as to compute inter-annotator agreement. It has been calculated on a sample of 100 argument pairs (randomly extracted).
The statistical measure usually used in NLP to calculate the inter-rater agreement for categorical items is Cohen's kappa coefficient (Carletta 1996), which is generally thought to be a more robust measure than simple percent agreement, since κ takes into account the agreement occurring by chance. More specifically, Cohen's kappa measures the agreement between two raters who each classify N items into C mutually exclusive categories. The equation for κ is:
κ = (Pr(a) − Pr(e)) / (1 − Pr(e))    (1)
where Pr(a) is the relative observed agreement among raters, and Pr(e) is the hypothetical probability of chance agreement, using the observed data to calculate the probabilities of each observer randomly choosing each category. If the raters are in complete agreement then κ = 1. If there is no agreement among the raters other than what would be expected by chance (as defined by Pr(e)), κ = 0. For NLP tasks, the inter-annotator agreement is considered significant when κ > 0.6. Applying formula (1) to our data, the inter-annotator agreement results in κ = 0.7. As a rule of thumb, this is a satisfactory agreement; we therefore consider these annotated data sets as the gold standard, i.e., the reference data set to which the performances of automated systems can be compared.
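The following worked example instantiates formula (1) with illustrative counts (not our actual annotation data): two annotators label 100 argument pairs as support or attack, agreeing on 80 of them.

# Both annotators chose "support" for 45 pairs and "attack" for 35 pairs;
# on the remaining 20 pairs they disagree (assumed 10 each way).
agree_support, agree_attack, disagreements = 45, 35, 20
n = agree_support + agree_attack + disagreements

pr_a = (agree_support + agree_attack) / n    # observed agreement: 0.80

ann1_support = (agree_support + 10) / n      # marginal P(support), annotator 1
ann2_support = (agree_support + 10) / n      # marginal P(support), annotator 2
pr_e = ann1_support * ann2_support + (1 - ann1_support) * (1 - ann2_support)

kappa = (pr_a - pr_e) / (1 - pr_e)
print(round(kappa, 2))                       # about 0.6 with these counts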
To build the bipolar argumentation graphs associated with the Debatepedia dataset, we have considered the pairs annotated in the first layer and built a bipolar entailment graph for each of the topics in the dataset (12 topics in the training set and 10 topics in the test set, listed in Table 2). Figure 1 shows a bipolar argumentation graph of average dimension in the Debatepedia/ProCon dataset. Note that no cycle is present, as in all the other graphs of this dataset. All graphs are available online, together with the XML data set.

Figure 1: The bipolar argumentation framework resulting from the topic "Obesity" of Pro/Con (red edges represent attack and green ones represent support).
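A minimal sketch of how such a second-layer graph can be assembled from the annotated pairs, assuming the networkx library (the pairs are those of the coca example; this is not the actual dataset-building code):

import networkx as nx

pairs = [               # (source, target, relation) from the first layer
    ("b", "a", "support"),
    ("d", "a", "attack"),
    ("c", "b", "attack"),
    ("d", "c", "support"),
]

graph = nx.DiGraph()
for src, tgt, rel in pairs:
    graph.add_edge(src, tgt, relation=rel)

attacks = [(u, v) for u, v, r in graph.edges(data="relation") if r == "attack"]
print(nx.is_directed_acyclic_graph(graph), attacks)  # acyclic, as in all our graphs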
Debatepedia extended dataset

The dataset described in the previous section was created under the assumption that the TE relation and the support relation are equivalent, i.e. that in all the previously collected pairs both TE and support relations (or contradiction and attack relations) hold.
For the second study, we wanted to move a step further, to understand whether it is always the case that support is equivalent to TE (and contradiction to attack). We therefore applied again the extraction methodology described in the previous section to extend our data set. In total, our new data set contains 310 different arguments and 320 argument pairs (179 expressing the support relation among the involved arguments, and 141 expressing the attack relation, see Table 2). We consider the obtained data set as representative of human debates in a non-controlled setting (Debatepedia users position their arguments with respect to the others as PRO or CON, so the data are not biased). Again, an annotator with skills in linguistics has carried out a first phase of annotation of the extended Debatepedia data set. The goal of this annotation was to individually consider each pair of support and attack among arguments, and to additionally tag them as entailment, contradiction or null. The null judgment can be assigned in case an argument supports another argument without inferring it, or attacks another argument without contradicting it. As exemplified in Example 1, a correct entailment pair is (b) ⇒ (a), while a contradiction is (d) ⇏ (a). A null judgment is assigned to (d)-(c), since the former argument supports the latter without inferring it. Our data set is an extended version of (Cabrio and Villata 2012)'s one, allowing for a deeper investigation.
Again, to assess the validity of the annotation task, we have calculated the inter-annotator agreement. Another annotator with skills in linguistics has therefore independently annotated a sample of 100 pairs of the data set. We calculated the inter-annotator agreement considering the argument pairs tagged as support and attack by both annotators, and we verified the agreement between the pairs tagged as entailment and as null (i.e. no entailment), and as contradiction and as null (i.e. no contradiction), respectively. Applying κ to our data, the agreement for our task is κ = 0.74. As a rule of thumb, this is a satisfactory agreement. Table 3 reports the results of the annotation on our Debatepedia data set, as resulting after a reconciliation phase carried out by the annotators. On the 320 pairs of the data set, 180 represent a support relation, while 140 are attacks. Considering only the supports, 111 argument pairs (i.e., 61.6%) are an actual entailment, while in 38.4% of the cases the first argument of the pair supports the second one without inferring it (e.g. (d)-(c) in Example 1). With respect to the attacks, 100 argument pairs (i.e., 71.4%) are both attacks and contradictions, while only 28.6% of the argument pairs do not contradict the arguments they are attacking, as in Example 2.
Example 2.
(e) Coca chewing is bad for human health. The decision to ban coca chewing fifty years ago was based on a 1950 report elaborated by the UN Commission of Inquiry on the Coca Leaf with a mandate from ECOSOC: "We believe that the daily, inveterate use of coca leaves by chewing is thoroughly noxious and therefore detrimental".
(f) Chewing coca offers an energy boost. Coca provides an energy boost for working or for combating fatigue and cold.
Differently from the relation between support and entailment, the difference between attack and contradiction is more subtle, and it is not always straightforward to say whether an argument attacks another argument without contradicting it. In Example 2, we consider that (e) does not contradict (f) even if it attacks (f), since chewing coca can offer an energy boost and still be bad for human health. This kind of attack is less frequent than attacks that are also contradictions (see Table 3).
Debatepedia additional attacks dataset

Starting from the comparative study addressed by (Cayrol and Lagasquie-Schiex 2011), in the third study we have considered four additional attacks proposed in the literature: supported (if argument a supports argument b and b attacks argument c, then a attacks c) and secondary (if a supports b and c attacks a, then c attacks b) attacks (Cayrol and Lagasquie-Schiex 2010), mediated attacks (Boella et al. 2010) (if a supports b and c attacks b, then c attacks a), and extended attacks (Nouioua and Risch 2010; 2011) (if a supports b and a attacks c, then b attacks c).
In order to investigate the presence and the distribution of these attacks in NL debates, we extended again the data set extracted from Debatepedia to consider all these additional attacks, and we showed that all these models are verified in human debates, even if with different frequencies. More specifically, we took the original argumentation framework of each topic in our data set (Table 2) and applied the following procedure: the supported (secondary, mediated, and extended, respectively) attacks are added, and the argument pairs resulting from coupling the arguments linked by this relation are collected in the data set "supported (secondary, mediated, and extended, respectively) attacks". Collecting the argument pairs generated from the different types of complex attacks in separate data sets allows us to independently analyze each type, and to perform a more accurate evaluation. Figures 2a-d show the four AFs resulting from the addition of the complex attacks in the example Can coca be classified as a narcotic?. Note that the AF in Figure 2a, where the supported attack is introduced, is the same as in Figure 2b, where the mediated attack is introduced. Even if the additional attack which is introduced coincides, i.e., d attacks b, it is due to different interactions among supports and attacks (as highlighted in the figure): in the case of supported attacks it is due to the support from d to c and the attack from c to b, while in the case of mediated attacks it is due to the support from b to a and the attack from d to a.

Figure 2: The bipolar argumentation framework with the introduction of complex attacks. The top figures show which combination of support and attack generates the new additional attack.
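The four definitions can be operationalized directly over the support and attack relations. The sketch below (not our actual code) generates each type of additional attack for the coca example, reproducing, e.g., the attack from d to b obtained both as a supported and as a mediated attack:

def complex_attacks(supports, attacks):
    """supports and attacks are sets of (source, target) pairs."""
    supported = {(a, c) for (a, b) in supports for (b2, c) in attacks if b == b2}
    secondary = {(c, b) for (a, b) in supports for (c, a2) in attacks if a == a2}
    mediated  = {(c, a) for (a, b) in supports for (c, b2) in attacks if b == b2}
    extended  = {(b, c) for (a, b) in supports for (a2, c) in attacks if a == a2}
    return supported, secondary, mediated, extended

supports = {("b", "a"), ("d", "c")}   # b supports a, d supports c
attacks = {("d", "a"), ("c", "b")}    # d attacks a, c attacks b
for name, new in zip(("supported", "secondary", "mediated", "extended"),
                     complex_attacks(supports, attacks)):
    print(name, sorted(new))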
A second annotation phase is then carried out on the data set, to verify if the generated argument pairs of the four data sets are actually attacks (i.e., if the models of complex attacks proposed in the literature are represented in real data). More specifically, an argument pair resulting from the application of a complex attack can be annotated as: attack (if it is a correct attack) or as unrelated (in case the meanings of the two arguments are not in conflict). For instance, the argument pair (g)-(h) (Example 3) resulting from the insertion of a supported attack, cannot be considered as an attack since the arguments are considering two different aspects of the issue. Example 3. (g) Chewing coca offers an energy boost. Coca provides an energy boost for working or for combating fatigue and cold.
(h) Coca can be classified as a narcotic.
In the annotation, attacks are then annotated also as contradiction (if the first argument contradicts the other) or null (in case the first argument does not contradict the argument it is attacking, as in Example 2). Due to the complexity of the annotation, the same annotation task has been independently carried out by a second annotator, so as to compute inter-annotator agreement. It has been calculated on a sample of 80 argument pairs (20 pairs randomly extracted from each of the "complex attacks" data sets), and its goal is to assess the validity of the annotation task (counting when the judges agree on the same annotation). We calculated the inter-annotator agreement for our annotation task in two steps. We (i) verify the agreement of the two judges on the attack/unrelated classification of the argument pairs, and (ii) consider only the argument pairs tagged as attacks by both annotators, and verify the agreement between the pairs tagged as contradiction and as null (i.e. no contradiction). Applying κ to our data, the agreement for the first step is κ = 0.77, while for the second step it is κ = 0.71. As a rule of thumb, both agreements are satisfactory, although they reflect the higher complexity of the second annotation (contradiction/null).
The distribution of complex attacks in the Debatepedia data set, as resulting after a reconciliation phase carried out by the annotators, is shown in Table 4. As can be noticed, the mediated attack is the most frequent type of attack, generating 335 new argument pairs in the NL sample we considered (i.e. the conditions that allow the application of this kind of complex attack appear more frequently in real debates). Together with secondary attacks, mediated attacks appear in the AFs of all the debated topics. On the contrary, extended attacks are added in 11 out of 19 topics, and supported attacks in 17 out of 19 topics. Considering all the topics, on average only 6 pairs generated from the additional attacks were already present in the original data set, meaning that considering these attacks as well is a way to hugely enrich our data set of NL debates.

Table 4: Complex attacks distribution in our data set.
Twelve Angry Men
As a second scenario to extract natural language arguments, we chose the script of "Twelve Angry Men". The play concerns the deliberations of the jury of a homicide trial. As in most American criminal cases, the twelve men must unanimously decide on a verdict of "guilty" or "not guilty". At the beginning, they have a nearly unanimous decision of guilty, with a single dissenter voting not guilty, who throughout the play sows a seed of reasonable doubt.
The play is divided into three acts: the end of each act corresponds to a fixed point in time (i.e. the halfway votes of the jury, before the official one), according to which we want to be able to extract a set of consistent arguments. For each act, we manually selected the arguments (excluding sentences which cannot be considered as self-contained arguments), and we coupled each argument with the argument it is supporting or attacking in the dialogue flow (as shown in Examples 4 to 7). More specifically, in discussions, one character's argument comes after the other (entailing or contradicting one of the arguments previously expressed by another character): therefore, we create our pairs in the graph connecting the former to the latter (more recent arguments are placed as T, and the argument w.r.t. which we want to detect the relation is placed as H). For instance, in Example 6, juror 1 claims argument (o), and he is attacked by juror 2, claiming argument (l). Juror 3 then claims argument (i) to support juror 2's opinion. In the dataset we have therefore annotated the following couples: (o) is contradicted by (l); (l) is entailed by (i).
In Example 7, juror 1 claims argument (l), supported by juror 2 (argument (i)); juror 3 attacks juror 2's opinion with argument (p). More specifically, (l) is entailed by (i); (i) is contradicted by (p).

Example 4. (i) Maybe the old man didn't hear the boy yelling "I'm going to kill you". I mean with the el noise. (l) I don't think the old man could have heard the boy yelling.

Example 5. (m) I never saw a guiltier man in my life. You sat right in court and heard the same thing I did. The man's a dangerous killer. (n) I don't know if he is guilty.

Example 6. (i) Maybe the old man didn't hear the boy yelling "I'm going to kill you". I mean with the el noise. (l) I don't think the old man could have heard the boy yelling. (o) The old man said the boy yelled "I'm going to kill you" out. That's enough for me.

Example 7. (p) The old man cannot be a liar, he must have heard the boy yelling. (i) Maybe the old man didn't hear the boy yelling "I'm going to kill you". I mean with the el noise. (l) I don't think the old man could have heard the boy yelling.

Given the complexity of the play, and the fact that in human linguistic interactions a lot is left implicit, we simplified the arguments: i) adding the required context in T to make the pairs self-contained (in the TE framework entailment is detected based on the evidence provided in T); and ii) solving intra-document coreferences, as in: Nobody has to prove that!, transformed into Nobody has to prove [that he is not guilty].
We collected 80 T-H pairs, composed of 25 entailment pairs, 41 contradiction pairs and 14 unknown pairs (contradiction and unknown pairs are then collapsed into the judgment non-entailment for the two-way classification task). To calculate the inter-annotator agreement, the same annotation task has been independently carried out on half of the argument pairs (40 T-H pairs) by a second annotator. Cohen's kappa (Carletta 1996) is 0.74. Again, this is a satisfactory agreement, confirming the reliability of the obtained resource.
Also in this scenario, we consider the pairs annotated in the first layer and then build a bipolar entailment graph for each of the topics in the dataset (the three acts of the play). Again, the arguments are the nodes of the graph, and the relations among the arguments correspond to the edges of the graphs. The complexity of the graphs obtained for the Twelve Angry Men scenario is higher than that of the debate graphs (on average, 27 links per graph with respect to 9 links per graph in the Debatepedia dataset).
Conclusions
In this paper, we describe two datasets of natural language arguments used in the context of debates. The only existing dataset composed of natural language arguments proposed and exploited in the argumentation community is Araucaria (http://araucaria.computing.dundee.ac.uk). Araucaria (Reed and Rowe 2004) is based on argumentation schemes (Walton, Reed, and Macagno 2008), and it is an online repository of arguments from heterogeneous sources like newspapers (e.g., Wall Street Journal), parliamentary records (e.g., UK House of Parliament debates) and discussion fora (e.g., BBC talking point). Arguments are classified by argumentation schemes. Also in the context of argumentation schemes, (Cabrio, Tonelli, and Villata 2013) propose a new resource based on the Penn Discourse Treebank (PDTB), where a part of the corpus has been annotated with a selection of five argumentation schemes. This effort goes in the direction of trying to export a well-known existing benchmark in the field of natural language processing (i.e., PDTB) into the argumentation field, through the identification and annotation of the argumentation schemes.
The benchmark of natural language arguments we presented in this paper has several potential uses. As all the data we presented is available on the Web in a machine-readable format, researchers interested in testing their own argumentation-based tools (both for argument visualization and for reasoning) can download the data sets and verify the performance of their tools on real data. Moreover, also from the theoretical point of view, the data set can be used by argumentation researchers to find real-world examples supporting the introduction of new theoretical frameworks. One of the aims of such a benchmark is actually to move from artificial natural language examples of argumentation towards more realistic ones, where other problems, maybe far from the ones addressed at the present stage in current argumentation research, emerge.
10 http://araucaria.computing.dundee.ac.uk
It is interesting to note that the abstract (bipolar) argumentation graphs resulting from our datasets turn out to be rather simple structures, where arguments are usually inserted in reinstatement chains, rather than complex structures with several odd- and even-length cycles, as usually challenged in the argumentation literature. In this perspective, we plan to consider other sources of arguments, like customers' opinions about a service or a product, to see whether more complex structures are identified, with the final goal of building a complete resource where such complex patterns are also present.
A further point which deserves investigation concerns the use of abstract argumentation. Some of the examples we provided suggest that in some cases adopting abstract argumentation might not be fully appropriate, since such natural language arguments have (possibly complex) internal structures and may include sub-arguments (for example, argument (d) of the "Coca as narcotic" example). We will investigate how to build a dataset of structured arguments, taking the discourse relations into account.
Finally, in this paper, we have presented a benchmark of natural language arguments manually annotated by humans with skills in linguistics. Given the complexity of the annotation task, a manual annotation was the best choice to ensure a high quality of the data sets. However, for other tasks, like discourse relation extraction, it is possible to adopt automated extraction techniques, further verified by human annotators to ensure high confidence in the resource.
Figure 2: The bipolar argumentation framework with the introduction of complex attacks: supported attacks (Cayrol and Lagasquie-Schiex 2010), mediated attacks (Boella et al. 2010) (if a supports b and c attacks b, then c attacks a), and extended attacks (Nouioua and Risch 2010; 2011). The top figures show which combination of support and attack generates the new additional attack.
(l) I don't think the old man could have heard the boy yelling.

Example 5.
(m) I never saw a guiltier man in my life. You sat right in court and heard the same thing I did. The man's a dangerous killer.
(n) I don't know if he is guilty.

Example 6.
(i) Maybe the old man didn't hear the boy yelling "I'm going to kill you". I mean with the el noise.
(l) I don't think the old man could have heard the boy yelling.
(o) The old man said the boy yelled "I'm going to kill you" out. That's enough for me.

Example 7.
(p) The old man cannot be a liar, he must have heard the boy yelling.
(i) Maybe the old man didn't hear the boy yelling "I'm going to kill you". I mean with the el noise.
(l) I don't think the old man could have heard the boy yelling.
Figure 3: The bipolar argumentation framework resulting from Act 1 of Twelve Angry Men (red edges represent attack and green ones represent support).
Figure 3 shows the average dimension of a bipolar argumentation graph in the Twelve Angry Men dataset. Note that no cycle is present, as in all the other graphs of this dataset.
Table 1: The Debatepedia/ProCon data set
Table 2: Debatepedia extended data set
Table 3: Support and TE relations on the Debatepedia data set.
Since its inception in 2004, the PASCAL RTE Challenges have promoted research in RTE: http://www.nist.gov/tac/2010/RTE/
The data set is freely available at http://www-sop.inria.fr/NoDE/.
5 Here we consider only arguments implying another argument. Arguments "supporting" another argument, but not inferring it, will be discussed in the next subsection.
In this phase, the annotators discuss the results to find an agreement on the annotation to be released.
Data sets freely available for research purposes at http://www-sop.inria.fr/NoDE/NoDE-xml.html#debatepedia
The dataset is available at http://www-sop.inria.fr/NoDE/NoDE-xml.html#12AngryMen. It is built in standard RTE format.
9 The unknown pairs in the dataset are arguments attacking each other without contradicting each other. Collapsing both judgments into one category for our experiments does not impact our framework evaluation.
Boella, G.; Gabbay, D. M.; van der Torre, L.; and Villata, S. 2010. Support in abstract argumentation. In Procs of COMMA, Frontiers in Artificial Intelligence and Applications 216, 111-122.
Cabrio, E., and Villata, S. 2012. Natural language arguments: A combined approach. In Procs of ECAI, Frontiers in Artificial Intelligence and Applications 242, 205-210.
Cabrio, E., and Villata, S. 2013. A natural language bipolar argumentation approach to support users in online debate interactions. Argument & Computation 4(3):209-230.
Cabrio, E.; Tonelli, S.; and Villata, S. 2013. A natural language account for argumentation schemes. In Baldoni, M.; Baroglio, C.; Boella, G.; and Micalizio, R., eds., AI*IA, volume 8249 of Lecture Notes in Computer Science, 181-192. Springer.
Carletta, J. 1996. Assessing agreement on classification tasks: the kappa statistic. Comput. Linguist. 22(2):249-254.
Cayrol, C., and Lagasquie-Schiex, M.-C. 2005. On the acceptability of arguments in bipolar argumentation frameworks. In Procs of ECSQARU, LNCS 3571, 378-389.
Cayrol, C., and Lagasquie-Schiex, M.-C. 2010. Coalitions of arguments: A tool for handling bipolar argumentation frameworks. Int. J. Intell. Syst. 25(1):83-109.
Cayrol, C., and Lagasquie-Schiex, M.-C. 2011. Bipolarity in argumentation graphs: Towards a better understanding. In Procs of SUM, LNCS 6929, 137-148.
Dagan, I.; Dolan, B.; Magnini, B.; and Roth, D. 2009. Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering (JNLE) 15(04):i-xvii.
Dung, P. M. 1995. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell. 77(2):321-358.
Nouioua, F., and Risch, V. 2010. Bipolar argumentation frameworks with specialized supports. In Procs of ICTAI, 215-218. IEEE Computer Society.
Nouioua, F., and Risch, V. 2011. Argumentation frameworks with necessities. In Procs of SUM, LNCS 6929, 163-176.
Reed, C., and Rowe, G. 2004. Araucaria: Software for argument analysis, diagramming and representation. International Journal on Artificial Intelligence Tools 13(4):961-980.
Walton, D.; Reed, C.; and Macagno, F. 2008. Argumentation Schemes. Cambridge University Press.
| [] |
[
"Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text",
"Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text"
] | [
"Snigdha Chaturvedi \nDepartment of Computer Science\nUniversity of Maryland\nCollege Park\n",
"Dan Goldwasser \nDepartment of Computer Science\nPurdue University\n\n",
"Hal Daumé Iii \nDepartment of Computer Science\nUniversity of Maryland\nCollege Park\n"
] | [
"Department of Computer Science\nUniversity of Maryland\nCollege Park",
"Department of Computer Science\nPurdue University\n",
"Department of Computer Science\nUniversity of Maryland\nCollege Park"
] | [] | The ability to comprehend desires and their fulfillment is important to Natural Language Understanding. This paper introduces the task of identifying if a desire expressed by a subject in a given short piece of text was fulfilled. We propose various unstructured and structured models that capture fulfillment cues such as the subject's emotional state and actions. Our experiments with two different datasets demonstrate the importance of understanding the narrative and discourse structure to address this task. | 10.1609/aaai.v30i1.10359 | [
"https://arxiv.org/pdf/1511.09460v1.pdf"
] | 9,661,560 | 1511.09460 | d47efe6d595588f72213496ab5813b4906d082f3 |
Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text
Snigdha Chaturvedi
Department of Computer Science
University of Maryland
College Park
Dan Goldwasser
Department of Computer Science
Purdue University
Hal Daumé III
Department of Computer Science
University of Maryland
College Park
Ask, and shall you receive?: Understanding Desire Fulfillment in Natural Language Text
The ability to comprehend desires and their fulfillment is important to Natural Language Understanding. This paper introduces the task of identifying if a desire expressed by a subject in a given short piece of text was fulfilled. We propose various unstructured and structured models that capture fulfillment cues such as the subject's emotional state and actions. Our experiments with two different datasets demonstrate the importance of understanding the narrative and discourse structure to address this task.
Introduction
Understanding expressions of desire is a fundamental aspect of understanding intentional human-behavior. The strong connection between desires and the ability to plan and execute appropriate actions was studied extensively in contexts of rational agent behavior [16], and modeling human dialog interactions [19].
In this paper we recognize the significant role that expressions of desire play in natural language understanding. Such expressions can be used to provide rationale for character behaviors when analyzing narrative text [18,10], extract information about human wishes [17], explain positive and negative sentiment in reviews, and support automatic curation of community forums by identifying unresolved issues raised by users.
We follow the intuition that at the heart of the applications mentioned above is the ability to recognize whether the expressed desire was fulfilled or not, and suggest a novel reading comprehension task: Given text, denoted as Desire-expression (e.g., "Before Lenin died, he said he wished to be buried beside his mother.") containing a desire ("be buried beside his mother") by the Desire-subject ("he"), and the subsequent text (denoted Evidence fragments or simply Evidences) appearing after the Desire-expression in the paragraph, we predict if the Desire-subject was successful in fulfilling their desire. Fig. 1 illustrates our setting.
Similar to many other natural language understanding tasks [8,28,2], performance is evaluated using prediction accuracy. However, unlike tasks such as text categorization or sentiment classification which rely on lexical information, understanding desire fulfillment requires complex inferences connecting expression of desire, actions affecting the Desire-subject, and the extent to which these actions contribute to fulfilling the subject's goals. For example, in Fig. 1 the action of 'preserving' Lenin's body led to non-fulfillment of his desire.
We address these complexities by representing the narrative flow of Evidence fragments, and assessing if the events (and emotional states) mentioned in this flow contribute to (or provide indication of) fulfilling the desire expressed in the preceding Desire-expression. Following previous work on narrative representation [4], we track the events and states associated with the narrative's central character (the Desire-subject).
While this representation captures important properties required by the desire-fulfillment prediction task, such as the actions taken by the Desire-subject, it does not provide us with an indication about the outcome of these actions. Recent attempts to support supervised learning of such detailed narrative structures by annotating data [11] result in highly complex structures even for restricted domains. Instead, we model this information by associating a state indicating whether the outcome of an action (or the mention of an emotional state) provides evidence of making progress towards achieving the desired goal. We model the transitions between states as a latent sequence model, and use it to predict if the value of the final latent state in this sequence is indicative of a positive or negative prediction for our task.
We demonstrate the strength of our approach by comparing it against two strong baselines. First, we demonstrate the importance of analyzing the complete text by comparing with a textual-entailment based model that analyzes individual Evidence fragments independently. We then compare our latent structured model, which incorporates the narrative structure, with an unstructured model, and show improvements in prediction performance. Our key contributions are:
• We introduce the problem of understanding desire fulfillment, and annotate and release two datasets for further research on this problem.
• We present a latent structured model for this task, incorporating the narrative structure of the text, and propose relevant features that incorporate world knowledge.
• We empirically demonstrate that such a model outperforms competitive baselines.
Problem Setting
Our problem consists of instances of short texts (called Desire-expressions), collected so that each contains an indication of a desire (characterized using a Desire-verb) by a Desire-subject(s). The Desire-verb is identified by the following verb phrases: 'wanted to', 'wished to' or 'hoped to'. 1 The three Desire-verbs were identified using lexical matches, while the Desire-subject(s) was marked manually. Each Desire-expression is followed by five or fewer Evidence fragments (or simply Evidences). The Desire-expression and the Evidences (in order) consist of individual sentences that appeared contiguously in a paragraph. We address the binary classification task of predicting the Desire Fulfillment status, i.e. whether the indicated desire was fulfilled in the text, given the Evidences and the Desire-expression with Desire-verb and Subject identified. Fig. 1 shows an example of the problem.
Inference Models for Understanding Desire Fulfillment in Narrative Text
In this section we present three textual inference approaches, each following different assumptions when approaching the desire-fulfillment task, thus allowing a principled discussion about which aspects of the narrative text should be modeled. Our first approach assumes the indication of desire fulfillment will be contained in a single Evidence fragment. We test this assumption by adapting the well-known Textual Entailment task to our setting, generating entailment candidates from the Desire-expression and the Evidence fragments.
Our second approach assumes the decision depends on the Evidence text as a whole, rather than on a single Evidence fragment. We test this assumption by representing relevant information extracted from the entire Evidence text. This representation (depicted in Fig. 3) connects the central character in the narrative, the Desire-subject, with their actions and emotional states exhibited in the Evidence text. This representation is then used for feature extraction when training a binary classifier for the desire-fulfillment task.
Our final model provides a stronger structure for the actions and emotional states expressed in the Evidence text. The model treats individual Evidence fragments as parts of a plan carried out by the Desire-subject to achieve the desired goal, and makes judgments about the contribution of each step towards achieving the desired goal.
1 We chose to use these three phrases for data collection. However, one can include other expressions of desire if needed. We plan to include that in future work.
Textual Entailment (TE) Model
Recognizing Textual Entailment (RTE) is the task of recognizing the existence of an entailment relationship between two text fragments [8]. From this perspective, a textual entailment based method might be a natural way to address the desire fulfillment task. RTE systems often rely on aligning the entities appearing in the text fragments. Hence we reduce the desire fulfillment task into several RTE instances consisting of text-hypothesis pairs, by pairing the Desire-expression (hypothesis) with each of the Evidence fragments (text) in that example. However, we "normalized" the Desire-expression, so that it would be directly applicable for the RTE task. For example, the Desire-expression, "One day Jerry wanted to paint his barn.", gets converted to "Jerry painted his barn.". This process followed several steps:
• If the Desire-subject is pronominal, replace it with the appropriate named entity when possible (we used the Stanford CoreNLP coreference resolution system [23]).
• Ignore the content of the Desire-expression appearing before the Desire-subject.
• Remove the clause containing the Desire-verb ('wanted to', 'wished to' etc.), and convert the succeeding verb to its past tense.
The desire was considered 'fulfilled' if the RTE model predicted entailment for at least one of the text-hypothesis pairs of the example. E.g., the model could infer that the normalized Desire-expression mentioned above would be entailed by the following Evidence fragment, "It took Jerry six days to paint his barn that way.", and hence it would conclude that the desire was fulfilled. Table 1 shows the performance of BIUTEE [30, 21], an RTE system, on the two datasets (see Sec. 4) used in our experiments. 2 Our results show that the RTE Model performs better with normalization. We use this model (with normalization) as a baseline in Sec. 5.
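A rough sketch of this normalization heuristic is given below. It assumes coreference resolution has already replaced pronominal subjects, and that the past-tense mapping is supplied externally (e.g., by a morphological lexicon); the helper names are ours, and a real system would need more robust parsing than this string manipulation:

DESIRE_VERBS = ("wanted to", "wished to", "hoped to")

def normalize(desire_expression, subject, past_tense):
    """'One day Jerry wanted to paint his barn.' -> 'Jerry painted his barn.'"""
    # Ignore the content appearing before the (resolved) Desire-subject.
    text = desire_expression[desire_expression.index(subject):]
    for dv in DESIRE_VERBS:
        if dv in text:
            before, after = text.split(dv, 1)
            # Remove the desire clause and put the next verb in past tense;
            # assumes the desire verb is followed by a verb plus more words.
            verb, rest = after.strip().split(" ", 1)
            return before + past_tense(verb) + " " + rest
    return text

# normalize("One day Jerry wanted to paint his barn.", "Jerry",
#           lambda v: {"paint": "painted"}.get(v, v + "ed"))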
Unstructured Model
The Textual Entailment model described above assumes that the Desire-expression would be entailed by one of the individual Evidences. This assumption might not hold in all cases. Firstly, the indication of desire fulfillment (or its negation) can be subtle and expressed using indirect cues. More commonly, multiple Evidence fragments can collectively provide the cues needed to identify desire fulfillment. This suggests a need to treat the entire text as a whole when identifying cues about desire fulfillment.
We begin by identifying the Desire-subject and the desire expressed (using the 'focal word' described in Sec. 3) in the Desire-expression. Thereafter, we design several semantic features to model coreferent mentions of the Desire-subject, actions taken (and the respective semantic roles of the Desire-subject), and the emotional state of the Desire-subject in the Evidences. We enhance this representation using several knowledge resources identifying word connotations [15] and relations. Fig. 3 presents a visual representation of this process and Sec. 3 presents further details.
Based on these features, extracted from the collection of all Evidences instead of individual Evidence fragments, we train supervised binary classifiers (Unstructured models).
Latent Structure Narrative Model (LSNM)
The Unstructured Model described above captures nuanced indications of desire-fulfillment, by associating the Desire-subject with actions, events and mental states. However, it ignores the narrative structure as it fails to model the 'flow of events' depicted in the transition between the Evidences. Our principal hypothesis is that the input text presents a story. The events in the story describe the evolving attempts of the story's main character (the Desire-subject) to fulfill its desire. Therefore, it is essential to understand the flow of the story to make better judgments about its outcome.
We propose to model the evolution of the narrative using latent variables. We associate a latent state (denoted h_j) with each Evidence fragment (denoted e_j). The latent states take discrete values (out of H possible values, where H is a parameter of the model), which abstractly represent various degrees of optimism or pessimism with respect to the fulfillment, f, of the desire expressed in the Desire-expression, d. These latent states are arranged sequentially, in the order of occurrence of the corresponding Evidence fragments, and hence capture the evolution of the story (see Fig. 2).
The linear process assumed by our model can be summarized as follows: the model starts by predicting the latent state, h_0, based on the first Evidence, e_0. Thereafter, depending on the current latent state and the content of the following Evidence fragment, the model transitions to another latent state. This process is repeated until all the Evidence fragments are associated with a latent state. We formulate the transition between narrative states as sequence prediction. We associate a set of Content features with each latent state, and Evolution features with the transitions between states.
Note that the desire fulfillment status, f, is viewed as an outcome of this inference process and is modeled as the last step of this chain using a discriminative classifier which makes its prediction based on the final latent state and a Structure-independent feature set, φ(d). This feature set can be handcrafted to include information that could not be modeled by the latent states, such as long-range dependencies, and other cumulative features based on the Desire-expression, d, and the Evidence fragments, e_j.
We quantify these predictions using a linear model which depends on the various features, φ, and corresponding weights, w. Using the Viterbi algorithm, we can compute the score associated with the optimal state sequence for a given input story as:
score = max_h [ w · φ(e, d, f, h) ]    (1)
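Since the latent chain is first-order, the maximisation in Eq. 1 can be carried out with standard Viterbi decoding over the H state values. The sketch below abstracts the feature and weight computation into a score callback (an assumption of this sketch, not an interface from the paper):

def viterbi(num_states, num_steps, score):
    """Return (best score, best state sequence) for a first-order chain.

    score(j, prev, cur) is the local score of state `cur` at step j given
    state `prev` at step j-1 (prev is None at j == 0).
    """
    best = [score(0, None, s) for s in range(num_states)]
    back = []
    for j in range(1, num_steps):
        new_best, pointers = [], []
        for cur in range(num_states):
            cands = [best[p] + score(j, p, cur) for p in range(num_states)]
            arg = max(range(num_states), key=lambda p: cands[p])
            pointers.append(arg)
            new_best.append(cands[arg])
        back.append(pointers)
        best = new_best
    # Backtrack from the best final state.
    state = max(range(num_states), key=lambda s: best[s])
    path = [state]
    for pointers in reversed(back):
        state = pointers[state]
        path.append(state)
    return max(best), list(reversed(path))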
Learning and Inference
During training, we maximize the cumulative scores of all data instances using an iterative process (Alg. 1). Each iteration of this algorithm consists of two steps. In the first step, for every instance, it uses the Viterbi algorithm (and the weights from the previous iteration, w_{t-1}) to find the highest scoring latent state sequence, h, that agrees with the provided label (the fulfillment state), f. In the following step, it uses the state sequence determined above to get refined weights for the t-th iteration, w_t, using structured perceptron [7]. The algorithm is similar to an EM algorithm with 'hard' assignments, albeit with a different objective. At test time, we use the learned weights and Viterbi decoding to compute the fulfillment state and the best scoring state sequence. Our approach is related to latent structured perceptron, though we only use the last state (and the structure-independent features) for prediction.
Algorithm 1: Training
1: Input: Labeled set {(d, e, f)_i ∀i ∈ {1 ... D}}; and T: number of iterations
2: Output: Weights w
3: Initialization: Initialize w randomly
4: for t : 1 to T do
5:   ĥ_i = argmax_{h_i} [w_{t-1} · φ(e_i, d_i, f, h_i)] such that f = f_i, ∀i ∈ {1 ... D}
6:   w_t = StructuredPerceptron({(d, e, ĥ, f)_i})
7: end for
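In code, one iteration of Alg. 1 amounts to label-constrained Viterbi decoding followed by a structured-perceptron pass. The sketch below treats the decoder and feature map as callbacks (decode, phi), which, together with the vector representation of w, are assumptions of this sketch rather than the paper's interfaces:

def train(data, T, phi, decode, w):
    """Hard-EM style training of the latent-state model, following Alg. 1.

    data: list of (d, e, f) triples.  decode(w, d, e, f=None) returns the
    best-scoring (h, f) pair, optionally constrained to a given label f.
    phi(d, e, h, f) returns a feature vector (e.g., a numpy array), so
    that w supports vector addition and subtraction.
    """
    for t in range(T):
        # Line 5: best latent sequences that agree with the gold labels.
        decoded = [(d, e, decode(w, d, e, f=f)[0], f) for d, e, f in data]
        # Line 6: a structured-perceptron pass, treating (h, f) as gold.
        for d, e, h_gold, f_gold in decoded:
            h_pred, f_pred = decode(w, d, e, f=None)
            if (h_pred, f_pred) != (h_gold, f_gold):
                w = w + phi(d, e, h_gold, f_gold) - phi(d, e, h_pred, f_pred)
    return w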
We define a focal word as the clausal complement of the Desire-verb ('wanted to', 'hoped to', 'wished to'). If the clausal complement is a verb, the focal word is its past tense form; e.g., the focal word in the Desire-expression in Fig. 4 is 'helped'. A focal word is not simply the verb following the Desire-verb: e.g. in the Desire-expression in Fig. 1, the verb immediately following the Desire-verb is 'be', whereas the focal word is 'buried'.
5. Emotional State (F10-F11): Signals about the fulfillment status could also emanate from the emotional state of the Subject. A happy or content Desire-subject can be indicative of a fulfilled desire (e.g. in Evidence e3 in Fig. 4), and vice versa. We quantify the emotional state of the Subject(s) using the connotations of the adjectives modifying their mentions.
6. Action features (F12-F15): These features analyze the intended action and the actions taken by various entities. We first identify the intended action: the verb immediately following the Desire-verb in the Desire-expression. E.g., in Fig. 4 the intended action is to 'help'. Thereafter, we design features that capture the connotative agreement between the intended action and the actions taken by the Desire-subject(s) in the Evidences. We also include features that describe the connotations of actions (verbs) affecting the Desire-subject(s). E.g., in e1 of Fig. 4, the action by the Desire-subject (marked in blue), 'offered', is in connotative agreement with the intended action, 'help' (both have positive connotations according to [15]). Also, the actions affecting the subject ('thanked', 'gifted') have positive connotations, indicating desire fulfillment. (A toy sketch of these connotation counts is given after this list.)
7. Sustenance Features (F16-F17): LSNM uses a chain of latent states to abstractly represent the content of the Evidences with respect to the Desire Fulfillment Status. At any point in the chain, the model has an expectation of the fulfillment status. The sustenance features indicate if the expectation should intensify, remain the same or be reversed by the incoming Evidence fragment. This is achieved by designing features indicating if the Evidence fragment starts with a 'conforming' or a 'dissenting' phrase. E.g., e3 in Fig. 4 starts with a conforming phrase, 'Overall', indicating that the fulfillment status expectation (positive in e2) should not change. Table 3 presents some examples of the two categories. These phrases were chosen using various discourse senses mentioned in [27]. The complete list is available on the first author's webpage.
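As a toy illustration of the connotation-count features (F10-F15), the sketch below reduces them to lexicon lookups; connotation(word) stands in for a lookup in the Connotation Lexicon of [15] and returns '+', '-' or None (this interface is an assumption of the sketch):

def emotion_action_features(subject_adjectives, agent_verbs, intended_action, connotation):
    """Connotation counts for one Evidence fragment (cf. F10-F15).

    subject_adjectives: adjectives modifying the Desire-subject's mentions;
    agent_verbs: verbs whose agent is the Desire-subject.
    """
    feats = {"+adj": 0, "-adj": 0, "+agent": 0, "-agent": 0}
    for adj in subject_adjectives:                 # emotional state (F10, F11)
        c = connotation(adj)
        if c in ("+", "-"):
            feats[c + "adj"] += 1
    target = connotation(intended_action)
    for verb in agent_verbs:                       # action agreement (F12, F13)
        c = connotation(verb)
        if c is not None and target is not None:
            feats["+agent" if c == target else "-agent"] += 1
    return feats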
Unstructured Models
For the unstructured models, we directly used the Entailment and Discourse features (F1 to F3 in Table 2). For features F4 to F15, we summed their values across all Evidences of an instance. This ensured a constant size of the feature set in spite of the variable number of Evidence fragments per instance.
Latent Structure Narrative Model
Our Structured model requires three types of features: (a) Content features that help the model assign latent states to Evidence fragments based on their content, (b) Evolution features that help in modeling the evolution of the story expressed by the Evidence fragments, and (c) Structure-independent features used while making the final prediction.

Content features: These features depend on the latent state of the model, h_j, and the content of the corresponding Evidence, e_j (expressed using features F4 to F15 in Table 2).
1. φ(h_j, e_j) = α if the current state is h_j; 0 otherwise, where α ∈ F4 to F15.

Evolution features: These features depend on the current and previous latent states, h_j and h_{j-1}, and/or the current Evidence fragment, e_j:
1. φ(h_{j-1}, h_j) = 1 if the previous state is h_{j-1} and the current state is h_j; 0 otherwise.
2. φ(h_{j-1}, h_j, e_j) = α if the previous state is h_{j-1} and the current state is h_j; 0 otherwise, where α ∈ F16 and F17.
3. φ(h_0) = 1 if the start state is h_0; 0 otherwise.

Structure-independent features φ(d): This feature set is exactly the same as that used by the Unstructured models.
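These indicator features can be realised as sparse maps keyed by state tuples; a minimal sketch (the dictionary-based sparse representation is an implementation choice of this sketch, not the paper's):

def content_features(h_cur, evidence_feats):
    """Content features: tie each Evidence-level value (F4-F15) to the state."""
    return {("content", h_cur, name): value
            for name, value in evidence_feats.items()}

def evolution_features(h_prev, h_cur, sustenance_feats):
    """Evolution features for one transition of the latent chain."""
    feats = {("trans", h_prev, h_cur): 1.0}
    for name, value in sustenance_feats.items():   # isConforming, isDissenting
        feats[("trans+ev", h_prev, h_cur, name)] = value
    return feats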
Datasets
We have used two real-world datasets for our experiments: MCTest and SimpleWiki consisting of 174 and 1004 manually annotated instances respectively. Both the datasets (available on the first author's webpage) were collected and annotated in a similar fashion.
Collection and annotation: The MCTest data originated from the Machine Comprehension Test dataset [28], which contained a set of 660 stories and associated questions. The vocabulary and concepts are limited to the extent that the stories would be understandable by 7-year-olds. We discard the questions and only consider the free text of the stories.
The SimpleWiki dataset was created from the textual content of an October 2014 4 dump of the Simple English Wikipedia. We discarded all lists, tables and titles in the wiki pages. We chose Simple English Wikipedia instead of Wikipedia articles to limit the complexity of the vocabulary and the world knowledge required to comprehend the content, thus making the task simpler and more manageable.
The Desire-subject(s) and the Desire Fulfillment Status were manually annotated on CrowdFlower 5 . Each instance was annotated by 3 or more annotators as determined by CrowdFlower using expected annotation accuracy. Annotators were also required to demonstrate proficiency on an initial set of 5 test instances. To avoid annotator fatigue, each annotator was presented only 3 instances per session. The mean CrowdFlower confidence (inter-annotator agreement weighted by their trust scores) of the annotations was 0.92.
Training and Test Sets: The SimpleWiki and MCTest data consisted of about 1000 and 175 instances, 20% of which was held-out as test sets. In the test sets of SimpleWiki and MCTest, 28% and 56% of the data belonged to the positive (desire fulfilled) class respectively.
Empirical Evaluation
For evaluation, we compared test set performances using the F1 score of the positive (desire fulfilled) class. We also included a simple Logistic Regression baseline based on Bag-of-Words (BoW) features. Table 4 reports the performances of these models. For training the unstructured model, we experimented with different algorithms and show the results for the best two models: LR (Logistic Regression) and DT (Decision Trees). We report median performance values over 100 random restarts of our model, since its performance depends on the initialization of the weights. Also, our model requires the number of latent states, H, as input, which was set to 2 and 15 for the MCTest and SimpleWiki datasets respectively, using cross-validation. The difference in optimal H values (and F1 scores) for the two datasets could be attributed to the difference in complexity of the language and concepts used in them. The MCTest dataset consists of children's stories, focusing on simple concepts and goals (e.g., 'wanting to go skating'), whose fulfillment is indicated explicitly, in simple and focused language (e.g., "They went to the skating rink together."). On the other hand, SimpleWiki describes real-life desires (e.g., 'wanting to conquer a country'), which require sophisticated planning over multiple steps, which may provide only indirect indication of the desire fulfillment status. This added complexity resulted in a harder classification problem, and increased the complexity of inference over several latent states.
The table shows that LSNM outperforms the unstructured models, indicating the benefit of modeling narrative structure. Also, the unstructured models perform better than the TE model, emphasizing the need for simultaneous analysis of all of the Evidence text. We obtained similar results during cross-validation. For instance, the TE model, the best unstructured model and LSNM yielded F1 scores of 56.9, 67.9 and 70.2 respectively on the MCTest data. This shows that modeling the narrative presented by the Evidences results in better prediction of the desire fulfillment status.
Related Work
Expressions of desires and wishes have attracted psycholinguists [29] and linguists [1] alike. [17] detect wishes from text. Analyzing desires adds a new dimension to more general tasks like opinion mining [26], where manufacturers and advertisers want to discover users' desires or needs from online reviews. Another use case is resolving issues for community forum users. For instance, the number of posts in Massive Open Online Course forums often overwhelms the instructional staff [6]. Identifying posts containing unresolved issues can help focus the efforts of the instructional staff.
Our problem is related to Machine Comprehension [28]. However, unlike most systems, designed for understanding large textual collections (macro-reading) [12, 3, 13], this work focuses on micro-reading, understanding short pieces of text. [2] also address micro-reading but with a different goal: answering domain-specific questions about entities in a paragraph.
Our task is also related to Recognizing Textual Entailment (RTE) [8,9]. However, we show that solving it additionally requires modeling the narrative structure of the text.
There have been several attempts at modeling narrative structures, which include narrative schemas [5, 4], plot units [20] and Story Intention Graphs [11]. Previous work has also studied connotations and word effects on narrative modeling [15, 18]. Our approach is closely related to these methods. While focusing on a specific classification task, our structured model and features share similar motivation.
The AI task of recognizing the plans of characters in a narrative, viewing them as intentional agents [25, 32, 22], is also relevant. However, the focused nature of our task lets us employ latent variables to model the transitions between expectations and plans.
Latent structured models have been used previously for solving various problems in computer vision and NLP [31,33,14] though their problem settings and goals are different.
Conclusion
In this paper we have addressed the novel task of analyzing small pieces of text containing an expression of a desire, to identify if the desire was fulfilled in the given text. For solving this problem, we adopt three approaches based on different assumptions. We first use a textual entailment model to analyze small fragments of text independently. Our second approach, an unstructured model, assumes that it is not sufficient to analyze different pieces of text independently; instead, the complete text should be analyzed as a whole to identify desire fulfillment. Our third approach, a structured model, is based on the hypothesis that identifying desire fulfillment requires an understanding of the narrative structure, and models it using latent variables. We compare the performances of these models on two different datasets that we have annotated and released. Our experiments establish the need to incorporate the narrative structure of the storyline offered by the text to better understand desire fulfillment.
Figure 1: Example of a Desire-expression (d), Evidence fragments (e1...e5) and a binary Desire Fulfillment Status (f). The Desire-subject and Desire-verb are marked in blue and bold fonts respectively in the Desire-expression.
Figure 2: Structured model (LSNM) diagram. Evidence e_i, Desire Fulfillment, f, and Structure-independent features, φ(d), are observed; states, h_i, are hidden. e_i refers to the i-th Evidence out of a total of N Evidences.
Figure 3: Framework for feature extraction for an example.
Figure 4: Artificial example indicating feature utility. The Desire-subject mentions are marked in blue, actions in bold and emotions in italics. The Discourse feature is underlined.

1. Entailment (F1): This feature simply incorporates the output of the Textual Entailment model.
2. Discourse (F2-F3): These features aim to identify indications of obstacles to, or progress of, desire fulfillment in the Desire-expression itself, based on discourse connectives. E.g., 'so' (underlined) in the Desire-expression in Fig. 4 indicates progress of desire fulfillment.
3. Focal words (F4-F8): These features identify the word(s) most closely related to the desire, and look for their presence in the Evidences.
Table 2: Feature definitions (Sec. 3). F1-F3 are extracted for each example, while F4-F17 are extracted for each Evidence.

Entailment
  F1: TEPrediction: Binary prediction of the Textual Entailment model [30].
Discourse
  F2, F3: ButPresent, SoPresent: Binary features indicating if a 'but' or 'so' (respectively) followed the Desire-verb ('wanted to', 'wished to' etc.) in the Desire-expression.
Focal Word
  F4, F5, F6: focal count, focal syn and focal ant count: Count of occurrences of the focal word(s), their WordNet [24] synonyms and antonyms (respectively) in the Evidence. Occurrences of synonyms or antonyms were identified only when they had the same POS tag as the focal word(s).
  F7: focal+syn count: Sum of F4 and F5.
  F8: focal lemm count: Count of occurrences of lemmatized forms of the focal word(s) in the Evidence.
Desire-subject mentions
  F9: sub count: Count of all mentions (direct and co-referent) of the Desire-subject in the Evidence.
Emotional State
  F10, F11: +adj, -adj count: Counts of occurrences of 'positive' and 'negative' adjectives (respectively) modifying the direct and co-referent mentions of the Desire-subject in the Evidence.
Action
  F12, F13: +Agent, -Agent count: Number of times the connotation of verbs appearing in the Evidence agreed with and disagreed with (respectively) that of the intended action.
  F14, F15: +Patient, -Patient count: Count of occurrences of 'positive' and 'negative' verbs (respectively) in the Evidence which had the Desire-subject as the patient.
Sustenance
  F16, F17: isConforming, isDissenting: Binary features indicating if the Evidence starts with a conforming or dissenting phrase (respectively). See Table 3 for example phrases.
Table 3: Some examples of conforming and dissenting phrases.
Table 4: Test set performances. Our structured model, LSNM, outperforms the unstructured, TE and BoW models.

Data        Model type           Name   P     R     F
MCTest      Bag-Of-Words         BoW    41.2  50.0  45.2
MCTest      Textual Entailment   TE     76.1  45.4  56.9
MCTest      Unstructured         LR     70.6  63.2  66.7
MCTest      Unstructured         DT     71.4  52.6  60.6
MCTest      Structured           LSNM   69.6  84.2  74.4
SimpleWiki  Bag-Of-Words         BoW    28.2  20.0  23.4
SimpleWiki  Textual Entailment   TE     37.0   8.9  14.3
SimpleWiki  Unstructured         LR     50.0   8.9  15.2
SimpleWiki  Unstructured         DT     42.9   5.4   9.5
SimpleWiki  Structured           LSNM   37.5  21.3  27.1
We also tested the TE model using the default setting, optimized for the RTE task; however, it performed very poorly.
Features. We now describe our features and how they are used by the models. Table 2 defines our features and Fig. 3 describes their extraction for an example. They capture different semantic aspects of the Desire-expression and Evidences, such as entities, their actions and connotations, and their emotive states, using lexical resources like the Connotation Lexicon [15], WordNet and our lexicon of conforming and dissenting phrases. Before extracting features, we pre-processed the text 3 and extracted all adjectives and verbs (with their negation statuses and connotations) associated with the Desire-subject using dependency-parsing based rules.
3 We obtained POS tags, dependency parses, and resolved co-references using Stanford CoreNLP [23].
4 http://dumps.wikimedia.org/simplewiki/
5 http://www.crowdflower.com/
L. Barak, A. Fazly, and S. Stevenson. Acquisition of desires before beliefs: A computational investigation. Proceedings of CoNLL-2013, 2013.
J. Berant, V. Srikumar, P.-C. Chen, A. Vander Linden, B. Harding, B. Huang, P. Clark, and C. D. Manning. Modeling biological processes for reading comprehension. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, Doha, Qatar, October 2014.
A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. R. H. Jr., and T. M. Mitchell. Toward an architecture for never-ending language learning. In Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11-15, 2010, 2010.
N. Chambers and D. Jurafsky. Unsupervised learning of narrative event chains. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, 2008.
N. Chambers and D. Jurafsky. Unsupervised learning of narrative schemas and their participants. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2, pages 602-610, 2009.
S. Chaturvedi, D. Goldwasser, and H. Daumé III. Predicting instructor's intervention in MOOC forums. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1501-1511, Baltimore, Maryland, June 2014. Association for Computational Linguistics.
M. Collins. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1-8, 2002.
I. Dagan, B. Dolan, B. Magnini, and D. Roth. Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering, 16(01):105-105, 2010.
I. Dagan, O. Glickman, and B. Magnini. The PASCAL recognising textual entailment challenge. In Machine Learning Challenges, Lecture Notes in Computer Science, volume 3944, pages 177-190. Springer, 2006.
D. K. Elson. Detecting story analogies from annotations of time, action and agency. In Proceedings of the LREC 2012 Workshop on Computational Models of Narrative, 2012.
D. K. Elson. Dramabank: Annotating agency in narrative discourse. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC-2012), 2012.
O. Etzioni, M. Banko, and M. J. Cafarella. Machine reading. In Proceedings, The Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, pages 1517-1519, 2006.
A. Fader, S. Soderland, and O. Etzioni. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535-1545, Stroudsburg, PA, USA, 2011.
P. F. Felzenszwalb, D. A. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 2008.
S. Feng, J. S. Kang, P. Kuznetsova, and Y. Choi. Connotation lexicon: A dash of sentiment beneath the surface meaning. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), Sofia, Bulgaria, August 2013.
M. Georgeff, B. Pell, M. Pollack, M. Tambe, and M. Wooldridge. The belief-desire-intention model of agency. In Intelligent Agents V: Agents Theories, Architectures, and Languages, pages 1-10. Springer, 1999.
A. B. Goldberg, N. Fillmore, D. Andrzejewski, Z. Xu, B. Gibson, and X. Zhu. May all your wishes come true: A study of wishes and how to recognize them. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 263-271, 2009.
A. Goyal, E. Riloff, and H. Daumé III. Automatically producing plot unit representations for narrative text. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 77-86, 2010.
B. J. Grosz and C. L. Sidner. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204, 1986.
W. G. Lehnert. Plot units and narrative summarization. Cognitive Science, 5(4):293-331, 1981.
B. Magnini, R. Zanoli, I. Dagan, K. Eichler, G. Neumann, T. Noh, S. Padó, A. Stern, and O. Levy. The excitement open platform for textual inferences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, System Demonstrations, pages 43-48, 2014.
I. Mani. Computational Modeling of Narrative. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers, 2012.
C. D. Manning, M. Surdeanu, J. Bauer, J. Finkel, S. J. Bethard, and D. McClosky. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55-60, 2014.
G. A. Miller. Wordnet: A lexical database for english. Commun. ACM, 38(11):39-41, Nov. 1995.
E. T. Mueller. Understanding goal-based stories through model finding and planning. In Magerko, B., and Riedl, M., eds., Intelligent Narrative Technologies: Papers from the AAAI Fall Symposium, pages 95-101, 2007.
B. Pang and L. Lee. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2):1-135, 2007.
R. Prasad, E. Miltsakaki, N. Dinesh, A. Lee, A. Joshi, L. Robaldo, and B. Webber. The Penn Discourse Treebank 2.0 annotation manual. Technical report, University of Pennsylvania, Institute for Research in Cognitive Science, December 2007.
M. Richardson, C. J. C. Burges, and E. Renshaw. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 193-203, 2013.
M. Shatz, H. M. Wellman, and S. Silber. The acquisition of mental verbs: A systematic investigation of the first reference to mental state. Cognition, 14(3):301-321, 1983.
A. Stern and I. Dagan. BIUTEE: A modular open-source system for recognizing textual entailment. In The 50th Annual Meeting of the Association for Computational Linguistics, Proceedings of the System Demonstrations, pages 73-78, 2012.
O. Täckström and R. McDonald. Discovering fine-grained sentiment with latent variable structured prediction models. In Proceedings of the 33rd European Conference on Advances in Information Retrieval, ECIR'11, pages 368-374, 2011.
R. Wilensky. Understanding Goal-based Stories. PhD thesis, New Haven, CT, USA, 1978. AAI7916531.
A. Yessenalina, Y. Yue, and C. Cardie. Multi-level structured models for document-level sentiment classification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1046-1056, 2010.
| [] |
[
"NEURAL SPEED READING WITH STRUCTURAL-JUMP- LSTM",
"NEURAL SPEED READING WITH STRUCTURAL-JUMP- LSTM"
] | [
"Christian Hansen c.hansen@di.ku.dk \nDepartment of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen\n",
"Casper Hansen \nDepartment of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen\n",
"Stephen Alstrup s.alstrup@di.ku.dk \nDepartment of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen\n",
"Jakob Grue \nDepartment of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen\n",
"Simonsen simonsen@di.ku.dk \nDepartment of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen\n",
"Christina Lioma c.lioma@di.ku.dk \nDepartment of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen\n"
] | [
"Department of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen",
"Department of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen",
"Department of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen",
"Department of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen",
"Department of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen",
"Department of Computer Science\nUniversity of Copenhagen Denmark\n2100Copenhagen"
] | [] | Recurrent neural networks (RNNs) can model natural language by sequentially "reading" input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as "neural speed reading", either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text. | null | [
"https://arxiv.org/pdf/1904.00761v2.pdf"
] | 90,258,012 | 1904.00761 | d647a64de005113f7bb5859347f5edca81bc0eec |
NEURAL SPEED READING WITH STRUCTURAL-JUMP-LSTM
Christian Hansen c.hansen@di.ku.dk
Department of Computer Science
University of Copenhagen Denmark
2100Copenhagen
Casper Hansen
Department of Computer Science
University of Copenhagen Denmark
2100Copenhagen
Stephen Alstrup s.alstrup@di.ku.dk
Department of Computer Science
University of Copenhagen Denmark
2100Copenhagen
Jakob Grue Simonsen simonsen@di.ku.dk
Department of Computer Science
University of Copenhagen Denmark
2100Copenhagen
Christina Lioma c.lioma@di.ku.dk
Department of Computer Science
University of Copenhagen Denmark
2100Copenhagen
NEURAL SPEED READING WITH STRUCTURAL-JUMP-LSTM
Published as a conference paper at ICLR 2019
Recurrent neural networks (RNNs) can model natural language by sequentially "reading" input tokens and outputting a distributed representation of each token. Due to the sequential nature of RNNs, inference time is linearly dependent on the input length, and all inputs are read regardless of their importance. Efforts to speed up this inference, known as "neural speed reading", either ignore or skim over part of the input. We present Structural-Jump-LSTM: the first neural speed reading model to both skip and jump text during inference. The model consists of a standard LSTM and two agents: one capable of skipping single words when reading, and one capable of exploiting punctuation structure (sub-sentence separators (,:), sentence end symbols (.!?), or end of text markers) to jump ahead after reading a word. A comprehensive experimental evaluation of our model against all five state-of-the-art neural reading models shows that Structural-Jump-LSTM achieves the best overall floating point operations (FLOP) reduction (hence is faster), while keeping the same accuracy or even improving it compared to a vanilla LSTM that reads the whole text.
INTRODUCTION
Recurrent neural networks (RNNs) are a popular model for processing sequential data. The Gated Recurrent Unit (GRU) (Chung et al., 2014) and Long Short Term Memory (LSTM) (Hochreiter & Schmidhuber, 1997) are RNN units developed for learning long term dependencies by reducing the problem of vanishing gradients during training. However, both GRU and LSTM incur fairly expensive computational costs, with e.g. LSTM requiring the computation of 4 fully connected layers for each input it reads, independently of the input's importance for the overall task.
Based on the idea that not all inputs are equally important, and that relevant information can be spread throughout the input sequence, attention mechanisms were developed (Bahdanau et al., 2015) to help the network focus on important parts of the input. With soft attention, all inputs are read, but the attention mechanism is fully differentiable. In comparison, hard attention completely ignores part of the input sequence. Hard attention mechanisms have been considered in many areas, ranging from computer vision (Mnih et al., 2014;Campos et al., 2018) where the model learns what parts of the image it should focus on, to natural language processing (NLP), such as text classification and question answering (Yu et al., 2017;Campos et al., 2018;Yu et al., 2018), where the model learns which part of a document it can ignore. With hard attention, the RNN has fewer state updates, and therefore fewer floating point operations (FLOPs) are needed for inference. This is often denoted as speed reading: obtaining the same accuracy while using (far) fewer FLOPs (Yu et al., 2017;Seo et al., 2018;Huang et al., 2017;Fu & Ma, 2018). Prior work on speed reading processes text as chunks of either individual words or blocks of contiguous words. If the chunk being read is important enough, a full state update is performed; if not, the chunk is either ignored or a very limited amount of computations are done. This is followed by an action aiming to speed up the reading, e.g. skipping or jumping forward in text.
Inspired by human speed reading, we contribute an RNN speed reading model that ignores unimportant words in important sections, while also being able to jump past unimportant sections of the text.

Jump based models. The method of Yu et al. (2017) reads a fixed number of words, and then may decide to jump a varying number of words ahead (bounded by a maximum allowed amount) in the text, or to jump directly to the end. The model uses a fixed number of total allowed jumps, and the task for the network is therefore to learn how best to spend this jump budget. The decision is trained using reinforcement learning with the REINFORCE algorithm (Williams, 1992), where the reward is defined based only on whether the model predicts correctly or not. Thus, the reward does not reflect how much the model has read. The model of Fu & Ma (2018) also has a fixed number of total jumps and is very similar to the work by Yu et al. (2017); however, it allows the model to jump both back and forth a varying number of words in order to allow for re-reading important parts. Yu et al. (2018) use a CNN-RNN network where a block of words is first read by a CNN and then read as a single input by the RNN. After each block is read, the network decides to either re-read the block, jump a varying number of blocks ahead, or jump to the end. The decision is trained using reinforcement learning, where both REINFORCE and actor-critic methods were tested, with the actor-critic method leading to more stable training. The reward is based on the loss for the prediction and the FLOPs used by the network to make the prediction; FLOP reduction is thereby directly tied into the reward signal. Huang et al. (2017) propose a simple early-stopping method that uses an RNN and reads on a word level, where the network learns when to stop. This can be considered a single large jump to the end of the text.
Skip and skim based models. Seo et al. (2018) present a model with two RNNs, a "small" RNN and a "big" RNN. At each time step the model chooses either the big or the small RNN to update the state, based on the input and previous state, which can be considered as text skimming when the small RNN is chosen. This network uses a Gumbel softmax to handle the non-differentiable choice, instead of the more common REINFORCE algorithm. Campos et al. (2018) train an LSTM that may choose to ignore a state update, based on the input. This can be considered as completely skipping a word, in contrast to skimming a word as done by Seo et al. (2018). This network uses a straight-through estimator to handle the non-differentiable action choice. The approach is applied to image classification, but we include it in this overview for completeness.
Other speed reading models. Johansen & Socher (2017) introduce a speed reading model for sentiment classification where a simple model with low computational cost first determines if an LSTM model should be used, or a bag-of-words approach is sufficient. The method of Choi et al. (2017) performs question answering by first using a CNN-based sentence classifier to find candidate sentences, thereby making a summary of the whole document relevant for the given query, and then using the summary in an RNN.
More widely, gated units, such as GRU and LSTM, face problems with long input sequences (Neil et al., 2016). Speed reading is one way of handling this problem by reducing the input sequence. In contrast, Neil et al. (2016) handle the problem by only allowing updates to part of the LSTM at a given time point, where an oscillating function controls which part of the LSTM state can currently be updated. Cheng et al. (2016) handle it by using memory networks to store an array of states; the state at a given point in time then comes from applying an attention mechanism over the stored states, which addresses the issue of older states being overwritten.

Figure 1: Overview of our proposed model. The input at a given time is the action of the previous skip agent ($S_{t-1}$), the previous jump agent action ($J_{t-1}$), and the word embedded token ($\text{token}_i$). $\text{token}_{next}$ corresponds to the next word considered after skipping or jumping. Depending on the skip decision, the no/yes in the diamond shaped skip-box corresponds to which LSTM output and state should be used for the next input (updated or previous, respectively).
Overall, the above state-of-the-art models are either jump or skip/skim based. We present the first speed reading model that both jumps and skips. Furthermore, current jumping-based models use a variable jump size for each input, without considering the inherent structure of the text. In contrast, our model defines jumps based on the punctuation structure of the text. This combined approach of both skipping and jumping according to text structure yields notable gains in efficiency (reduced FLOPs) without loss of effectiveness (accuracy). We next present our model.
STRUCTURAL-JUMP-LSTM MODEL
Our Structural-Jump-LSTM model consists of an ordinary RNN with LSTM units and two agents: the skip agent and the jump agent. Each of these agents computes a corresponding action distribution, from which the skip and jump actions are sampled. The skip agent chooses to either skip a single word, thereby not updating the LSTM, or let the LSTM read the word, leading to an update of the LSTM state. The jump agent is responsible for jumping forward in the text based on punctuation structure (henceforth referred to as structural jumps). A structural jump is a jump to the next word, or the next sub-sentence separator symbol (,;), or the next end of sentence symbol (.!?), or to the end of the text (which is also an instance of end of sentence). The purpose of using two agents is that the skip agent can ignore unimportant words with very little computational overhead, while the jump agent can jump past an unimportant part of the text. As both the skip and jump agent contribute to a reduction in FLOPs (by avoiding LSTM state updates), the Structural-Jump-LSTM is faster at inference than a vanilla LSTM. Figure 1 shows an overview of our model: the input in each time step is the previous actions of the skip agent (S), of the jump agent (J), and the current input token. The output from the previous LSTM is used in combination with the input to make a skip decision: if the word is skipped, the last LSTM state will not be changed. Otherwise a standard LSTM cell is applied, whose output is fed to the jump agent, and a jump decision is made. Both agents make their choice using a fully connected layer, with a size that is significantly smaller than the LSTM cell size, to reduce the number of FLOPs by making the overhead of the agents as small as possible.
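To make the structural jump concrete, the following is a minimal Python sketch of how a sampled jump action could be resolved to the next reading position (our own illustration; the function name and the convention of resuming just after the separator are assumptions, not taken from the released implementation):

```python
# Sub-sentence separators and end-of-sentence symbols from the model description.
SUB_SENTENCE = {",", ";"}
SENTENCE_END = {".", "!", "?"}

def next_position(tokens, pos, jump_action):
    """Resolve a structural jump to the index of the next token to read.

    jump_action: 0 = next word, 1 = next sub-sentence separator (,;),
                 2 = next end of sentence (.!?), 3 = end of text.
    """
    if jump_action == 0:
        return pos + 1
    if jump_action == 3:
        return len(tokens)  # terminate and predict immediately
    # assumption: a sub-sentence jump also stops at an end-of-sentence symbol
    targets = SUB_SENTENCE | SENTENCE_END if jump_action == 1 else SENTENCE_END
    for i in range(pos + 1, len(tokens)):
        if tokens[i] in targets:
            return i + 1  # resume reading just after the separator
    return len(tokens)  # no separator left: treat as a jump to the end
```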
Section 3.1 details how inference is done in this model, and Section 3.2 presents how the network is trained.
INFERENCE
At a given time step $t$, Structural-Jump-LSTM reads input $x_i \in \mathbb{R}^d$, and the LSTM has a previous output $o_{t-1} \in \mathbb{R}^m$ and state $s_{t-1} \in \mathbb{R}^m$. At time step $t-1$ the skip agent first takes action $a_{t-1}^{\text{skip}}$ sampled from the skip-action distribution $p_{t-1}^{\text{skip}}$, and the jump agent takes action $a_{t-1}^{\text{jump}}$ sampled from the jump-action distribution $p_{t-1}^{\text{jump}}$. If $a_{t-1}^{\text{skip}}$ is to skip the word, then the jump agent takes no action, i.e. no jump is made. At time step $t$ the network first needs to sample $a_t^{\text{skip}}$ from $p_t^{\text{skip}}$, which is computed in each time step as:
$$p_t^{\text{skip}} = \text{softmax}\left(d^{\text{LIN}}\left(\text{state}_t^{\text{skip}}\right)\right) \tag{1}$$

$$\text{state}_t^{\text{skip}} = d^{\text{ReLU}}\left(x_t \oplus o_{t-1} \oplus \text{onehot}(a_{t-1}^{\text{skip}}) \oplus \text{onehot}(a_{t-1}^{\text{jump}})\right) \tag{2}$$

where $d^{\text{activation}}$ is a fully connected layer with the given activation, ReLU is the Rectified Linear Unit, LIN is the linear activation, and $\oplus$ denotes concatenation of vectors. At inference time the action can either be sampled from the distribution, or chosen greedily by always selecting the most probable action.
If the action $a_t^{\text{skip}} = 0$, we skip the word, set $o_t = o_{t-1}$ and $s_t = s_{t-1}$, and the network moves to the next word at position $i+1$. If $a_t^{\text{skip}} = 1$, the word is read via the LSTM, whose update produces the output $o_t$ and new state $s_t$ for the next step. The probability distribution $p_t^{\text{jump}}$, from which action $a_t^{\text{jump}}$ is sampled, is computed as:
$$p_t^{\text{jump}} = \text{softmax}\left(d^{\text{LIN}}\left(\text{state}_t^{\text{jump}}\right)\right) \tag{3}$$

$$\text{state}_t^{\text{jump}} = d^{\text{ReLU}}(o_t) \tag{4}$$
If the sampled action $a_t^{\text{jump}}$ corresponds to, e.g., a jump to the next sentence, then the current LSTM output and state will be kept, and all following inputs will be ignored until a new sentence begins. When there are no more inputs, the output of the RNN is used to make a final prediction. If the action is to jump to the end of the text, then the final prediction will be made immediately based on the current output.
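Equations 1-4 can be summarised as a single inference step. Below is a minimal PyTorch sketch (our own reading of the model: the greedy action choice, batch size of one, and all layer and variable names are illustrative assumptions rather than the authors' implementation):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralJumpStep(nn.Module):
    """One time step of the model (Eqs. 1-4); sizes follow Section 4.1."""
    def __init__(self, emb_dim=300, cell_size=128, agent_size=25,
                 n_skip=2, n_jump=4):
        super().__init__()
        self.lstm = nn.LSTMCell(emb_dim, cell_size)
        self.skip_hidden = nn.Linear(emb_dim + cell_size + n_skip + n_jump,
                                     agent_size)
        self.skip_out = nn.Linear(agent_size, n_skip)
        self.jump_hidden = nn.Linear(cell_size, agent_size)
        self.jump_out = nn.Linear(agent_size, n_jump)

    def forward(self, x_t, o_prev, s_prev, a_skip_prev, a_jump_prev):
        # Eq. 2: skip-agent state from input, previous output and one-hot actions
        skip_state = F.relu(self.skip_hidden(
            torch.cat([x_t, o_prev, a_skip_prev, a_jump_prev], dim=-1)))
        p_skip = F.softmax(self.skip_out(skip_state), dim=-1)  # Eq. 1
        a_skip = torch.argmax(p_skip, dim=-1)  # greedy; could also sample
        if a_skip.item() == 0:  # skip: keep previous output and state
            return o_prev, s_prev, a_skip, None
        o_t, s_t = self.lstm(x_t, (o_prev, s_prev))  # read the word
        jump_state = F.relu(self.jump_hidden(o_t))   # Eq. 4
        p_jump = F.softmax(self.jump_out(jump_state), dim=-1)  # Eq. 3
        a_jump = torch.argmax(p_jump, dim=-1)
        return o_t, s_t, a_skip, a_jump
```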
TRAINING
During training, Structural-Jump-LSTM is optimized with regards to two different objectives: 1) Producing an output that can be used for classification, and 2) learning when to skip and jump based on the inputs and their context, such that the minimum number of read inputs gives the maximum accuracy.
For objective 1, the output of the RNN can be used for classification, and the loss is computed as the cross entropy L class against the target.
For objective 2, the agents are non-differentiable due to the discrete sampling of the actions. Thus we choose to reformulate the problem as a reinforcement learning problem, where we define a reward function to maximize. In essence, the reward is given based on the amount read and whether or not the prediction is correct. We denote $R$ as the total reward associated with a sampled sequence of actions, e.g. $a_1^{\text{skip}}, a_2^{\text{skip}}, a_2^{\text{jump}}, \ldots, a_T^{\text{skip}}, a_T^{\text{jump}}$, and $R_t$ as the sum of reward from time $t$. Note that if the network chooses to skip a word, the sequence will not have a jump at that time step. We use an advantage actor-critic approach (Konda & Tsitsiklis, 2000) to train the agents in order to reduce the variance. The loss for the skip agent is given as:
$$L_{\text{actor}} = -\sum_{t=0}^{T} \log\left(p_t^{\text{skip}}\left(a_t^{\text{skip}} \mid \text{state}_t^{\text{skip}}\right)\right) \cdot \left(R_t - V_t^{\text{skip}}\right) \tag{5}$$

$$V_t^{\text{skip}} = d^{\text{LIN}}\left(\text{state}_t^{\text{skip}}\right) \tag{6}$$
where $V_t^{\text{skip}}$ is a value estimate of the given state, which is produced by a fully connected layer with output size 1. For the jump agent we do exactly the same as for the skip agent, and the sum of the two actor losses is denoted $L_{\text{actors}}$. The value estimate of a state corresponds to how much reward is collected when acting from this state, such that the estimated value is a smoothed function of how much reward the network expects to collect later. Using the advantage instead of the raw reward is beneficial, as the sign of the loss then depends on whether the achieved reward is higher or lower than the expected reward in a given state. This loss for the agent corresponds to the loss in the popular A3C algorithm (Mnih et al., 2016) in the case of $t_{\max}$ being large enough to always reach a terminal condition; this is not a problem in our setting, as documents are of finite length. The value estimate is trained together with both agents using the squared difference, with the targets being the observed values for each state (we denote this $L_{\text{critics}}$). Lastly, to provoke exploration in the network we add an entropy loss, where both agents' distributions are pulled towards a uniform distribution over the actions (we denote this loss $L_{\text{entropies}}$). The total loss for the network is then:
$$L_{\text{total}} = \alpha L_{\text{class}} + \beta L_{\text{actors}} + \gamma L_{\text{critics}} + \delta L_{\text{entropies}} \tag{7}$$

where $\alpha$, $\beta$, $\gamma$, and $\delta$ control the trade-offs between the components.
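To illustrate Equations 5-7, the following PyTorch sketch computes the actor, critic, and entropy losses for one agent over a sampled trajectory (our own simplification; the released code may differ, e.g. in how the entropy target is implemented):

```python
import torch
import torch.nn.functional as F

def agent_losses(log_probs, values, rewards_to_go, probs):
    """Actor, critic and entropy losses for one agent (Eqs. 5-6).

    log_probs:     log p_t(a_t | state_t) of the sampled actions, shape (T,)
    values:        value estimates V_t, shape (T,)
    rewards_to_go: R_t, reward collected from time t onwards, shape (T,)
    probs:         full action distributions, shape (T, n_actions)
    """
    advantage = rewards_to_go - values.detach()
    actor = -(log_probs * advantage).sum()        # Eq. 5
    critic = F.mse_loss(values, rewards_to_go)    # squared difference to targets
    # entropy loss: pull the action distributions towards uniform
    uniform = torch.full_like(probs, 1.0 / probs.shape[-1])
    entropy = F.kl_div(probs.log(), uniform, reduction="batchmean")
    return actor, critic, entropy

# Eq. 7: L_total = alpha * L_class + beta * L_actors
#                + gamma * L_critics + delta * L_entropies
```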
For each action a reward is given; the reward for a skip action at time t is:
$$r_t^{\text{skip}} = \begin{cases} -\frac{1}{|doc|} & \text{if } a_t^{\text{skip}} \text{ is a read action} \\ -\frac{c_{\text{skip}}}{|doc|} & \text{if } a_t^{\text{skip}} \text{ is a skip action} \end{cases} \tag{8}$$
where $|doc|$ is the number of words in the document, such that the reward for skipping a word scales with the document length. The reward is negative in both cases, as there is a cost associated with reading a word. The jump action gives no reward; its benefit is implicit, since jumping lets the network collect less negative reward. At the end, an additional reward is given based on whether the network makes a correct prediction, such that the summed reward from time $t$ is given by:
$$R_t = \begin{cases} 1 + w_{\text{rolling}} \sum_{t'=t}^{T} r_{t'}^{\text{skip}} & \text{if } y_{\text{pred}} = y_{\text{target}} \\ p(y_{\text{target}}) + w_{\text{rolling}} \sum_{t'=t}^{T} r_{t'}^{\text{skip}} & \text{if } y_{\text{pred}} \neq y_{\text{target}} \end{cases} \tag{9}$$
$y_{\text{pred}}$ is the prediction made by the network, $y_{\text{target}}$ is the target, and $p(y_{\text{target}})$ is the probability the network assigns to the target class. $w_{\text{rolling}}$ controls the trade-off between the rolling reward and the reward based on model performance. The final reward is designed such that a large reward is given in the case of a correct prediction, while the agents are still rewarded for increasing the probability of the correct class, even if they did not predict correctly.
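The reward scheme of Equations 8 and 9 reduces to a few lines of Python (our own illustration with hypothetical names):

```python
def step_reward(is_skip, doc_len, c_skip=0.5):
    """Per-token reward of Eq. 8: reading costs 1/|doc|, skipping c_skip/|doc|."""
    return -(c_skip if is_skip else 1.0) / doc_len

def rewards_to_go(step_rewards, correct, p_target, w_rolling=0.1):
    """R_t of Eq. 9 for every time step of one document."""
    final = 1.0 if correct else p_target  # terminal, prediction-based reward
    rolling, out = 0.0, []
    for r in reversed(step_rewards):
        rolling += r
        out.append(final + w_rolling * rolling)
    return list(reversed(out))
```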
EXPERIMENTAL EVALUATION
We present the experimental evaluation of our method.
EXPERIMENTAL SETUP AND TRAINING
We use the same tasks and datasets used by the state of the art in speed reading (displayed in Table 1), and evaluate against all five state-of-the-art models (Seo et al., 2018; Yu et al., 2017; Yu et al., 2018; Fu & Ma, 2018; Huang et al., 2017), in addition to a vanilla LSTM full reading baseline.
For the sentiment and topic classification datasets we apply a fully connected layer on the LSTM output, followed by a traditional softmax prediction, where the fully connected layer has the same size as the cell size. On the question answering datasets we follow Yu et al. (2017) by choosing the candidate answer with the index that maximizes $\text{softmax}(CWo) \in \mathbb{R}^{10}$, where $C \in \mathbb{R}^{10 \times d}$ is the word embedding matrix of the candidate answers, $d$ is the embedding size, $W \in \mathbb{R}^{d \times \text{cell size}}$ is a trained weight matrix, and $o$ is the output state of the LSTM. This transforms the answering task into a classification problem with 10 classes. In addition, we read the query followed by the document, to condition reading of the document on the query as done by Yu et al. (2017). On all datasets we initialize the word embedding with GloVe embeddings (Pennington et al., 2014), and use those as the input to the skip agent and LSTM.
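To make the candidate scoring concrete, here is a minimal NumPy sketch of $\text{softmax}(CWo)$ (our own illustration; names and shapes follow the text above):

```python
import numpy as np

def score_candidates(C, W, o):
    """Probability over 10 candidate answers via softmax(C W o).

    C: (10, d) embeddings of the candidate answers
    W: (d, cell_size) trained weight matrix
    o: (cell_size,) output state of the LSTM
    """
    logits = C @ W @ o
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

# predicted answer: index of the most probable candidate
# answer_idx = np.argmax(score_candidates(C, W, o))
```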
We use the predefined train, validation, and testing splits for IMDB, SST, CBT-CN, and CBT-NE, and use 15% of the training data as validation in the rest. For Rotten Tomatoes there is no predefined split, so we set aside 10% for testing, as done by Yu et al. (2017). For training the model we use RMSprop with a learning rate chosen from the set {0.001, 0.0005}, with the optimum at 0.001 on the question answering datasets (CBT-CN and CBT-NE) and 0.0005 on the topic and sentiment datasets. We use a batch size of 32 on AG news, Rotten Tomatoes, and SST, and a batch size of 100 for the remaining datasets. Similarly to Yu et al. (2017), we employ dropout to reduce overfitting, with 0.1 on the embedding and 0.1 on the output of the LSTM. For the RNN we use an LSTM cell with a size of 128, and apply gradient clipping with a threshold value of 0.1. For both agents, the small fully connected layer is fixed to 25 neurons.
On all datasets we train by first maximizing the full read accuracy on the validation set; the agents are activated afterwards. While training we include the entropy loss in the total loss to predispose the speed reading to start with full read behaviour, where the action distributions are initialized to only reading and never skipping or jumping. While maximizing full read accuracy, the word embedding is fixed for the question answering datasets and trainable on the rest; it is fixed for all datasets during speed reading training. As described in Equation 9, $w_{\text{rolling}}$ controls the trade-off between correct prediction and speed reading, and was chosen via cross validation from the set {0.05, 0.1, 0.15}, where most datasets performed best with 0.1. For simplicity we fix the cost of skipping, $c_{\text{skip}}$ in Equation 8, to 0.5, such that skipping a word costs half of reading a word; this was done to promote jumping behaviour.
For the speed reading phase the total loss, as seen in Equation 7, is a combination of the prediction loss, actor loss, critic loss, and entropy loss, where the actor loss is scaled by a factor of 10 to make it comparable in size to the other losses. The entropy loss controls the amount of exploration and is chosen via cross validation from the set {0.01, 0.05, 0.1, 0.15}, where most datasets performed best with 0.1. We also cross validate choosing the actions greedily or by sampling from the action distributions; sampling was optimal for the QA datasets and greedy was optimal for the others. Lastly, all non-QA datasets use uniform action target distributions for increased exploration; however, CBT-CN and CBT-NE are trained with a distribution placing 95% of the probability mass on the "read" choice of both agents, to lower skipping and jumping exploration, which was necessary to stabilize the training.
EVALUATION METRICS
The objective of speed reading consists of two opposing forces: the accuracy of the model should be maximized while reading as few words as possible. We consider two different ways the model can avoid reading a word: i) one or more words can be jumped over, e.g. as done in our model, LSTM-Jump (Yu et al., 2017), Yu-LSTM (Yu et al., 2018) and Adaptive-LSTM (Huang et al., 2017), where the latter implements a jump as early stopping; ii) a word can be skipped or skimmed, where the model is aware of the word (in contrast to jumping) but chooses to do a very limited amount of computation based on it, e.g. skipping in our model or skimming in Skim-LSTM (Seo et al., 2018). In order to capture both of these speed reading aspects, we report the percentage of words jumped over, and the total reading percentage (when excluding skipped and jumped words).
We calculate the total FLOPs used by the models as done by Seo et al. (2018) and Yu et al. (2018), reported as a FLOP reduction (FLOP-r) between the full read and speed read model. This is done to avoid runtime dependencies on optimized implementations, hardware setups, and whether the model is evaluated on CPU or GPU.
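The reading statistics and the FLOP reduction we report can be computed as in the following sketch (our own simplification; obtaining per-model FLOP counts requires instrumenting the architecture as in Seo et al. (2018)):

```python
def reading_stats(n_read, n_jumped, n_total):
    """Percentage of words jumped over and total reading percentage."""
    jump_pct = 100.0 * n_jumped / n_total
    read_pct = 100.0 * n_read / n_total  # excludes skipped and jumped words
    return jump_pct, read_pct

def flop_reduction(full_read_flops, speed_read_flops):
    """FLOP reduction factor (FLOP-r) of the speed reader over a full read."""
    return full_read_flops / speed_read_flops
```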
RESULTS
We now present the results of our evaluation.
STRUCTURAL-JUMP-LSTM VERSUS FULL READING

Table 2 displays the accuracy, the percentage of text being jumped over, and the total reading percentage (when excluding jumped and skipped words) of our approach versus a full reading baseline. Our approach obtains similar accuracies compared to the vanilla LSTM, while in some cases reducing the reading percentage to below 20% (IMDB and DBPedia), with the worst speed reading resulting in a reading percentage of 68.8% (Yelp). The speed reading behaviour varies across the datasets, with no skipping on CBT-CN, CBT-NE, and Yelp, no jumping on Rotten Tomatoes, and a mix of skipping and jumping on the remaining datasets.

Figure 2: Example of the jumping and skipping behaviour of our Structural-Jump-LSTM on DBPedia (class: Mean of transportation). Skipped words have a single strike-through, while jumps consist of a sequence of struck-through words. Original text: "The Alexander Dennis Enviro200 Dart is a midibus manufactured by Alexander Dennis since 2006 for the British market as the successor of the Dart SLF chassis and Pointer body. The Enviro200 Dart is manufactured and marketed in North America by New Flyer as the MiDi." The words that are jumped over or skipped are considered by the model not important for classifying the means of transportation (even though they include several nouns and named entities that are generally considered to be important (Lioma & van Rijsbergen, 2008)).
In 7 out of 8 datasets Structural-Jump-LSTM improves accuracy, and in 1 the accuracy is the same (IMDB). At all times, our model reads significantly less text than the full reading baseline, namely 17.5% to 68.8% of the text. Overall this indicates that the jumps and skips of our model are meaningful: if important text were skipped or jumped over, accuracy would drop significantly. An example of this speed reading behaviour (from DBPedia) can be seen in Figure 2. Generally, the model learns to skip some uninformative words, read those it deems important for predicting the target (Mean of transportation), and once it is certain of the prediction it starts jumping to the end of the sentences. Interestingly, in this setting it does not learn to simply jump to the end, but rather to inspect the first two words of the last sentence.

STRUCTURAL-JUMP-LSTM VERSUS STATE-OF-THE-ART SPEED READING

Table 3 displays the scores of our approach against all five state-of-the-art speed reading models. We report the values from the original papers, which all report the speed reading result with the highest accuracy. We list the reported FLOP reductions when available. If FLOP reductions are not reported in the original paper, we report the speed increase, which should be considered a lower bound on the FLOP reduction. Note that the state-of-the-art models use different RNN network configurations and training schemes, resulting in different full read and speed read accuracies for the same dataset. To allow a consistent comparison of the effect of each speed reading model, we report the accuracy difference between each paper's reported full read (vanilla LSTM) and speed read accuracies.
On all datasets our approach provides either the best or shared best FLOP reduction, except on CBT-CN, where LSTM-Jump provides a speed increase of 6.1x (compared to 3.9x for our approach). The second best method with regard to FLOP reduction is Skim-LSTM, and the worst is the Adaptive-LSTM model, which implements early stopping when the model is certain of its prediction. Skim-LSTM has an evaluation advantage in this FLOP reduction setting, since the FLOP reduction is directly tied to the size difference between the small and large LSTM used by the model, such that an unnecessarily large LSTM will lead to very attractive reductions. Skim-LSTM uses an LSTM size of 200 for the question answering tasks and a default size of 100 for the rest, whereas the small LSTM size is tested between 5 and 20. In the case of a large skimming percentage, it could be argued that the size of the large LSTM could be reduced without affecting performance. In contrast, jumping based models are less prone to this evaluation flaw, because they cannot carry over information from skipped or jumped words.
Most models perform at least as well as a vanilla LSTM. LSTM-Shuttle provides consistent accuracy improvements, but does so at a noticeable FLOP reduction cost compared to Skim-LSTM and our approach. This can be explained by its ability to make backwards jumps in the text in order to re-read important parts, which is similar to the idea of Yu-LSTM. The largest accuracy improvements appear on CBT-CN and CBT-NE with LSTM-Jump. The performance obtained by reading just the query has been reported to be very similar to using both the query and the 20 sentences (Hill et al., 2016), which could indicate a certain noise level in the data that speed reading models are able to identify, allowing them to reduce the number of read words between the high-information section of the text and the final prediction. LSTM-Jump and LSTM-Shuttle are optimized over a jumping budget, where only a certain specified number of jumps are allowed; this provides an edge over the other methods in this setting, because prior knowledge about the high information content of the query can be encoded in the budget (cf. Yu et al. (2017), where the best accuracy is obtained using 1 jump for CBT-CN and 5 for CBT-NE). In the setting of speed reading, the query is read first to condition the jumping on the query; this makes the model very likely to prefer jumping shortly after the query is read, so as not to degrade the LSTM state obtained after reading the query. Overall, budgets can be beneficial if prior information about the document is available, but this is most often not the case for a large set of real world datasets. Moreover, methods based on budgets are in general significantly more rigid, as every document in a collection has the same budget, but the required budget for each document is not necessarily the same.
CONCLUSION
We presented Structural-Jump-LSTM, a recurrent neural network for speed reading. Structural-Jump-LSTM is inspired by human speed reading, and can skip irrelevant words in important sections, while also jumping past unimportant parts of a text. It uses the punctuation structure of the text to determine whether to jump to the next word, the next sub-sentence separator (,;), the next end of sentence (.!?), or to the end of the text. In addition, it allows skipping a word after observing it, without updating the state of the RNN. Through an extensive experimental evaluation against all five state-of-the-art baselines, Structural-Jump-LSTM obtains the overall largest reduction in floating point operations, while maintaining the same accuracy or even improving it over a vanilla LSTM model that reads the full text. We contribute the first neural speed reading model that both skips and jumps over dynamically defined chunks of text, without loss of effectiveness and with notable gains in efficiency. Future work includes investigating other reward functions, where most of the reward is not awarded at the end, and whether this would improve agent training by having a stronger signal spread throughout the text.
Table 1: Dataset statistics; the rows below cover AG news and the two Children's Book Test datasets.

Dataset                        Type    Classes      #Train    #Val     #Test   Avg. length   Vocabulary
AG news (Zhang et al., 2015)   Topic   4 topics     101,999   18,000   7,599   8             41,903
CBT-CN (Hill et al., 2016)     Q/A     10 answers   120,769   2,000    2,500   429           51,774
CBT-NE (Hill et al., 2016)     Q/A     10 answers   108,719   2,000    2,500   394           51,672
Table 2: vanilla LSTM refers to a standard LSTM full reading. The columns show the accuracy (Acc), the percentage of text being jumped over (Jump), and the total reading percentage (Read).

Model                         | IMDB                 | DBPedia              | Yelp                 | AG news
                              | Acc    Jump   Read   | Acc    Jump   Read   | Acc    Jump   Read   | Acc    Jump   Read
vanilla LSTM                  | 0.882  0%     100%   | 0.972  0%     100%   | 0.955  0%     100%   | 0.880  0%     100%
Structural-Jump-LSTM (ours)   | 0.882  70.7%  19.7%  | 0.985  68.1%  17.5%  | 0.958  31.2%  68.8%  | 0.883  32.2%  52.0%

Model                         | SST                  | Rotten Tomatoes      | CBT-CN               | CBT-NE
                              | Acc    Jump   Read   | Acc    Jump   Read   | Acc    Jump   Read   | Acc    Jump   Read
vanilla LSTM                  | 0.837  0%     100%   | 0.787  0%     100%   | 0.515  0%     100%   | 0.453  0%     100%
Structural-Jump-LSTM (ours)   | 0.841  19.1%  53.9%  | 0.790  0.4%   57.8%  | 0.522  67.4%  32.6%  | 0.463  68.7%  31.3%

Table 3: Comparison of state-of-the-art speed reading models. ∆Acc is the difference between the model's speed read accuracy and its full read LSTM accuracy (the higher the better), and FLOP-r is the FLOP reduction compared to a full read model. A star (*) indicates that the original paper provided only a speed increase, which should be considered a lower bound for FLOP-r.

Model                                | IMDB            | DBPedia         | Yelp            | AG news
                                     | ∆Acc    FLOP-r  | ∆Acc    FLOP-r  | ∆Acc    FLOP-r  | ∆Acc    FLOP-r
Structural-Jump-LSTM (ours)          | 0.000   6.3x    | 0.013   7.0x    | 0.003   1.9x    | 0.003   2.4x
Skim-LSTM (Seo et al., 2018)         | 0.001   5.8x    | -       -       | -       -       | 0.001   1.4x
LSTM-Jump (Yu et al., 2017)          | 0.003   1.6x*   | -       -       | -       -       | 0.012   1.1x*
Yu-LSTM (Yu et al., 2018)            | 0.005   3.4x    | 0.002   2.3x    | 0.002   1.4x    | 0.001   1.7x
LSTM-Shuttle (Fu & Ma, 2018)         | 0.008   2.1x*   | -       -       | -       -       | 0.020   1.3x*
Adaptive-LSTM (Huang et al., 2017)   | -       -       | -0.016  1.1x*   | -       -       | -0.012  1.1x*

Model                                | SST             | Rotten Tomatoes | CBT-CN          | CBT-NE
                                     | ∆Acc    FLOP-r  | ∆Acc    FLOP-r  | ∆Acc    FLOP-r  | ∆Acc    FLOP-r
Structural-Jump-LSTM (ours)          | 0.004   2.4x    | 0.003   2.1x    | 0.007   3.9x    | 0.010   4.1x
Skim-LSTM (Seo et al., 2018)         | 0.000   2.4x    | 0.017   2.1x    | 0.014   1.8x    | 0.024   3.6x
LSTM-Jump (Yu et al., 2017)          | -       -       | 0.002   1.5x*   | 0.044   6.1x*   | 0.030   3.0x*
Yu-LSTM (Yu et al., 2018)            | -       -       | -       -       | -       -       | -       -
LSTM-Shuttle (Fu & Ma, 2018)         | -       -       | 0.007   1.7x*   | -       -       | 0.019   3.0x*
Adaptive-LSTM (Huang et al., 2017)   | -       -       | -       -       | -       -       | -       -
https://github.com/Varyn/Neural-Speed-Reading-with-Structural-Jump-LSTM
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. Neural machine translation by jointly learning to align and translate. ICLR, 2015.

Víctor Campos, Brendan Jou, Xavier Giró-i Nieto, Jordi Torres, and Shih-Fu Chang. Skip RNN: Learning to skip state updates in recurrent neural networks. ICLR, 2018.

Jianpeng Cheng, Li Dong, and Mirella Lapata. Long short-term memory-networks for machine reading. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2016.

Eunsol Choi, Daniel Hewlett, Jakob Uszkoreit, Illia Polosukhin, Alexandre Lacoste, and Jonathan Berant. Coarse-to-fine question answering for long documents. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 209-220, 2017.

Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555, 2014.

Tsu-Jui Fu and Wei-Yun Ma. Speed reading: Learning to read ForBackward via shuttle. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2018.

Felix Hill, Antoine Bordes, Sumit Chopra, and Jason Weston. The Goldilocks principle: Reading children's books with explicit memory representations. In Proceedings of the 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico, May 2016.

Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735-1780, 1997.

Zhengjie Huang, Zi Ye, Shuangyin Li, and Rong Pan. Length adaptive recurrent model for text classification. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, pp. 1019-1027. ACM, 2017.

Alexander Johansen and Richard Socher. Learning when to skim and when to read. In Proceedings of the 2nd Workshop on Representation Learning for NLP, pp. 257-264, 2017.

Vijay R. Konda and John N. Tsitsiklis. Actor-critic algorithms. In Advances in Neural Information Processing Systems, pp. 1008-1014, 2000.

Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, Sören Auer, et al. DBpedia: A large-scale, multilingual knowledge base extracted from Wikipedia. Semantic Web, 6(2):167-195, 2015.

Christina Lioma and C. J. Keith van Rijsbergen. Part of speech n-grams and information retrieval. French Review of Applied Linguistics, 13(1):9-22, 2008.

Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pp. 142-150. Association for Computational Linguistics, 2011.

Volodymyr Mnih, Nicolas Heess, Alex Graves, et al. Recurrent models of visual attention. In Advances in Neural Information Processing Systems, pp. 2204-2212, 2014.

Volodymyr Mnih, Adria Puigdomenech Badia, Mehdi Mirza, Alex Graves, Timothy Lillicrap, Tim Harley, David Silver, and Koray Kavukcuoglu. Asynchronous methods for deep reinforcement learning. In International Conference on Machine Learning, pp. 1928-1937, 2016.

Daniel Neil, Michael Pfeiffer, and Shih-Chii Liu. Phased LSTM: Accelerating recurrent network training for long or event-based sequences. In Advances in Neural Information Processing Systems, pp. 3882-3890, 2016.

Bo Pang and Lillian Lee. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 115-124. Association for Computational Linguistics, 2005.

Jeffrey Pennington, Richard Socher, and Christopher Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532-1543, 2014.

Minjoon Seo, Sewon Min, Ali Farhadi, and Hannaneh Hajishirzi. Neural speed reading via Skim-RNN. ICLR, 2018.

Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pp. 1631-1642, 2013.

Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229-256, 1992.

Adams Wei Yu, Hongrae Lee, and Quoc Le. Learning to skim text. In Annual Meeting of the Association for Computational Linguistics (ACL), pp. 1880-1890, 2017. doi: 10.18653/v1/P17-1172. URL http://www.aclweb.org/anthology/P17-1172.

Keyi Yu, Yang Liu, Alexander G. Schwing, and Jian Peng. Fast and accurate text classification: Skimming, rereading and early stopping, 2018. URL https://openreview.net/forum?id=ryZ8sz-Ab.

Xiang Zhang, Junbo Zhao, and Yann LeCun. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pp. 649-657, 2015.
| [
"https://github.com/Varyn/Neural-Speed-Reading-with-Structural-Jump-LSTM"
] |
[
"Bayesian Ensembles of Crowds and Deep Learners for Sequence Tagging",
"Bayesian Ensembles of Crowds and Deep Learners for Sequence Tagging"
] | [
"Edwin Simpson \nDepartment of Computer Science\nUbiquitous Knowledge Processing Lab\nTechnische Universität Darmstadt\n\n",
"Iryna Gurevych \nDepartment of Computer Science\nUbiquitous Knowledge Processing Lab\nTechnische Universität Darmstadt\n\n"
] | [
"Department of Computer Science\nUbiquitous Knowledge Processing Lab\nTechnische Universität Darmstadt\n",
"Department of Computer Science\nUbiquitous Knowledge Processing Lab\nTechnische Universität Darmstadt\n"
] | [] | Current methods for sequence tagging, a core task in NLP, are data hungry. Crowdsourcing is a relatively cheap way to obtain labeled data, but the annotators are unreliable. To address this, we develop a modular Bayesian method for aggregating sequence labels from multiple annotators and evaluate different models of annotator errors and labeling biases. Our approach integrates black-box sequence taggers as components in the model to improve the quality of predictions. We evaluate our model on crowdsourced data for named entity recognition and information extraction tasks, showing that our sequential annotator model outperforms previous methods. | null | [
"https://arxiv.org/pdf/1811.00780v2.pdf"
] | 53,296,348 | 1811.00780 | 8fa907b2ec895c823d80e165adaa87c1b5f55020 |
Bayesian Ensembles of Crowds and Deep Learners for Sequence Tagging
Edwin Simpson
Department of Computer Science
Ubiquitous Knowledge Processing Lab
Technische Universität Darmstadt
Iryna Gurevych
Department of Computer Science
Ubiquitous Knowledge Processing Lab
Technische Universität Darmstadt
Bayesian Ensembles of Crowds and Deep Learners for Sequence Tagging
Current methods for sequence tagging, a core task in NLP, are data hungry. Crowdsourcing is a relatively cheap way to obtain labeled data, but the annotators are unreliable. To address this, we develop a modular Bayesian method for aggregating sequence labels from multiple annotators and evaluate different models of annotator errors and labeling biases. Our approach integrates black-box sequence taggers as components in the model to improve the quality of predictions. We evaluate our model on crowdsourced data for named entity recognition and information extraction tasks, showing that our sequential annotator model outperforms previous methods.
Introduction
The high demand for labeled training data in current NLP methods, particularly deep learning, is widely recognized (Zoph et al., 2016;Rastogi et al., 2016;Gormley et al., 2014). A common NLP task that has benefited from deep learning is sequence tagging, which involves classifying sequences of tokens for tasks such as named entity recognition, part-of-speech tagging, or information extraction. Neural network sequence taggers are typically trained on tens of thousands of documents (Ma and Hovy, 2016;Lample et al., 2016), which presents a challenge when facing new domains or tasks, where obtaining labels is often time-consuming or costly.
Labeled data can be obtained cheaply by crowdsourcing, in which large numbers of untrained workers annotate documents instead of more expensive experts. For sequence tagging, this results in multiple sequences of unreliable labels for each document. Probabilistic methods for aggregating these labels have been shown to be more accurate than simple heuristics such as majority voting (Raykar et al., 2010;Sheshadri and Lease, 2013;Rodrigues et al., 2013;Hovy et al., 2013). However, work on sequence tagging is limited and existing methods cannot model dependencies between the annotators' labels and hence miss error patterns such as a tendency to label overly long spans (Rodrigues et al., 2014;Nguyen et al., 2017). In this paper, we remedy this by proposing a sequential annotator model and applying it to tasks that follow a beginning, inside, outside (BIO) scheme, in which the first token in a span of type 'x' is labeled 'B-x', subsequent tokens are labeled 'I-x', and tokens outside spans are labeled 'O'.
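As a small illustration (our own example, not taken from the datasets used here), a sentence tagged under the BIO scheme for named entity recognition could look as follows:

```python
tokens = ["Barack", "Obama", "visited", "New", "York", "yesterday"]
tags   = ["B-PER",  "I-PER", "O",       "B-LOC", "I-LOC", "O"]
```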
When learning from noisy or small datasets, commonly-used methods based on maximum likelihood estimation may produce over-confident predictions (Xiong et al., 2011;Srivastava et al., 2014). In contrast, Bayesian inference accounts for model uncertainty when making predictions, and enables hyperparameter tuning in unsupervised scenarios through Bayesian model selection (Bishop, 2006). Unlike alternative methods that optimize the values for model parameters, Bayesian inference integrates over all possible values of a parameter, weighted by a prior distribution that captures background knowledge. The resulting posterior probabilities improve downstream decision making as they include the probability of errors due to a lack of knowledge. For example, during active learning, posterior probabilities assist with selecting the most informative data points (Settles, 2010). We therefore develop a Bayesian sequence combination method, building on prior work that has demonstrated the advantages of Bayesian inference for aggregating unreliable classifications (Kim and Ghahramani, 2012;Simpson et al., 2013;Felt et al., 2016;Paun et al., 2018).
Aggregated label quality can be improved by modeling the text features as well as the annotators (Simpson et al., 2015;Felt et al., 2016).
For complex tasks such as sequence tagging, we may wish to exploit existing state-of-the-art models, such as neural networks that do not account for model uncertainty. In this paper, we show how to integrate existing black box methods into the aggregation model to construct ensembles of deep learners and human annotators. Our method learns the reliability of each black box method and avoids the need to aggregate crowdsourced data using a separate pre-processing step before training a sequence tagger.
This paper provides the following contributions:
• We propose Bayesian sequence combination (BSC), a method for aggregating sequence labels from multiple annotators that models sequential dependencies between tags
• A technique for wrapping existing black-box sequence taggers into the aggregation model to improve the quality of aggregated labels
• Theoretical and empirical comparisons of annotator models for sequence tagging, including a novel model that captures sequential dependencies between annotations (referred to later as seq)
The following sections discuss related work, annotator models for sequence tagging, our BSC model, and our variational inference approach that enables us to integrate existing sequence taggers. Then, we evaluate a range of Bayesian and non-Bayesian aggregation methods with simulated annotators and two crowdsourced NLP datasets, showing that our sequential model consistently outperforms the previous state-of-the-art, and benefits from the inclusion of automated sequence taggers. We make all of our code freely available 1 .
Related Work
Sheshadri and Lease (2013) benchmarked several aggregation models for non-sequential classifications, obtaining the most consistent performance from that of Raykar et al. (2010), who model the reliability of individual annotators using probabilistic confusion matrices, as proposed by Dawid and Skene (1979). Simpson et al. (2013) showed the benefits of a Bayesian variant of Dawid and Skene (1979).
To accout for disagreement between annotators when training a sequence tagger, Plank et al. (2014) modify the loss function of the learner. However, typical cross entropy loss naturally accommodates probabilities of labels as well as discrete labels (Bekker and Goldberger, 2016). A contrasting approach is CRF-MA (Rodrigues et al., 2014), a CRF-based model that assumes only one annotator is correct for any given label. Recently, Nguyen et al. (2017) proposed a hidden Markov model (HMM) approach that outperformed CRF-MA, called HMM-crowd. Both CRF-MA and HMM-crowd use simpler annotator models than Dawid and Skene (1979) that do not capture the effect of sequential dependencies on annotator reliability. Neither CRF-MA nor HMM-crowd use a fully Bayesian approach. In this paper, we develop a sequential annotator model and a fully Bayesian method for aggregating sequence labels.
While HMM-crowd uses only a simple conditional independence model of text features, Nguyen et al. (2017) and Rodrigues and Pereira (2018) also train neural network sequence taggers directly on crowdsourced data by adding a layer to handle worker reliability. However, the proposed approaches did not outperform either CRF-MA (Rodrigues and Pereira, 2018) or HMM-crowd (Nguyen et al., 2017). A similar approach by Albarqouni et al. (2016) integrates a CNN classifier for image annotation into an aggregation method based on expectation maximization (EM) (Dempster et al., 1977). Yang et al. (2018) adapt a Bayesian neural network so that it can be trained concurrently with an annotator model, also using EM. In contrast to previous work, we do not require neural networks to be adapted, nor assume that their predictions are reliable when aggregating annotations. Instead, we propose to learn the reliability of existing sequence taggers, allowing untrusted, off-the-shelf taggers to enhance the performance of the aggregation method.
Modeling Sequential Annotators
When combining multiple annotators with varying skill levels, we can improve performance by modeling their individual reliability. Here, we describe several existing models that do not consider dependencies between annotations in a sequence, then provide an extension that captures sequential dependencies. Each of the approaches presented employs a different function, $A$, to model the likelihood of the annotator choosing the label $c_\tau$ given the true label, $t_\tau$, for token $\tau$.

Accuracy model (acc): simply models the annotator's accuracy, $\pi$, as follows:

$$A = p(c_\tau = i \mid t_\tau = j, \pi) = \begin{cases} \pi & \text{if } i = j \\ \frac{1-\pi}{J-1} & \text{otherwise,} \end{cases} \tag{1}$$

where $c_\tau$ is the label given by the annotator for token $\tau$, $t_\tau$ is its true label and $J$ is the number of classes. This is the basis of several previous methods (Donmez et al., 2010; Rodrigues et al., 2013). It assumes reliability is constant, which means that when one class label is far more common than others, a spammer who always selects the most common label will nonetheless have a high $\pi$.

MACE (Hovy et al., 2013): assumes constant accuracy, $\pi$, but when an annotator is incorrect, they label according to a spamming distribution, $\xi$, that is independent of the true label, $t_\tau$:

$$A = p(c_\tau = i \mid t_\tau = j, \pi, \xi) = \begin{cases} \pi + (1-\pi)\xi_j & \text{if } i = j \\ (1-\pi)\xi_i & \text{otherwise.} \end{cases} \tag{2}$$
This addresses the case where spammers choose the most common label when the classes are imbalanced. While MACE can capture spamming patterns, it does not explicitly model different rates of errors per class. This could be an issue for sequence tagging using the BIO encoding, for example, if an annotator frequently labels longer spans than the true spans by starting the spans early. In this case, they may more frequently mis-label the 'B' tokens than the 'I' or 'O' tokens, which cannot be modeled by MACE. Confusion vector (CV): this approach learns a separate accuracy for each class label (Nguyen et al., 2017) using a parameter vector, $\pi$, of size $J$:
$$A = p(c_\tau = i \mid t_\tau = j, \pi) = \begin{cases} \pi_j & \text{if } i = j \\ \frac{1-\pi_j}{J-1} & \text{otherwise.} \end{cases} \tag{3}$$
This model does not capture spamming patterns where one of the incorrect labels has a much higher likelihood than the others. Confusion matrix (CM) (Dawid and Skene, 1979): this model can be seen as an expansion of the confusion vector so that π becomes a J × J matrix with values given by:
$$A = p(c_\tau = i \mid t_\tau = j, \pi) = \pi_{j,i}. \tag{4}$$

This requires a larger number of parameters, $J^2$, compared to the $J+1$ parameters of MACE or the $J$ parameters of the confusion vector. CM can model spammers who frequently choose one label regardless of the ground truth, as well as annotators with different error rates for each type of 'B-x', 'I-x' and 'O' label. For example, an annotator may be better at detecting type 'x' spans than type 'y', or may frequently mis-label the start of a span as 'O' when the true label is 'B-x' while being otherwise accurate. However, the confusion matrix ignores dependencies between annotations in a sequence, such as the fact that an 'I' cannot immediately follow an 'O'. Sequential Confusion Matrix (seq): we introduce a new extension to the confusion matrix to model the dependency of each label in a sequence on its predecessor, giving the following likelihood:

$$A = p(c_\tau = i \mid c_{\tau-1} = \iota, t_\tau = j, \pi) = \pi_{j,\iota,i}, \tag{5}$$

where $\pi$ is now three-dimensional with size $J \times J \times J$. In the case of disallowed transitions, e.g. from $c_{\tau-1} = \text{'O'}$ to $c_\tau = \text{'I'}$, the value $\pi_{j, c_{\tau-1}, c_\tau} = 0, \forall j$ is fixed a priori. The sequential model can capture phenomena such as a tendency toward overly long sequences, by learning that $\pi_{O,O,O} > \pi_{O,I,O}$, or a tendency to split spans by inserting 'B' in place of 'I', by increasing the value of $\pi_{I,I,B}$ without affecting $\pi_{I,B,B}$ and $\pi_{I,O,B}$.
The annotator models we presented, which include the most widespread models for NLP annotation tasks, can therefore be seen as extensions of one another. The choice of annotator model for a particular annotator depends on the developer's understanding of the annotation task: if the annotations have sequential dependencies, this suggests the seq model; for non-sequential classifications CM may be effective with small (≤ 5) numbers of classes; MACE may be more suitable if there are more classes. However, there is also a trade-off between the expressiveness of the model and the number of parameters that must be learned. Simpler models with fewer parameters, such as acc, may be effective if there are only small numbers of annotations from each annotator. Our experiments in Section 5 investigate this trade-off on NLP tasks involving sequential annotation. The next section shows how these models can be used as part of a model for aggregating sequential annotations.
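As a summary, the five likelihood functions can be sketched in NumPy as below (our own illustration; the function names are hypothetical and the parameter shapes follow Equations 1-5):

```python
import numpy as np

def acc_lik(i, j, pi, J):
    """Eq. 1: single accuracy parameter pi."""
    return pi if i == j else (1.0 - pi) / (J - 1)

def mace_lik(i, j, pi, xi):
    """Eq. 2: accuracy pi plus spamming distribution xi over labels."""
    spam = (1.0 - pi) * xi[i]
    return pi + spam if i == j else spam

def cv_lik(i, j, pi, J):
    """Eq. 3: confusion vector, per-class accuracy pi[j]."""
    return pi[j] if i == j else (1.0 - pi[j]) / (J - 1)

def cm_lik(i, j, pi):
    """Eq. 4: confusion matrix, pi has shape (J, J)."""
    return pi[j, i]

def seq_lik(i, j, prev, pi):
    """Eq. 5: sequential confusion matrix, pi has shape (J, J, J)."""
    return pi[j, prev, i]
```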
A Generative Model for Bayesian Sequence Combination
The generative story for our approach, Bayesian sequence combination (BSC), is as follows. We assume a transition matrix, $T$, where each entry is $T_{j,\iota} = p(t_\tau = \iota \mid t_{\tau-1} = j)$. We draw each row of the transition matrix, $T_j \sim \text{Dir}(\gamma_j)$, where Dir is the Dirichlet distribution. For each document, $n$, in a set of $N$ documents, we draw a sequence of class labels, $t_n = [t_{n,1}, \ldots, t_{n,L_n}]$, of length $L_n$, from a categorical distribution: $t_{n,\tau} \sim \text{Cat}(T_{t_{n,\tau-1}})$. The set of all labels for all documents is referred to as $t = \{t_1, \ldots, t_N\}$. In the generative model, we assume one of the annotator models described in Section 2 for each of $K$ annotators. The number of parameters depends on the choice of annotator model: for acc, only one parameter, $\pi^{(k)}$, is drawn for annotator $k$; for MACE, we draw a single value $\pi^{(k)}$ and a vector $\xi^{(k)}$ of length $J$, while for CV we draw $J$ independent values of $\pi_j^{(k)}$, and for CM we draw a vector $\pi_j^{(k)}$ of size $J$ for each true label value $j \in \{1, \ldots, J\}$; in the case of seq, we draw vectors $\pi_{j,\iota}^{(k)}$ for each true label value $j$ and each previous label value, $\iota$. All parameters of these annotator models are probabilities, so they are drawn from Dirichlet priors. We refer to the set of hyperparameters for $k$'s annotator model as $\alpha^{(k)}$. Given its parameters, the annotator model defines a likelihood function, $A^{(k)}(t_{n,\tau}, c_{n,\tau}, c_{n,\tau-1})$, where $c_{n,\tau}$ is the $\tau$th label of document $n$. The argument $c_{n,\tau-1}$ is only required if $A^{(k)}$ is an instance of seq, and is ignored by the other annotator models. We draw annotator $k$'s label for each token $\tau$ in each document $n$ according to:
$$c^{(k)}_{n,\tau} \sim \text{Cat}\left(\left[A^{(k)}\left(t_{n,\tau}, 1, c^{(k)}_{n,\tau-1}\right), \ldots, A^{(k)}\left(t_{n,\tau}, J, c^{(k)}_{n,\tau-1}\right)\right]\right). \qquad (6)$$
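The following toy script runs this generative story forward: it samples a transition matrix, a true label sequence, and one annotator's labels from a seq model. It is a sketch under assumed parameter values (the Dirichlet hyperparameters and the BIO encoding below are our own choices), not the released implementation.

```python
# Forward simulation of the BSC generative story; all Dirichlet
# hyperparameters and the BIO encoding (0='O', 1='B', 2='I') are assumptions.
import numpy as np

rng = np.random.default_rng(1)
J, L = 3, 10

# Transition prior discourages the disallowed O -> I true-label transition.
gamma = np.array([[8.0, 2.0, 0.01],
                  [1.0, 1.0, 8.0],
                  [3.0, 1.0, 6.0]])
T = np.array([rng.dirichlet(g) for g in gamma])        # T[j, i] = p(i | j)

t, prev = [], 0                                        # start from 'O'
for _ in range(L):
    prev = rng.choice(J, p=T[prev])
    t.append(prev)

# One annotator drawn from the seq model, then labels drawn via Equation 6.
pi = rng.dirichlet(2.0 * np.ones(J), size=(J, J))      # pi[j, prev_c, cur]
pi[:, 0, 2] = 0.0                                      # forbid O -> I labels
pi /= pi.sum(axis=2, keepdims=True)

c, prev_c = [], 0
for tau in range(L):
    prev_c = rng.choice(J, p=pi[t[tau], prev_c])
    c.append(prev_c)

print("true:     ", t)
print("annotated:", c)
```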
The annotators are assumed to be conditionally independent of one another given the true labels, $t$, which means that their errors are assumed to be uncorrelated. This is a strong assumption when considering that the annotators have to make their decisions based on the same input data. However, in practice, dependencies do not usually cause the most probable label to change (Zhang, 2004), hence the performance of classifier combination methods is only slightly degraded, while avoiding the complexity of modeling dependencies between annotators (Kim and Ghahramani, 2012).

Black-box sequence taggers: as an extension to our model, we can integrate $S$ automated methods as additional noisy annotators. In comparison to human annotators, sequence taggers can quickly label large numbers of documents, providing a cheap source of additional annotations across the whole dataset. We model each sequence tagger, $s$, using an annotator model, $B^{(s)}$, of one of the types described in Section 2 (analogous to $A^{(k)}$ for a human annotator), with hyperparameters $\beta^{(s)}$.
We extend the generative model for BSC with additional steps as follows. Each sequence tagger generates a sequence of labels, $d^{(s)}_n$, for each document $n$ (analogous to $c^{(k)}_n$ produced by human annotators) according to:
$$d^{(s)}_{n,\tau} \sim \text{Cat}\left(\left[B^{(s)}\left(t_{n,\tau}, 1, d^{(s)}_{n,\tau-1}\right), \ldots, B^{(s)}\left(t_{n,\tau}, J, d^{(s)}_{n,\tau-1}\right)\right]\right). \qquad (7)$$
In the generative model, we draw a sequence of text tokens, $x_n$, from a likelihood, $p(x_n \mid d^{(s)}_n, \theta^{(s)})$, given internal parameters, $\theta^{(s)}$, and label sequence, $d^{(s)}_n$. This likelihood is defined by the black-box sequence tagger. If the sequence tagger is Bayesian, its parameters, $\theta^{(s)}$, may also be drawn from an unknown prior distribution. However, since we are treating the tagger as a black box, we do not need to know these internal details. In the next section, we explain how we can avoid computing this likelihood explicitly during inference, and instead use only the sequence tagger's existing training and prediction functions to learn $\theta^{(s)}$ in parallel with the parameters of the BSC model. Like the human annotators, each sequence tagger is assumed to produce labels that are conditionally independent of the other sequence taggers given $t$.
Joint distribution: the complete model can be represented by the joint distribution, given by:
$$p(t, A, B, T, \theta, c, d, x \mid \alpha, \beta, \gamma) = \prod_{k=1}^{K} \bigg\{ p\left(A^{(k)} \mid \alpha^{(k)}\right) \prod_{n=1}^{N} p\left(c^{(k)}_n \mid A^{(k)}, t\right) \bigg\} \prod_{j=1}^{J} p(T_j \mid \gamma_j) \prod_{n=1}^{N} \prod_{\tau=1}^{L_n} p\left(t_{n,\tau} \mid T_{t_{n,\tau-1}}\right) \prod_{s=1}^{S} \bigg\{ p\left(\theta^{(s)}\right) p\left(B^{(s)} \mid \beta^{(s)}\right) \prod_{n=1}^{N} p\left(x_n \mid d^{(s)}_n, \theta^{(s)}\right) p\left(d^{(s)}_n \mid B^{(s)}, t\right) \bigg\}, \qquad (8)$$
where each term is defined by the distributions of the generative model described in this section.
Inference using Variational Bayes
Given a set of annotations, $c = \{c^{(1)}, \ldots, c^{(K)}\}$, from $K$ annotators, our aim is to obtain a posterior distribution over sequence labels, $t$. To do this, we employ variational Bayes (VB) (Attias, 2000). In comparison to other Bayesian approaches such as Markov chain Monte Carlo (MCMC), VB is often faster, readily allows incremental learning, and provides easier ways to determine convergence (Bishop, 2006). Unlike maximum likelihood methods such as standard expectation maximization (EM), VB considers prior distributions and accounts for parameter uncertainty in a Bayesian manner. The trade-off is that VB requires us to approximate the posterior distribution. Here, we apply the mean-field assumption, i.e. a variational approximation that factorizes between subsets of parameters or latent variables, so that each subset, $z$, has a variational factor, $q(z)$:
$$p(t, A, B, T, \theta \mid c, x, \alpha, \beta, \gamma) \approx \prod_{k=1}^{K} q\left(A^{(k)}\right) \prod_{j=1}^{J} q(T_j) \prod_{n=1}^{N} q(t_n) \prod_{s=1}^{S} q\left(B^{(s)}\right) q\left(\theta^{(s)}\right). \qquad (9)$$
The labels produced by the sequence taggers, $d$, can be marginalized analytically, so do not require a separate factor. Each variational factor has the form $\ln q(z) = \mathbb{E}[\ln p(z \mid c, \neg z)]$, where $\neg z$ contains all the latent variables except $z$. We perform approximate inference by using coordinate ascent to update each variational factor, $q(z)$, in turn, taking expectations with respect to the current estimates of the other variational factors. Each iteration reduces the KL-divergence between the true and approximate posteriors of Equation 9, and hence optimizes a lower bound on the log marginal likelihood, also called the evidence lower bound or ELBO (Bishop, 2006; Attias, 2000). The complete VB algorithm is described in Algorithm 1, which makes use of the update equations for the log variational factors given below.
Input: Annotations, c
1   Randomly initialize E[ln A^(k)], ∀k, E[ln B^(s)], ∀s, E[ln T_j], ∀j, and d̃^(s)_{n,τ}(i), ∀s, ∀n, ∀τ, ∀i.
while not_converged(r_{n,τ,j}, ∀n, ∀τ, ∀j) do
2   Update r_{n,τ,j} and s_{n,τ,j,ι}, ∀n, ∀τ, ∀j, ∀ι, using the forward-backward algorithm
3   Update ln q(θ^(s)), ∀s, by training each sequence tagger on the current expectations d̃_{n,τ}
4   Update d̃^(s)_{n,τ}(i), ∀s, ∀n, ∀τ, ∀i, from the sequence taggers' predictions
5   Update ln q(A^(k)) and E[ln A^(k)], ∀k, given current c, r_{n,τ,j}
6   Update ln q(B^(s)) and E[ln B^(s)], ∀s, given current d̃, r_{n,τ,j}
7   Update ln q(T_j) and E[ln T_{j,ι}], ∀j, ∀ι, given current s_{n,τ,j,ι}
end
Output: Label posteriors, r_{n,τ,j}, ∀n, ∀τ, ∀j, and the most probable sequence of labels, t̂_n, ∀n, computed using the Viterbi algorithm

Algorithm 1: The VB algorithm for BSC.
The prior distributions chosen for our generative model are conjugate to the distributions over the latent variables and model parameters, meaning that each $q(z)$ is the same type of distribution as the corresponding prior distribution defined in Section 3. The parameters of each variational distribution can be computed in terms of expectations over the other subsets of variables. For the true labels, $t$, the variational factor is:
$$\ln q(t_n) = \sum_{\tau=1}^{L_n} \left( \sum_{s=1}^{S} \mathbb{E}\left[\ln B^{(s)}\left(t_{n,\tau}, d^{(s)}_{n,\tau}, d^{(s)}_{n,\tau-1}\right)\right] + \sum_{k=1}^{K} \mathbb{E}\left[\ln A^{(k)}\left(t_{n,\tau}, c^{(k)}_{n,\tau}, c^{(k)}_{n,\tau-1}\right)\right] + \mathbb{E}\left[\ln T_{t_{n,\tau-1}, t_{n,\tau}}\right] \right) + \text{const}. \qquad (10)$$
From this factor, we compute the posterior probability of each true token label, $r_{n,\tau,j} = \mathbb{E}[p(t_{n,\tau} = j \mid c)]$, and of each label transition, $s_{n,\tau,j,\iota} = \mathbb{E}[p(t_{n,\tau-1} = j, t_{n,\tau} = \iota \mid c)]$, using the forward-backward algorithm (Ghahramani, 2001), which consists of two passes. The forward pass for each document, $n$, starts from $\tau = 1$ and computes:

$$\ln r^-_{n,\tau,j} = \ln \sum_{\iota=1}^{J} r^-_{n,\tau-1,\iota} \, e^{\mathbb{E}[\ln T_{\iota,j}]} + ll_{n,\tau}(j),$$
$$ll_{n,\tau}(j) = \sum_{k=1}^{K} \mathbb{E}\left[\ln A^{(k)}\left(j, c^{(k)}_{n,\tau}, c^{(k)}_{n,\tau-1}\right)\right] + \sum_{s=1}^{S} \sum_{i=1}^{J} \sum_{\iota=1}^{J} \mathbb{E}\left[\ln B^{(s)}(j, i, \iota)\right] \tilde{d}^{(s)}_{n,\tau}(i) \, \tilde{d}^{(s)}_{n,\tau-1}(\iota), \qquad (11)$$

where $\tilde{d}^{(s)}_{n,\tau}(i)$ is defined below in Equation 20, and $r^-_{n,0,\iota} = 1$ where $\iota = $'O' and $0$ otherwise. The backward pass starts from $\tau = L_n$ and scrolls backwards, computing:

$$\ln \lambda_{n,L_n,j} = 0, \qquad \ln \lambda_{n,\tau,j} = \ln \sum_{\iota=1}^{J} \exp\left\{ \ln \lambda_{n,\tau+1,\iota} + \mathbb{E}[\ln T_{j,\iota}] + ll_{n,\tau+1}(\iota) \right\}. \qquad (12)$$
By applying Bayes' rule, we arrive at $r_{n,\tau,j}$ and $s_{n,\tau,j,\iota}$:
$$r_{n,\tau,j} = \frac{r^-_{n,\tau,j} \, \lambda_{n,\tau,j}}{\sum_{j'=1}^{J} r^-_{n,\tau,j'} \, \lambda_{n,\tau,j'}}, \qquad (13) \qquad s_{n,\tau,j,\iota} = \frac{\tilde{s}_{n,\tau,j,\iota}}{\sum_{j'=1}^{J} \sum_{\iota'=1}^{J} \tilde{s}_{n,\tau,j',\iota'}}, \qquad (14)$$

where

$$\tilde{s}_{n,\tau,j,\iota} = r^-_{n,\tau-1,j} \, \lambda_{n,\tau,\iota} \exp\left\{ \mathbb{E}[\ln T_{j,\iota}] + ll_{n,\tau}(\iota) \right\}. \qquad (15)$$
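As an illustration of Equations 11–13, the following log-space sketch computes the token posteriors $r$ for one document. It is our own code with toy inputs: the per-token log-likelihoods ll and the expectations E[ln T] are assumed to be given.

```python
# Log-space forward-backward over the variational quantities (a sketch).
import numpy as np
from scipy.special import logsumexp

def forward_backward(ll, ElnT, start_label=0):
    """ll: (L, J) per-token log-likelihoods ll[tau, j] (Equation 11);
    ElnT: (J, J) expectations E[ln T] (Equation 16)."""
    L, J = ll.shape
    ln_fwd = np.empty((L, J))
    ln_lam = np.zeros((L, J))                  # ln lambda_{L_n, j} = 0
    ln_prev = np.full(J, -np.inf)
    ln_prev[start_label] = 0.0                 # r^-_{n,0} concentrated on 'O'
    for tau in range(L):                       # forward pass, Equation 11
        ln_fwd[tau] = logsumexp(ln_prev[:, None] + ElnT, axis=0) + ll[tau]
        ln_prev = ln_fwd[tau]
    for tau in range(L - 2, -1, -1):           # backward pass, Equation 12
        ln_lam[tau] = logsumexp(ln_lam[tau + 1][None, :] + ElnT
                                + ll[tau + 1][None, :], axis=1)
    ln_r = ln_fwd + ln_lam                     # Equation 13, unnormalised
    return np.exp(ln_r - logsumexp(ln_r, axis=1, keepdims=True))

rng = np.random.default_rng(0)
ll = np.log(rng.dirichlet(np.ones(3), size=5))      # toy inputs
ElnT = np.log(rng.dirichlet(np.ones(3), size=3))
print(forward_backward(ll, ElnT).round(3))
```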
Each row of the transition matrix has the factor:
$$\ln q(T_j) = \ln \text{Dir}\left(\left[N_{j,\iota} + \gamma_{j,\iota}, \forall \iota \in \{1, \ldots, J\}\right]\right),$$

where $N_{j,\iota} = \sum_{n=1}^{N} \sum_{\tau=1}^{L_n} s_{n,\tau,j,\iota}$ is the expected number of times that label $\iota$ follows label $j$.
The forward-backward algorithm requires expectations of ln T that can be computed using standard equations for a Dirichlet distribution:
$$\mathbb{E}\left[\ln T_{j,\iota}\right] = \Psi(N_{j,\iota} + \gamma_{j,\iota}) - \Psi\left(\sum_{\iota'=1}^{J} (N_{j,\iota'} + \gamma_{j,\iota'})\right), \qquad (16)$$
where Ψ is the digamma function.
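Equation 16 reduces to a couple of array operations; the sketch below uses made-up pseudo-counts purely for illustration.

```python
# E[ln T] from pseudo-counts via Equation 16 (illustrative values only).
import numpy as np
from scipy.special import digamma

N = np.array([[30.0, 5.0, 0.0],     # expected transition counts N[j, i]
              [2.0, 1.0, 12.0],
              [6.0, 1.0, 10.0]])
gamma = np.ones((3, 3))             # symmetric Dirichlet prior
post = N + gamma
ElnT = digamma(post) - digamma(post.sum(axis=1, keepdims=True))
print(ElnT.round(3))
```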
The variational factor for each annotator model is a distribution over its parameters, which differs between models. For seq, the variational factor is:
$$\ln q\left(A^{(k)}\right) = \sum_{j=1}^{J} \sum_{l=1}^{J} \ln \text{Dir}\left(\left[N^{(k)}_{j,l,m}, \forall m \in \{1, \ldots, J\}\right]\right),$$
$$N^{(k)}_{j,l,m} = \alpha^{(k)}_{j,l,m} + \sum_{n=1}^{N} \sum_{\tau=1}^{L_n} r_{n,\tau,j} \, \delta_{l, c^{(k)}_{n,\tau-1}} \, \delta_{m, c^{(k)}_{n,\tau}}, \qquad (17)$$
where $\delta$ is the Kronecker delta. For CM, MACE, CV and acc, the factors follow a similar pattern of summing pseudo-counts of correct and incorrect answers. The forward-backward passes also require the following expectation terms for seq, which are standard equations for Dirichlet distributions and can be simplified for the other annotator models:
$$\mathbb{E}\left[\ln A^{(k)}(j, l, m)\right] = \Psi\left(N^{(k)}_{j,l,m}\right) - \Psi\left(\sum_{m'=1}^{J} N^{(k)}_{j,l,m'}\right). \qquad (18)$$
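The pseudo-count update in Equation 17 can be sketched as follows (our illustration, not the released code): r is a list of per-document posterior arrays and annotations is a list of this annotator's label sequences.

```python
# Soft-count update for one annotator's seq model (Equation 17).
import numpy as np

def update_seq_counts(alpha, r, annotations, start_label=0):
    """alpha: prior pseudo-counts, shape (J, J, J);
    r: per-document arrays of shape (L_n, J) holding r[tau, j];
    annotations: this annotator's label sequence for each document."""
    N = alpha.copy()
    for r_n, c_n in zip(r, annotations):
        prev = start_label
        for tau, cur in enumerate(c_n):
            N[:, prev, cur] += r_n[tau]   # adds r_{n,tau,j} for every j
            prev = cur
    return N

# Toy usage with J = 3 labels and a single two-token document.
alpha = np.ones((3, 3, 3))
r = [np.array([[0.8, 0.1, 0.1], [0.2, 0.7, 0.1]])]
print(update_seq_counts(alpha, r, annotations=[[0, 1]]).sum())
```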
The variational factor, $q(B^{(s)})$, for each sequence tagger's annotator model has the same form as $q(A^{(k)})$, with the indicator terms $\delta_{l, c^{(k)}_{n,\tau-1}}$ replaced by the expectations $\tilde{d}^{(s)}_{n,\tau}(i)$, as defined below in Equation 20.
Black-box sequence taggers: the parameters of tagger s have the following variational factor:
$$\ln q\left(\theta^{(s)}\right) = \ln p\left(x \mid \theta^{(s)}, \tilde{d}^{(s)}\right) + \ln p\left(\theta^{(s)}\right) + \text{const},$$
$$\tilde{d}_{n,\tau}(i) = \mathbb{E}\left[p\left(d^{(s)}_{n,\tau} = i \mid B^{(s)}, t_{n,\tau}\right)\right] = \sum_{j=1}^{J} \sum_{\iota=1}^{J} r_{n,\tau,j} \, \tilde{d}_{n,\tau-1}(\iota) \, \mathbb{E}\left[B^{(s)}(j, i, \iota)\right]. \qquad (19)$$
The expectations, $\tilde{d}_n$, fill the role of training labels, allowing us to use the training function of the black-box sequence taggers to update the variational factor, $q(\theta^{(s)})$. Many black-box sequence taggers, including most neural networks, use maximum likelihood (ML) to find optimal point values, $\hat{\theta}^{(s)}$, rather than their posterior distribution. If we integrate such sequence taggers, our complete inference procedure becomes a hybrid between VB and ML expectation maximization (EM) (see Bishop (2006)). The sequence tagger may also require training using discrete labels, in which case we introduce a further ML step and approximate $\tilde{d}_n$ with the most probable values at each token. The update equations for the other factors require expectations of $d_n$ with respect to $\theta^{(s)}$, or their ML approximation:
$$\tilde{d}^{(s)}_{n,\tau}(i) = \mathbb{E}\left[p\left(d^{(s)}_{n,\tau} = i \mid x_n, \theta^{(s)}\right)\right] \approx p\left(d^{(s)}_{n,\tau} = i \mid x_n, \hat{\theta}^{(s)}\right). \qquad (20)$$
These values are the predictions obtained from the black-box sequence tagger given tokens $x$. Therefore, our method requires only training and prediction functions to integrate a sequence tagger, while its annotator model, $B^{(s)}$, accounts for the sequence tagger's reliability. This means we can treat sequence taggers as black boxes, even if their predictions are noisy or over-confident. Pre-trained taggers can also be used, for example, to make use of taggers that were trained on different domains with more annotated data.
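One way to express this black-box integration in code is sketched below. The class and method names are placeholders we invented, the tagger is assumed to take and return discrete labels, and the trivial majority-label tagger stands in for something like an LSTM.

```python
# Hypothetical black-box interface: BSC needs only train() and predict().
import numpy as np

class MajorityTagger:
    def train(self, tokens, hard_labels):
        flat = np.concatenate(hard_labels)
        self.label = int(np.bincount(flat).argmax())
    def predict(self, tokens):
        return np.full(len(tokens), self.label)     # discrete labels

def tagger_vb_step(tagger, tokens, d_tilde):
    """One hybrid VB/ML step: train on the most probable labels (the
    discrete approximation of the expectations in Equation 19), then
    return one-hot predictions as the new d-tilde (Equation 20)."""
    hard = [np.argmax(d_n, axis=1) for d_n in d_tilde]
    tagger.train(tokens, hard)
    J = d_tilde[0].shape[1]
    return [np.eye(J)[tagger.predict(x_n)] for x_n in tokens]

docs = [["a", "b", "c"], ["d", "e"]]
d_tilde = [np.full((3, 3), 1 / 3),
           np.array([[0.1, 0.8, 0.1], [0.2, 0.7, 0.1]])]
print(tagger_vb_step(MajorityTagger(), docs, d_tilde))
```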
Predicting the Sequence Labels
The approximate posterior probabilities of the true labels, $r_{n,\tau,j}$, provide confidence estimates for the labels. However, it is often useful to compute the most probable sequence of labels, $\hat{t}_n$, using the Viterbi algorithm (Viterbi, 1967). The most probable sequence is particularly useful because, unlike $r_{n,\tau,j}$, the sequence will be consistent with any transition constraints imposed by the priors on the transition matrix $T$, such as preventing 'O'→'I' transitions by assigning them zero probability. We can also make predictions for unlabeled documents in a similar manner, simply omitting the human annotations, $c$, and relying only on the predictions of the black-box sequence taggers, $\tilde{d}^{(s)}$.
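For reference, a standard log-space Viterbi pass over the variational quantities might look as follows. This is our own sketch; ll and E[ln T] are the same assumed inputs as in the forward-backward sketch above.

```python
# Most probable label sequence via log-space Viterbi (a sketch).
import numpy as np

def viterbi(ll, ElnT, start_label=0):
    """ll: (L, J) per-token log-likelihoods; ElnT: (J, J) E[ln T]."""
    L, J = ll.shape
    delta = np.empty((L, J))
    back = np.zeros((L, J), dtype=int)
    delta[0] = ElnT[start_label] + ll[0]       # start from the 'O' state
    for tau in range(1, L):
        scores = delta[tau - 1][:, None] + ElnT + ll[tau][None, :]
        back[tau] = scores.argmax(axis=0)      # best predecessor per label
        delta[tau] = scores.max(axis=0)
    path = [int(delta[-1].argmax())]
    for tau in range(L - 1, 0, -1):            # trace back the best path
        path.append(int(back[tau, path[-1]]))
    return path[::-1]
```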
Modular Implementation of Variational Inference
The variational inference method described in Section 4 is naturally suited to a modular implementation. We divide the BSC model, as defined in Section 3 and Equation 8, into three modules:
(a) the true label model, which defines the distribution over sequences of labels, $q(t_n)$; (b) the annotator model, which may be one of those described in Section 2 and implements $q(A^{(k)})$ and $q(B^{(s)})$; and (c) black-box sequence taggers, which are existing implementations that provide training and prediction functions to predict true labels given text tokens, $x$. The true label model exposes methods to compute $r_{n,\tau,j}$ and $s_{n,\tau,j,\iota}$, $\forall n, \forall \tau, \forall j, \forall \iota$, while the annotator models provide methods to initialize and update $q(A^{(k)})$ and $q(B^{(s)})$, and compute expectations according to Equation 18. By allowing individual functions to be replaced without rewriting the inference method, the modular implementation makes it easier to adapt the model to different types of annotations, and to test each component part. For example, new annotator models could, in future, be introduced to aggregate continuous-valued ratings or pairwise preferences.
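The split described above could be expressed with interfaces like these. This is our own sketch: the class and method names are invented, not those of the released code.

```python
# A possible modular decomposition of BSC (names are placeholders).
from abc import ABC, abstractmethod

class TrueLabelModel(ABC):
    @abstractmethod
    def update(self, ll, ElnT):
        """Return token posteriors r[n][tau, j] and transition
        posteriors s[n][tau, j, i] via forward-backward."""

class AnnotatorModel(ABC):
    @abstractmethod
    def update(self, r, annotations):
        """Refresh the pseudo-counts of q(A) or q(B), cf. Equation 17."""
    @abstractmethod
    def expected_log_likelihood(self):
        """Return E[ln A] (or E[ln B]), cf. Equation 18."""

class SequenceTagger(ABC):
    @abstractmethod
    def train(self, tokens, labels): ...
    @abstractmethod
    def predict(self, tokens): ...
```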
Experiments
We evaluate Bayesian sequence combination (BSC) with each of the annotator models described in Section 2 to assess whether the sequential annotator model, seq, improves the quality of the inferred sequence tags. The first experiment uses simulated annotators to investigate the effects of different types of error on aggregation methods. We then introduce two NLP datasets to test performance in passive and active learning scenarios, analyze errors, and visualize the learned annotator models. The experiments also assess whether including sequence taggers in the probabilistic model improves the aggregated sequence tags as well as the sequence taggers' predictions on test data.
Evaluated Methods
As well-established non-sequential baselines, we include token-level majority voting (MV), MACE (Hovy et al., 2013), Dawid-Skene (DS) (Dawid and Skene, 1979) and independent Bayesian classifier combination (IBCC) (Kim and Ghahramani, 2012), a Bayesian treatment of Dawid-Skene. We also test the HMM-crowd method (Nguyen et al., 2017), which uses a combination of maximum a posteriori (or smoothed maximum likelihood) estimates for the confusion vector (CV) annotator model and variational inference for an integrated hidden Markov model (HMM). MACE and IBCC are variants of BSC-MACE and BSC-CM, respectively, with non-sequential true label models. HMM-crowd and DS use non-Bayesian inference steps and can be compared with their Bayesian variants, BSC-CV and IBCC, respectively.
BSC is tested with each of the different annotator models described in Section 2 and two black-box sequence taggers. As the default for all annotator models, we integrate a simple black-box classifier that treats all text features as conditionally independent of each other and of the sequence of labels. To determine the effect of each component of the model, we also test BSC-CM and BSC-seq without a text model (notext), and with the transition matrix, $T$, replaced by simple independent class probabilities (labeled \T). We also test the integration of BSC-seq with the BiLSTM-LSTM-CRF of Lample et al. (2016) as a black-box sequence tagger, labeled BSC-seq+LSTM. This ensemble is compared against the same LSTM-based method trained on the output predictions of HMM-crowd and BSC-seq (labeled LSTM). We use the implementation of Lample et al. (2016), which must be trained on discrete labels and outputs discrete predictions rather than probabilities. We follow the authors' recommendations for hyperparameters except for the optimizer, for which we use Adam to improve the convergence rate, as recommended by Reimers and Gurevych (2017).
Simulated Annotators
Simulated data allows us to test the effect of one type of error in the crowdsourced data, while keeping other characteristics of the data constant. This can be seen as a sanity check to ensure that both our model and the proposed inference method can handle certain types of error when we know them to be present. We generate crowds of 10 annotators for four experiments, which test the effect of varying (a) average annotator accuracy, (b) short span bias, i.e. the probability of not including the last tokens in a span, (c) missed span bias, i.e. the probability of missing a span entirely, and (d) the ratio of good to uninformative annotators in the crowd. We simulate annotators using the generative model of BSC-seq, drawing annotator labeling probabilities from Dirichlet distributions. By default, Dirichlet parameters corresponding to incorrect answers are 1, those for correct answers are 2.5, and those for disallowed transitions (O→I) are close to 0. We then change the parameters of these Dirichlet distributions to obtain the variations described above. We repeat each experiment 25 times, in each case generating 25 documents of 100 tokens each.

Figure 1 shows the F1-scores for our tested methods. Where annotator accuracy is high, majority voting is less accurate than methods that model individual annotator behavior, although the difference decreases as we introduce more errors. Among the BSC variants, performance increases with the complexity of the annotator model, from BSC-acc to BSC-seq, suggesting that the richer seq model can be successfully learned on a small dataset. There are some benefits for the Bayesian approaches, IBCC and BSC-CV, over the similar models, DS and HMM-crowd, respectively, in handling all four types of annotator error. This experiment on simulated data showed that our inference technique is able to handle certain types of error when the data are generated from a BSC model. The following sections describe experiments that test whether the benefits apply when BSC is used with real crowdsourced data.
Crowdsourced Datasets
We use two datasets containing both crowdsourced and gold sequential annotations. The CoNLL 2003 named-entity recognition dataset (Tjong Kim Sang and De Meulder, 2003), NER, contains gold labels for four named entity categories (PER, LOC, ORG, MISC), with crowdsourced labels provided by Rodrigues et al. (2014). PICO (Nguyen et al., 2017) consists of medical paper abstracts that have been annotated by a crowd to indicate text spans that identify the population enrolled in a clinical trial. Further information about the datasets is shown in Table 1. Note that NER spans are typically much shorter than those in PICO.
Evaluation metrics: For NER we use the CoNLL 2003 F1-score, which considers only exact span matches to be correct. For PICO, we use the relaxed F1-measure (Nguyen et al., 2017), which counts the matching fractions of spans when computing precision and recall. Since the spans in PICO are longer than those of NER, partial matches may still contain much of the required information. We also compute the cross entropy error (CEE) at the level of tokens to compare the probability estimates produced by aggregation methods, which are useful for decision-making tasks such as active learning.
Aggregating Crowdsourced Labels
In this task, we use the aggregation methods to combine multiple crowdsourced labels and predict the true labels for the same documents. For both datasets, we provide all the crowdsourced labels as input to the aggregation method. In both cases, we split the gold-labeled documents into 50% validation and test sets. For NER, we use the split given by Nguyen et al. (2017), while for PICO, the split was not available, so our results are not directly comparable to theirs.
We tune the hyperparameters using a validation set. To limit the number of hyperparameters to tune, we optimize only three values for BSC. Hyperparameters of the transition matrix, $\gamma_j$, are set to the same value, $\gamma_0$, except for disallowed transitions (O→I and transitions between types, e.g. I-PER→I-ORG), which are set to 0.1. For the annotator models (both $A$ and $B$), all values are set to $\alpha_0$, except for disallowed transitions, which are set to 0.1; a third value, $\epsilon_0$, is then added to the hyperparameters corresponding to correct annotations (e.g. diagonal entries in a confusion matrix). We use $\epsilon_0$ to encode the prior assumption that annotators are more likely to have an accuracy greater than random. This avoids the non-identifiability problem, in which the class labels become switched around. We use validation set F1-scores to choose the values from [0.1, 1, 10, 100], training on a small subset of 250 documents for NER and 500 documents for PICO. For the integrated BSC-seq+LSTM, we found better validation set performance on both our datasets if the LSTM is first excluded while the other parameters converge, before training the LSTM. This simply means that we follow Algorithm 1, but omit steps 3, 4 and 6 in the first few iterations. Note that we use the dev set at each VB iteration to select the best LSTM model after each epoch. These steps reduce over-fitting resulting from the maximum likelihood step used to integrate the LSTM as a black-box sequence tagger.
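A sketch of this prior construction follows. It is our illustration: the symbol $\epsilon_0$ names the third tuned value described above, and the BIO indexing (0='O', 1='B', 2='I') is an assumption.

```python
# Constructing BSC priors as described above; gamma_0, alpha_0 and eps_0
# are the three tuned values, with toy settings used here.
import numpy as np

def build_priors(J, gamma_0, alpha_0, eps_0, disallowed):
    gamma = np.full((J, J), gamma_0)          # transition prior
    alpha = np.full((J, J, J), alpha_0)       # seq annotator prior
    for prev, cur in disallowed:
        gamma[prev, cur] = 0.1                # near-forbidden transitions
        alpha[:, prev, cur] = 0.1
    for m in range(J):
        alpha[m, :, m] += eps_0               # boost correct annotations
    return gamma, alpha

gamma, alpha = build_priors(3, gamma_0=1.0, alpha_0=1.0, eps_0=10.0,
                            disallowed=[(0, 2)])
print(gamma)
```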
The results of the aggregation task are shown in Table 2. Although DS and IBCC consider neither sequence information nor the text itself, they both perform well on both datasets, with IBCC reaching a better cross entropy error than DS due to its Bayesian treatment. The improvement of DS over the results given by Nguyen et al. (2017) may be due to implementation differences. None of MACE, BSC-acc and BSC-MACE performs strongly, with F1-scores sometimes falling below MV. The acc and MACE annotator models may be a poor match for the sequence labeling task if annotator competence varies greatly depending on the true class label.
BSC-seq outperforms the other approaches, although without the text model (BSC-seq-notext) or the transition matrix (BSC-seq\T), its performance decreases. However, for BSC-CM, the results are less clear: BSC-CM-notext differs from IBCC only in the inclusion of the transition matrix, $T$, yet IBCC outperforms BSC-CM-notext. This suggests that the combination of these elements is important: the seq annotator model is effective in combination with the transition matrix and the simple text model. Integrating an LSTM improves performance further on both datasets, and outperforms an LSTM trained on the output of HMM-crowd or BSC-seq.
We categorize the errors made by key methods and list the counts for each category in Table 3. All machine learning methods shown reduce the number of spans that were completely missed by majority voting. BSC-seq+LSTM increases the number of exact span matches on NER, but reduces this number substantially on PICO while increasing the number of partial matches and false positives (where no true span was present). This is due to a larger number of split spans, where a 'B' token is inserted incorrectly inside a span. Therefore, while BSC-seq outperforms the alternatives in terms of F1-score and missing spans, further work may be required to improve the distinction between 'B' and 'I' tokens.
To determine whether BSC-seq learns distinctive confusion matrices depending on the previous labels, we plot the learned annotator models for PICO as probabilistic confusion matrices in Figure 2. As the dataset contains a large number of annotators, we clustered the confusion matrices inferred by each model into five groups by applying K-means to their posterior expected values. In all clusters, BSC-CV learns different accuracies for B, I and O (the diagonal entries). These differences may explain its improvement over BSC-acc. BSC-CM differs from BSC-CV in that the first, fourth and fifth clusters have off-diagonal values with different heights for the same true label value. The second cluster for BSC-CM encodes likely spammers who usually choose 'O' regardless of the ground truth. The confusion matrices for BSC-seq are very different depending on the worker's previous annotation. Each column in the figure shows the confusion matrices corresponding to the same cluster of annotators. The first column, for example, shows annotators with a tendency toward I→I or O→O transitions, while the following clusters indicate very different labeling behavior. The model therefore appears able to learn distinct confusion matrices for different workers given previous labels, which supports the use of sequential annotator models.
Active Learning
Active learning iteratively selects informative data points to be labeled so that a model can be trained using less labeled data. Posterior probabilities output by Bayesian methods account for uncertainty in the model parameters, hence can be used to choose data points that rapidly reduce uncertainty. We hypothesize that BSC will learn more quickly than non-sequential methods in an active learning scenario. While various active learning methods could be applied here, in this paper we wish to demonstrate only that BSC may serve as a good foundation for active learning, and defer a deeper investigation of active learning techniques to future work. We therefore simulate active learning using a well-established technique, uncertainty sampling (Settles and Craven, 2008; Settles, 2010), as described in Algorithm 2.

Input: A random initial_set of training labels, the same for all methods.
1   Set training set c = initial_set
while training set size < max_no_labels do
2   Train model on c
3   Predict sequence labels for all documents
4   Compute the mean entropy of the sequence labels of each document:
    $-\frac{1}{L_n} \sum_{\tau=1}^{L_n} \sum_{j=1}^{J} p(t_{n,\tau} = j \mid c) \ln p(t_{n,\tau} = j \mid c)$
5   Select batch_size documents with the highest mean entropy and add their annotations to c
end

Algorithm 2: Active learning simulation for each method using uncertainty sampling.

The LSTM implementation provided by Lample et al. (2016) outputs discrete label predictions, so to allow direct comparison of BSC against a neural sequence tagger, we modify the network to output probabilities for the active learning simulation. For MV, probabilities are estimated by fractions of votes.

Figure 3 plots the mean F1 scores over ten repeats of the active learning simulation. IBCC learns more rapidly than DS on NER due to its Bayesian approach, which may also explain the stronger performance of BSC-CV compared to the similar HMM-crowd model, although this does not hold for the PICO dataset. BSC variants outperform non-sequential IBCC. BSC-CM and BSC-CV are strongest on PICO with small numbers of labels, but are later overtaken by BSC-seq, which may require more data to learn its more complex model. On NER, BSC-CM continues to outperform the more complex BSC-seq, but the integrated LSTM clearly improves BSC-seq+LSTM. BSC-seq LSTM performs strongly on NER but poorly on PICO, where fewer labels were provided, while BSC-seq+LSTM appears more robust to this problem.
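The document-selection step of Algorithm 2 amounts to ranking documents by mean token entropy; a minimal sketch (ours, with toy posteriors) is:

```python
# Mean-entropy uncertainty sampling over token posteriors (a sketch).
import numpy as np

def mean_entropy(r_n):
    """r_n: (L_n, J) posterior label probabilities for one document."""
    p = np.clip(r_n, 1e-12, 1.0)
    return float(-(p * np.log(p)).sum(axis=1).mean())

def select_batch(posteriors, batch_size):
    scores = np.array([mean_entropy(r_n) for r_n in posteriors])
    return np.argsort(-scores)[:batch_size]   # highest mean entropy first

docs = [np.array([[0.9, 0.05, 0.05]]), np.array([[0.4, 0.3, 0.3]])]
print(select_batch(docs, batch_size=1))       # picks the uncertain document
```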
Prediction with Crowd-Trained LSTMs
In previous work (Nguyen et al., 2017), HMM-crowd LSTM produced better predictions for documents not labeled by the crowd, compared with training an LSTM directly on crowdsourced data, or training on labels obtained from non-sequential aggregation methods. We evaluate whether the performance gains of BSC-seq LSTM for aggregation also result in better predictions on unannotated documents. We also test whether BSC-seq+LSTM can provide meaningful confidence estimates when the sequence tagger it integrates produces only discrete labels. For NER, we evaluate on the CoNLL English test set (Tjong Kim Sang and De Meulder, 2003).
The results in Table 4 show that, in terms of F1-scores, BSC-seq LSTM outperforms the previous state-of-the-art, HMM-crowd LSTM. BSC-seq+LSTM produces a low cross entropy error, indicating that the probabilities it outputs are a good reflection of confidence and are likely to be more suitable for downstream decision-making tasks than the raw outputs from the LSTM sequence tagger.
Discussion and Conclusions
We proposed BSC-seq, a Bayesian approach to aggregating sequence labels, which models the effect of label sequences on annotator reliability. Our results reinforce previous work that has demonstrated the benefits of modeling annotator reliability when aggregating noisy data, such as crowdsourced labels. We showed that sequential models outperform non-sequential baselines and that BSC-seq improves the state-of-the-art over HMM-crowd. Its performance depends on the combination of the sequential annotator model, label transition matrix, and text model. We further improved the quality of aggregated labels by integrating existing sequence taggers into our variational inference approach as black-box training and prediction functions. This technique performed well with larger amounts of labeled data, but may benefit from the use of pre-trained neural sequence taggers when the dataset is very small. Future work will evaluate integrating sequence taggers built on Bayesian deep learning, which may improve active learning. We will also investigate how to set priors for the reliability of black-box methods by testing them on other training sets of similar size.
Figure 1: F1 scores with simulated annotators. Each plot shows the effect of varying one characteristic.
Figure 2: Clusters of confusion matrix representations from each BSC annotator model (BSC-CV, BSC-CM and BSC-seq) trained on PICO.
Figure 3: F1-scores for active learning simulations using uncertainty sampling.
Table 1: Numbers of sentences, annotators, and spans for datasets used in our experiments. Sentences with crowd all have crowdsourced labels. Only dev and test sentences have gold sequence labels.

Dataset | Sentences with crowd (total / dev / test) | Sentences without crowd (dev / test) | Tokens/sent. | Annotators (total / per doc) | Span type | Gold spans | Span length (mean / std.)
NER  | 6056 / 2800 / 3256 | 216 / 231 | 13  | 47 / 4.9  | PER  | 6282 | 1.19 / 0.49
     |                    |           |     |           | LOC  | 6482 | 1.73 / 0.57
     |                    |           |     |           | ORG  | 5789 | 1.55 / 0.92
     |                    |           |     |           | MISC | 3059 | 1.44 / 0.80
PICO | 9480 / 191 / 191   | 191 / 191 | 150 | 312 / 6.0 | pop. | 700  | 7.74 / 7.38
Table 2: Aggregating crowdsourced labels: estimating true labels for documents labeled by the crowd.
Table 3: Counts of different types of span errors.
Table 4: Prediction performance on test datasets with training on crowdsourced labels.
References

Shadi Albarqouni, Christoph Baur, Felix Achilles, Vasileios Belagiannis, Stefanie Demirci, and Nassir Navab. 2016. Aggnet: deep learning from crowds for mitosis detection in breast cancer histology images. IEEE Transactions on Medical Imaging, 35(5):1313-1321.
Hagai Attias. 2000. A variational Bayesian framework for graphical models. In Advances in Neural Information Processing Systems 12, pages 209-215. MIT Press.
Yoram Bachrach, Tom Minka, John Guiver, and Thore Graepel. 2012. How to grade a test without knowing the answers: a Bayesian graphical model for adaptive crowdsourcing and aptitude testing. In Proceedings of the 29th International Conference on Machine Learning, pages 819-826. Omnipress.
Alan Joseph Bekker and Jacob Goldberger. 2016. Training deep neural-networks based on unreliable labels. In Acoustics, Speech and Signal Processing (ICASSP), 2016 IEEE International Conference on, pages 2682-2686. IEEE.
C. M. Bishop. 2006. Pattern Recognition and Machine Learning. Information Science and Statistics. Springer.
A. P. Dawid and A. M. Skene. 1979. Maximum likelihood estimation of observer error-rates using the EM algorithm. Journal of the Royal Statistical Society. Series C (Applied Statistics), 28(1):20-28.
A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological), 39(1):1-38.
Pinar Donmez, Jaime Carbonell, and Jeff Schneider. 2010. A probabilistic framework to learn from multiple annotators with time-varying accuracy. In Proceedings of the 2010 SIAM International Conference on Data Mining, pages 826-837. SIAM.
Paul Felt, Eric K. Ringger, and Kevin D. Seppi. 2016. Semantic annotation aggregation with conditional crowdsourcing models and word embeddings. In International Conference on Computational Linguistics, pages 1787-1796.
Zoubin Ghahramani. 2001. An introduction to hidden Markov models and Bayesian networks. International Journal of Pattern Recognition and Artificial Intelligence, 15(01):9-42.
Matthew R. Gormley, Margaret Mitchell, Benjamin Van Durme, and Mark Dredze. 2014. Low-resource semantic role labeling. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1177-1187. Association for Computational Linguistics.
Dirk Hovy, Taylor Berg-Kirkpatrick, Ashish Vaswani, and Eduard H. Hovy. 2013. Learning whom to trust with MACE. In HLT-NAACL, pages 1120-1130.
Hyun-Chul Kim and Zoubin Ghahramani. 2012. Bayesian classifier combination. In International Conference on Artificial Intelligence and Statistics, pages 619-627.
Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, and Chris Dyer. 2016. Neural architectures for named entity recognition. In Proceedings of NAACL-HLT, pages 260-270.
Xuezhe Ma and Eduard Hovy. 2016. End-to-end sequence labeling via bi-directional LSTM-CNNs-CRF. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1064-1074.
Pablo G. Moreno, Yee Whye Teh, and Fernando Perez-Cruz. 2015. Bayesian nonparametric crowdsourcing. Journal of Machine Learning Research, 16:1607-1627.
An T. Nguyen, Byron C. Wallace, Junyi Jessy Li, Ani Nenkova, and Matthew Lease. 2017. Aggregating and predicting sequence labels from crowd annotations. In Proceedings of the Conference. Association for Computational Linguistics. Meeting, volume 2017, page 299. NIH Public Access.
Silviu Paun, Bob Carpenter, Jon Chamberlain, Dirk Hovy, Udo Kruschwitz, and Massimo Poesio. 2018. Comparing Bayesian models of annotation. Transactions of the Association for Computational Linguistics, 6:571-585.
Barbara Plank, Dirk Hovy, and Anders Søgaard. 2014. Learning part-of-speech taggers with inter-annotator agreement loss. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 742-751.
Pushpendre Rastogi, Ryan Cotterell, and Jason Eisner. 2016. Weighting finite-state transductions with neural context. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 623-633.
V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. 2010. Learning from crowds. Journal of Machine Learning Research, 11:1297-1322.
Nils Reimers and Iryna Gurevych. 2017. Optimal hyperparameters for deep LSTM-networks for sequence labeling tasks. arXiv preprint arXiv:1707.06799, version 2.
Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2013. Learning from multiple annotators: distinguishing good from random labelers. Pattern Recognition Letters, 34(12):1428-1436.
Filipe Rodrigues, Francisco Pereira, and Bernardete Ribeiro. 2014. Sequence labeling with multiple annotators. Machine Learning, 95(2):165-181.
Filipe Rodrigues and Francisco Camara Pereira. 2018. Deep learning from crowds. In The Thirty-Second AAAI Conference on Artificial Intelligence (AAAI), 2018.
Burr Settles. 2010. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison, 52(55-66):11.
Burr Settles and Mark Craven. 2008. An analysis of active learning strategies for sequence labeling tasks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1070-1079. Association for Computational Linguistics.
Aashish Sheshadri and Matthew Lease. 2013. SQUARE: A benchmark for research on computing crowd consensus. In First AAAI Conference on Human Computation and Crowdsourcing.
E. Simpson, S. Roberts, I. Psorakis, and A. Smith. 2013. Dynamic Bayesian combination of multiple imperfect classifiers. Intelligent Systems Reference Library series, Decision Making with Imperfect Decision Makers:1-35.
Edwin D. Simpson, Matteo Venanzi, Steven Reece, Pushmeet Kohli, John Guiver, Stephen J. Roberts, and Nicholas R. Jennings. 2015. Language understanding in the wild: Combining crowdsourcing and machine learning. In Proceedings of the 24th International Conference on World Wide Web, pages 992-1002. International World Wide Web Conferences Steering Committee.
Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.
Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, pages 142-147. Association for Computational Linguistics.
Matteo Venanzi, John Guiver, Gabriella Kazai, Pushmeet Kohli, and Milad Shokouhi. 2014. Community-based Bayesian aggregation models for crowdsourcing. In 23rd International Conference on World Wide Web, pages 155-164.
Matteo Venanzi, John Guiver, Pushmeet Kohli, and Nicholas R. Jennings. 2016. Time-sensitive Bayesian information aggregation for crowdsourcing systems. Journal of Artificial Intelligence Research, 56:517-545.
Andrew Viterbi. 1967. Error bounds for convolutional codes and an asymptotically optimum decoding algorithm. IEEE Transactions on Information Theory, 13(2):260-269.
Jacob Whitehill, Ting-Fan Wu, Jacob Bergsma, Javier R. Movellan, and Paul L. Ruvolo. 2009. Whose vote should count more: Optimal integration of labels from labelers of unknown expertise. In Advances in Neural Information Processing Systems, pages 2035-2043.
Hui Yuan Xiong, Yoseph Barash, and Brendan J. Frey. 2011. Bayesian prediction of tissue-regulated splicing using RNA sequence and cellular context. Bioinformatics, 27(18):2554-2562.
Jie Yang, Thomas Drake, Andreas Damianou, and Yoelle Maarek. 2018. Leveraging crowdsourcing data for deep active learning an application: Learning intents in Alexa. In Proceedings of the 2018 World Wide Web Conference on World Wide Web, pages 23-32. International World Wide Web Conferences Steering Committee.
Harry Zhang. 2004. The optimality of naïve Bayes. In Proceedings of the Seventeenth International Florida Artificial Intelligence Research Society Conference, FLAIRS 2004. AAAI Press.
Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer learning for low-resource neural machine translation. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1568-1575.
Learning Lexical Entries for Robotic Commands using Crowdsourcing
Junjie Hu (junjieh@cs.cmu.edu), Jean Oh (jeanoh@nrec.ri.cmu.edu), Anatole Gershman (anatoleg@cs.cmu.edu)
School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213
Robotic commands in natural language usually contain various spatial descriptions that are semantically similar but syntactically different. Mapping such syntactic variants into semantic concepts that can be understood by robots is challenging due to the high flexibility of natural language expressions. To tackle this problem, we collect robotic commands for navigation and manipulation tasks using crowdsourcing. We further define a robot language and use a generative machine translation model to translate robotic commands from natural language to robot language. The main purpose of this paper is to simulate the interaction process between humans and robots using crowdsourcing platforms, and to investigate the possibility of translating natural language to robot language with paraphrases.
Introduction
Natural language provides an efficient way for untrained humans to instruct a robot to perform collaborative tasks, e.g., navigation and manipulation. However, learning to interpret the meaning of natural language commands is a challenging task (Dukes 2014; Perera and Allen 2013; Chen and Mooney 2011), especially when the robot has little or no prior knowledge of the phrasal expressions in natural language. Due to the high flexibility of natural language, it is non-trivial for a robot to cover all the phrasal expressions in natural language when its interpretation module is initially built.
Popular crowdsourcing platforms, such as Amazon Mechanical Turk, provide a fast and cheap way to collect interactive data from participants in a wide range of different communities. Hence, simulating the human-machine interaction process for information extraction on crowdsourcing platforms has attracted considerable research interest (Nguyen, Wallace, and Lease 2015; Hladká, Hana, and Luksová 2014; Goldberg, Wang, and Kraska 2013). To encourage the diversity of robotic commands, we simulate the interactive process between a robot and various untrained users on Amazon Mechanical Turk, and collect robotic commands during the process. We further apply a phrase-based machine translation model to map natural language commands to a robotic language that can be understood by a robot.
Phrase-based Machine Translation Model
To tackle the problem of translating natural language commands to language that can be understood by robots, we first define a robot language that consists of predefined key concepts in the robotic task domains. For example, in the navigation task domain, we define the following key concepts.
• Action := navigate
• Object := traffic barrel | building | car
• Relation := left | right | front | back

Each robot language command can be deterministically constructed by a combination of key concepts in the task domains. See Figure 1 for an illustration. We then adapt a phrase-based machine translation model to translate robotic commands from natural language to the robot language. For the phrase-based machine translation model, the key component is the extracted phrase table that stores several lexical entries. For a particular input (source-language) sentence $s = s_1 \cdots s_n$, each lexical entry is defined as a tuple $(b, e, r)$, specifying that the span $s_b \cdots s_e$ in the source-language sentence can be translated as the target-language string $r$. For each lexical entry $p = (b, e, r)$, we estimate a score $g(p) \in \mathbb{R}$ that measures the likelihood of translating the span to the target-language string by relative frequency under the translation model. For a given lexical entry $p$, we write $b(p)$, $e(p)$ and $r(p)$ for its three components, respectively. A derivation $y$ of a source-language sentence is defined as a finite sequence of phrases, $p_1, p_2, \ldots, p_L$. For any derivation $y$, $r(y)$ refers to the translation sentence constructed by concatenating the strings $r(p_1), r(p_2), \ldots, r(p_L)$. For a source-language sentence $s$, we denote by $\mathcal{Y}(s)$ the set of possible derivations of $s$.
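As an illustration of how the concept grammar defined at the start of this section determines the space of robot language commands, the toy generator below (our sketch; the sentence template mirrors the examples in Figure 1) enumerates the possible combinations.

```python
# A toy generator for robot language commands from the concept grammar
# above; the template string is our own illustration.
from itertools import product

actions = ["navigate"]
objects = ["traffic barrel", "building", "car"]
relations = ["left", "right", "front", "back"]

def commands():
    for act, obj1, rel, obj2 in product(actions, objects, relations, objects):
        if obj1 != obj2:                      # skip degenerate self-references
            yield (f"{act} (Action) to the {obj1} (Object) that is on the "
                   f"{rel} (Relation) of the {obj2} (Object)")

print(next(commands()))
```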
Based on the above notation, we aim to extract lexical entries from parallel textual corpora collected on crowdsourcing platforms, and to seek, using beam search, the optimal derivation $y^*$ with the maximum derivation score $f(y^*)$ among all possible derivations $\mathcal{Y}(s)$ under a phrase-based translation model.
In Equation 1, the score $f(y)$ of a derivation $y$ consists of three parts: (1) $h(r(y))$ is the log-probability of the target string $r(y)$ under a smoothed trigram language model; (2) $g(p_k)$ is the score of $p_k$ under a translation model; (3) $|e(p_k) + 1 - b(p_{k+1})|$ is the distortion penalty for reordering word alignments between the source and target languages.
$$f(y) = w_h \, h(r(y)) + w_g \sum_{k=1}^{L} g(p_k) + w_d \sum_{k=1}^{L-1} \left| e(p_k) + 1 - b(p_{k+1}) \right|, \qquad (1)$$
where $w_h$, $w_g$ and $w_d$ are the weights of the scores given by the language model, the translation model and the distortion penalty, respectively. Hence the optimal derivation of a source-language sentence $s$ can be obtained by $\arg\max_{y \in \mathcal{Y}(s)} f(y)$.
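A direct transcription of Equation 1 is sketched below (ours): the language model is stubbed out with a placeholder function, and the weight values are arbitrary, with $w_d$ chosen negative so that reordering is penalised.

```python
# Scoring a derivation with Equation 1 (toy weights; lm_logprob is a stub).
def derivation_score(phrases, lm_logprob, w_h=1.0, w_g=1.0, w_d=-0.5):
    """phrases: lexical entries (b, e, r, g) in derivation order, where
    (b, e) is the source span, r the target string, g its score."""
    target = " ".join(r for _, _, r, _ in phrases)
    score = w_h * lm_logprob(target)                     # language model term
    score += w_g * sum(g for _, _, _, g in phrases)      # translation model term
    score += w_d * sum(abs(phrases[k][1] + 1 - phrases[k + 1][0])
                       for k in range(len(phrases) - 1)) # distortion term
    return score

# Monotone derivation of "go to the car" -> "navigate to the car":
phrases = [(1, 2, "navigate to", -0.2), (3, 4, "the car", -0.1)]
print(derivation_score(phrases, lm_logprob=lambda t: -1.0))
```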
Experiment
We present the process of collecting experimental data on Amazon Mechanical Turk, a popular crowdsourcing platform, and extract parallel lexical entries using Moses (Koehn et al. 2007), a machine translation tool.
Simulation and Data Collection
By showing an image that depicts the behaviour of a robot, a turker is first asked to give a command in English (denoted as $s$) that clearly indicates the spatial information between objects in the environment for a robot. Next, the turker is shown some robotic concepts in several drop-down lists, and asked to select the correct robotic concepts that can be used to construct a robotic command (denoted as $r$) for the same image. Finally, we simulate the scenario where the robot can actively ask for a paraphrase sentence (denoted as $t$) of the robotic command $r$ in order to help it understand $s$. In total, we collect 88 tuples of $(s, t, r)$ for the navigation task and 120 tuples of $(s, t, r)$ for the manipulation task.
Phrasal Lexicon Extraction and Translation
To investigate the possibility of using paraphrase sentences to enhance the phrase-based machine translation, we first use Moses to extract parallel phrases between $s$ and $r$. Then we use Moses to extract parallel phrases between $t$ and $r$. Table 1 shows the total number of extracted lexical entries when we translate from $s$ to $r$ and from $t$ to $r$. Comparing the second column with the third one in Table 1, we observe that more lexical entries are extracted from parallel sentences between $t$ and $r$ than from those between $s$ and $r$. This supports our idea that turkers usually paraphrase natural language commands so that they are semantically closer to the robot language commands after the robotic concepts are shown to them. Table 2 shows some lexical entries extracted from natural language commands $t$ paired with robot language commands $r$. We observe that the extracted lexical entries capture the similarity between source-language phrases and target-language phrases, thus enabling a many-to-one mapping from syntactic variants in natural language to unique robotic concepts.

Figure 1: Navigation examples: (a) navigate (Action) to the traffic barrel (Object) that is on the right (Relation) of the building (Object); (b) navigate (Action) to the car (Object) that is on the back (Relation) of the building (Object).

By optimizing the objective function in Equation 1, we generate the translated robot language sentence using the extracted lexical entries. In Table 3, we show two translation results of the examples used in Figure 1. In the first result, the machine translation model successfully translates the natural language command to the correct robot language command. In the second result, the translation is not completely correct because the natural language command contains the detailed steps for the navigation task. Mapping detailed descriptions to highly abstract robot concepts requires more sophisticated semantic reasoning over the natural language. We leave this as future work.
Conclusion
In this paper, we simulate human-robot communication on Amazon Mechanical Turk and collect robotic commands for navigation and manipulation tasks using crowdsourcing. We further investigate the possibility of bridging the gap between natural language commands and robot language commands using paraphrasing. We will pursue future work in several challenging directions. First, lexical entries extracted from different but similar robotic tasks can be shared across tasks. Second, machine teaching by paraphrasing can be integrated with active learning techniques: robots can perform reasoning over confusing phrases and actively ask their human partners for paraphrases.
Table 1: Number of extracted lexical entries

Task         | #phrases from (s, r) | #phrases from (t, r)
Navigation   | 160                  | 748
Manipulation | 128                  | 298
Table 2: Examples of extracted lexical entries

Navigation Task
Natural Language                       Robot Language
go straight until you reach a car      navigate to the car
backyard of the building               behind the building
find the car                           to the car
which stands before                    that is in front
move forward to                        navigate to
located at the right hand side of      is on the right of
Table 3: Examples of phrase-based translation (Navigation Task)

Natural Language: go to the traffic barrel that is located on the right hand side of the building
Translated Robot Language: navigate (Action) to the traffic barrel (Object) that is on the right (Relation) of the building (Object)

Natural Language: go straight forward until you reach the building. go to the car behind the building.
Translated Robot Language: navigate (Action) to the building (Object) that is navigate (Action) to the car (Object) that is behind (Relation) the building (Object)
Acknowledgments

This work was conducted in part through collaborative participation in the Robotics Consortium sponsored by the U.S. Army Research Laboratory under the Collaborative Technology Alliance Program, Cooperative Agreement W911NF-10-2-0016, and in part by ONR under MURI grant "Reasoning in Reduced Information Spaces" (no. N00014-09-1-1052). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation herein.
References

D. L. Chen and R. J. Mooney. Learning to interpret natural language navigation instructions from observations. In Proceedings of the Twenty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2011), San Francisco, California, USA, August 7-11, 2011.

K. Dukes. SemEval-2014 task 6: Supervised semantic parsing of robotic spatial commands. SemEval 2014.

S. L. Goldberg, D. Z. Wang, and T. Kraska. CASTLE: Crowd-assisted system for text labeling and extraction. In Proceedings of the First AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2013), Palm Springs, CA, USA, November 7-9, 2013.

B. Hladká, J. Hana, and I. Luksová. Crowdsourcing in language classes can help natural language processing. In Proceedings of the Second AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2014), Pittsburgh, Pennsylvania, USA, November 2-4, 2014.

P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, et al. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180. Association for Computational Linguistics, 2007.

A. T. Nguyen, B. C. Wallace, and M. Lease. Combining crowd and expert labels using decision theoretic active learning. In Proceedings of the Third AAAI Conference on Human Computation and Crowdsourcing (HCOMP 2015), San Diego, California, November 8-11, 2015, pages 120-129.

I. E. Perera and J. F. Allen. SALL-E: Situated agent for language learning. In Proceedings of the Twenty-Seventh AAAI Conference on Artificial Intelligence, Bellevue, Washington, USA, July 14-18, 2013.
Workshop track - ICLR 2018

Neural Program Search: Solving Programming Tasks from Description and Examples

Illia Polosukhin (NEAR)
Alex Skidanov (NEAR)

Abstract

We present Neural Program Search, an algorithm to generate programs from a natural language description and a small number of input / output examples. The algorithm combines methods from the Deep Learning and Program Synthesis fields by designing a rich domain-specific language (DSL) and defining an efficient search algorithm over it, guided by a Seq2Tree model. To evaluate the quality of the approach we also present a semi-synthetic dataset of descriptions with test examples and corresponding programs. We show that our algorithm significantly outperforms a sequence-to-sequence model with attention baseline.
Introduction
The ability to synthesize a program from user intent (specification) is considered one of the central problems in artificial intelligence (Green (1969)). Significant progress has been made recently both in program synthesis from examples (e.g. Balog et al. (2016), Polozov & Gulwani (2015), Ellis & Gulwani (2017)) and in program synthesis from descriptions (e.g. Desai et al. (2016), Zhong et al. (2017), Lin et al. (2017), Ling et al. (2016)).
Programming by example techniques such as Flash Fill (Gulwani et al. (2012)) and BlinkFill (Singh (2016)) were developed to help users perform data transformation tasks using examples instead of writing programs. These methods rely on a small domain-specific language (DSL) and then develop algorithms to efficiently search the space of programs. Two shortcomings of these approaches are that the DSL limits the types of programs that can be synthesized, and that a large engineering effort is needed to fine-tune such systems.
Program synthesis from description has not yet been applied widely in practice. One of the challenges is that natural language is very ambiguous, yet there are very strict requirements for the synthesized programs (see Yin & Neubig (2017) and Rabinovich et al. (2017) for some discussion). In this paper we present Neural Program Search, which learns from both description and examples and achieves accuracy and speed high enough to be applicable in practice.
We specifically consider the problem of synthesizing programs from a short description and several input / output pairs. By combining description and sample tests we address both limitations of programming by example and natural language program inference. We propose a LISP-inspired DSL that is capable of representing solutions to many simple problems similar to those given as data transformation homework assignments, yet is rather concise, making it more tractable to search the space of programs in this DSL.
We propose a combination of two techniques: search in the program space, guided by a deep learning model. This way we can use the latest advances in natural language understanding with the precision of search techniques. We use a Seq2Tree model (Alvarez-Melis & Jaakkola (2016)) that consists of a sequence encoder that reads the problem statement and a tree decoder augmented with attention that computes the probabilities of each symbol in an AST node, one node at a time. We then run a tree beam search that uses those probabilities to compute a number of most likely trees, and chooses one that is consistent with the given input/output pairs.
To evaluate the proposed model, we have created a partially synthetic dataset, AlgoLISP, consisting of problem statements, solutions in our DSL, and tests. We show that search guided by deep learning models achieves significantly better results than either of the two techniques separately.
Related Work
We describe the related work from the domains of programming by example, programming from description, and latent program induction, and from the related field of semantic parsing.
Programming by Example. There have been several practical applications of programming by example based on search techniques and carefully crafted heuristics, such as Gulwani (2014). Also notable is recent work on the application of deep learning to programming by example, such as RobustFill (Devlin et al. (2017)), and on combining deep learning models with traditional search techniques, including DeepCoder (Balog et al. (2016)), Neuro-Symbolic Program Synthesis (Parisotto et al. (2016)) and Deep API Programmer (Bhupatiraju et al. (2017)). "Neuro-Symbolic Program Synthesis" is similar to this work in that it predicts tree-structured programs and leverages that at search time. Gaunt et al. (2016) provides a comparison of various program-synthesis-from-examples approaches on different benchmarks, showing limitations of existing gradient descent models.

Programming from Description. Program synthesis from natural language descriptions has seen a revival recently with progress in natural language understanding; examples of such work include Desai et al. (2016), Zhong et al. (2017), Lin et al. (2017) and Ling et al. (2016). Advances in this field are limited by small and/or noisy datasets and by the limitations of existing deep learning models when it comes to decoding highly structured sequences such as programs.
Latent Program Induction. There has been a plethora of recent work on teaching neural networks the functional behavior of programs by augmenting the neural networks with additional computational modules such as Neural Turing Machines (Graves et al. (2014)), Neural GPUs (Kaiser & Sutskever (2015)), stack-augmented RNNs (Joulin & Mikolov (2015)) and Neural Programmer-Interpreters (Reed & De Freitas (2015)). Two main limitations of these approaches are that the models must be trained separately for each task and that they do not expose an interpretable program back to the user.
Semantic Parsing. Semantic parsing is a field related to program synthesis from description, in which the space of programs is limited to some structured form. Notable work includes Dong & Lapata (2016) and Berant et al. (2013). In another line of research, latent programs for semantic parsing are learned from examples, e.g. Neelakantan et al. (2016), Liang et al. (2016).
Neural Program Search
This section describes the DSL used for modeling, our neural network architecture and an algorithm for searching in program space.
Domain Specific Language
There are multiple reasons to use a domain specific language for code generation instead of an existing programming language. One reason is to be able to convert a program to multiple target languages for practical applications (e.g. SQL, Python, Java), which requires our DSL to be sufficiently general. Second, designing a DSL from scratch allows us to add constraints that simplify its automated generation.
Our DSL is inspired by LISP, a functional language that can be easily represented as an Abstract Syntax Tree and supports higher-order functions. We augmented our DSL with a type system. While types do not appear in programs, each constant, argument or function has a type. A type is either an integer, a string, a boolean, a function, or an array of other non-function types.
A program in the DSL comprises a set of arguments (where each argument is defined by its name and type) and a program tree where each node belongs to one of the following symbol types: constant, argument, function call, function, or lambda. See Figure 1 for a partial specification of the DSL.
The DSL also has a library of standard functions. Each function has a return type and a constant number of arguments, with each argument having its own type. The type system greatly reduces the number of possible combinations for each node in the program tree during search.
program       -> symbol
symbol        -> constant | argument | function call | function | lambda
constant      -> number | string | True | False
function call -> (function name arguments)
function      -> function name
arguments     -> symbol | arguments , symbol
function name -> reduce | filter | map | head | + | - ...
lambda        -> lambda function call

Figure 1: Partial specification of the DSL used for this work.
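To make the semantics concrete, below is a minimal sketch of an interpreter for a DSL of this shape. The tuple-based AST representation and the exact function library (including names like partial0, which follow the examples in Table 2 of Section 4) are assumptions for illustration; the paper does not publish a reference implementation.

# A minimal sketch of an interpreter for the LISP-like DSL above.
from functools import reduce as _reduce

# Library of standard functions: each entry is (arity, implementation).
LIB = {
    "+": (2, lambda x, y: x + y),
    "-": (2, lambda x, y: x - y),
    "head": (1, lambda xs: xs[0]),
    "map": (2, lambda xs, f: [f(x) for x in xs]),
    "filter": (2, lambda xs, f: [x for x in xs if f(x)]),
    # reduce(xs, init, f): fold f over xs starting from init.
    "reduce": (3, lambda xs, init, f: _reduce(f, xs, init)),
    # partial0(v, f): bind v as the first argument of f.
    "partial0": (2, lambda v, f: lambda x: f(v, x)),
    "<": (2, lambda x, y: x < y),
}

def evaluate(node, env):
    """Evaluate an AST node: a constant, an argument name, or a function
    call represented as a tuple (function_name, arg_1, ..., arg_n)."""
    if isinstance(node, tuple):          # function call
        name, *args = node
        _, fn = LIB[name]
        return fn(*(evaluate(a, env) for a in args))
    if isinstance(node, str):            # argument or function reference
        if node in env:
            return env[node]
        _, fn = LIB[node]
        return fn
    return node                          # constant

# "Given an array, find the sum of its elements": (reduce a 0 +)
program = ("reduce", "a", 0, "+")
print(evaluate(program, {"a": [3, 1, 4]}))  # -> 8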
Seq2Tree
Our neural network model uses an attentional encoder-decoder architecture. The encoder uses an RNN to embed the concatenation of the arguments Args and the tokenized textual description of the task Text. The decoder is a doubly-recurrent neural network for generating tree structured output (Alvarez-Melis & Jaakkola (2016)). At each step of decoding, attention is used to augment the current step with relevant information from the encoder.
Formally, let T = {V, E, L} be a connected labeled tree, where V is the set of nodes, E is the set of edges and L are the node labels. Let H_e be a matrix of stacked problem statement encodings (outputs from the encoder's RNN). Let g_p and g_s be functions that apply one step of the two separate RNNs. For a node i with parent p(i) and previous sibling s(i), the ancestral and fraternal hidden states are updated via:
c_i^p = context(x_{p(i)}, H_e),    h_i^p = g_p(h_{p(i)}^p, c_i^p)    (1)
c_i^s = context(x_{s(i)}, H_e),    h_i^s = g_s(h_{s(i)}^s, c_i^s)    (2)
where x_{p(i)} and x_{s(i)} are the vectors representing the parent's and the previous sibling's values, respectively. The function context(x, H_e) computes the current context using the general attention mechanism (Luong et al. (2015)), aligning the parent or sibling representation with the encoder representations and combining it with x in a non-linear way:
a = softmax(H_e W_a x)    (3)
r = a^T H_e    (4)
context = tanh((r || x) W_c)    (5)
where || denotes vector concatenation and W_a and W_c are learnable parameters. Once the hidden depth and width states have been updated with these observed labels, they are combined to obtain a full hidden state:
h_i = U_p h_i^p + U_s h_i^s    (6)
where U_p and U_s are learnable parameters. This state contains the combined information from the parent and siblings as well as attention to the encoder representation, and is used to predict the label of the node. In its simplest form (without placeholders), the label for node i can be computed by sampling from the distribution:
o_i = softmax(W h_i)    (7)
After the node's output symbol l̂_i has been obtained by sampling from o_i, x_i is obtained by embedding l̂_i using W^T. The cell then passes (h_i^p, x_i) to all its children and (h_i^s, x_i) to the next sibling (if any), enabling them to apply Eqs. (1) and (2) to compute their states. This procedure continues recursively, following the schema defined by the DSL being decoded. The model is trained using back-propagation. Teacher forcing is used: the target topology of the code tree is followed, and target labels are fed for parent / sibling nodes. The error is obtained using the cross-entropy loss of o_i with respect to the true label l_i for each decoded node.
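A minimal PyTorch sketch of one decoding step, covering Eqs. (1)-(7), is shown below. The module layout, GRU cells, and dimension handling are illustrative assumptions; the paper's exact implementation may differ.

import torch
import torch.nn as nn

class Seq2TreeDecoderCell(nn.Module):
    def __init__(self, hidden, vocab):
        super().__init__()
        self.g_p = nn.GRUCell(hidden, hidden)   # ancestral (parent) RNN
        self.g_s = nn.GRUCell(hidden, hidden)   # fraternal (sibling) RNN
        self.W_a = nn.Linear(hidden, hidden, bias=False)       # Eq. (3)
        self.W_c = nn.Linear(2 * hidden, hidden, bias=False)   # Eq. (5)
        self.U_p = nn.Linear(hidden, hidden, bias=False)       # Eq. (6)
        self.U_s = nn.Linear(hidden, hidden, bias=False)
        self.W_o = nn.Linear(hidden, vocab)     # output projection, Eq. (7)

    def context(self, x, H_e):
        # a = softmax(H_e W_a x); r = a^T H_e; context = tanh((r || x) W_c)
        a = torch.softmax(H_e @ self.W_a(x), dim=0)
        r = a @ H_e
        return torch.tanh(self.W_c(torch.cat([r, x])))

    def forward(self, x_p, h_p, x_s, h_s, H_e):
        c_p = self.context(x_p, H_e)                            # Eq. (1)
        c_s = self.context(x_s, H_e)                            # Eq. (2)
        h_p_new = self.g_p(c_p.unsqueeze(0), h_p.unsqueeze(0)).squeeze(0)
        h_s_new = self.g_s(c_s.unsqueeze(0), h_s.unsqueeze(0)).squeeze(0)
        h = self.U_p(h_p_new) + self.U_s(h_s_new)               # Eq. (6)
        return torch.softmax(self.W_o(h), dim=-1), h_p_new, h_s_new

# Example usage with the paper's hidden size of 100 and an encoder output
# of 7 tokens (shapes are assumptions for this sketch).
cell = Seq2TreeDecoderCell(hidden=100, vocab=230)
H_e = torch.randn(7, 100)
x = torch.zeros(100); h = torch.zeros(100)
probs, h_p, h_s = cell(x, h, x, h, H_e)   # probs over node symbols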
Exploring alternative methods of training, such as REINFORCE (similar to Zhong et al. (2017)) or using Search at training time, is left for future work.
Search
One of the central ideas of this work is to use Tree-Beam search in the program space, using a deep learning model to score symbols in each AST node. The search continues until a complete program is found that passes the given sample input / output pairs. The search algorithm, described in Algorithm 1, starts with a priority queue containing a single empty program. At all times, we only keep the top QUEUE_N most probable trees built so far in the priority queue.

Figure 3: Example of tree search for the query "Given an array, find the sum of its elements". Rectangles represent nodes with a symbol, while circles represent empty nodes. We start with an empty tree on the far left. When that tree is popped from the priority queue, we consider each possible symbol for the first empty node in the pre-order traversal, and create a new tree for each. Two such trees are shown in this figure, for the symbols reduce and +. When the tree with reduce is popped, several new trees are generated by filling in the first empty node in the pre-order traversal of that tree, which is the first child of reduce. The first argument of reduce is an array, so only symbols that produce arrays are considered. Two trees for such symbols are shown in the figure: a, which is an argument, and filter. The search continues until either D trees are generated, or a tree that passes all the sample tests is found. Such a tree is shown on the far right.
If a program at the top of the queue is complete (no more nodes need to be added), we evaluate it on the given sample input / output examples. If the results from the current program match the expected outputs, the search is stopped. Alternatively, if more than MAX_VISITED programs have already been evaluated, the search stops without a program found.
Each program in the priority queue is represented as an incomplete tree with some nodes already synthesized and some still empty. When such an incomplete tree T is popped from the queue, we locate the first empty node n in the pre-order traversal of the tree, and use the Seq2Tree model to compute the probabilities of each possible symbol being in that node. At that point we already know the type of the symbol the node should contain, and thus only consider symbols of that type. For each such symbol s we construct a new tree by replacing n with s. We then push all the new trees, no matter how unlikely they are, into the priority queue, and then remove the least probable trees until the size of the priority queue is QUEUE_N or less.
In our experiments, evaluating the Seq2Tree model takes a comparable amount of time to cloning trees and pushing them to the queue, so optimizing both steps contributes to the performance of the search. We use the following optimization techniques: Persistent trees. After evaluating the model once, we need to clone the tree as many times as there are symbols considered for the first empty node. Storing all the trees in full can be memory consuming, and, depending on the language in which the search is implemented, allocating objects for the nodes can take a considerable amount of time. One way to save memory and time on cloning trees is to use persistent trees. When a new tree T_new is created from a tree T by introducing a new node s, it is sufficient to clone only the nodes on the path from the root to s, and replace their corresponding children on that path with the cloned versions. This takes time and memory proportional to the height of the tree, which for larger trees is significantly smaller than the total number of nodes in the tree. The tree is then represented as a pointer to the root node.
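A short sketch of the persistent-tree trick is given below: substituting a node clones only the root-to-node path, so trees in the beam share all unchanged subtrees. The class and function names are illustrative.

class Node:
    __slots__ = ("symbol", "children")
    def __init__(self, symbol, children=()):
        self.symbol = symbol
        self.children = tuple(children)

def substitute(root, path, new_node):
    """Return a new tree equal to `root` with the node at `path` (a
    sequence of child indices) replaced by `new_node`; only O(height)
    nodes are cloned, the rest are shared with the original tree."""
    if not path:
        return new_node
    i = path[0]
    children = list(root.children)
    children[i] = substitute(children[i], path[1:], new_node)
    return Node(root.symbol, children)  # clone only the current node

# Example: fill the first child of a `reduce` node with the argument `a`.
t0 = Node("reduce", [Node(None), Node(None), Node(None)])
t1 = substitute(t0, [0], Node("a"))
assert t0.children[0].symbol is None          # original tree untouched
assert t1.children[1] is t0.children[1]       # unchanged subtree is shared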
Batched search. During training we need to read trees of different shapes, which is a challenging problem, and we use dynamic batching to address it. During search we only invoke Seq2Tree on a single node, so multiple such invocations can be trivially batched. We batch invocations of Seq2Tree across tasks in the following way: we run the search for batch_size tasks simultaneously, and on each step pop the single most likely incomplete tree for each task, identify the empty node in each of them, and compute the probabilities of the symbols in all of them at once. This approach speeds up evaluation of the search when it is run on multiple tasks simultaneously, for example when evaluating accuracy on a held-out set. However, when only one task is being evaluated, batching across tasks is not applicable. We evaluated the following alternative: on each iteration of search, pop the top batch_size incomplete trees from the priority queue instead of just the single most likely one, identify the empty node in each of them, and compute the probabilities of symbols in all of them at once. This approach did not produce any noticeable speed-up, and in most cases even slowed the search down slightly. A possible reason is that if the model guiding the search is good, the correct incomplete tree will be the top one most of the time, so the number of model evaluations saved due to batched execution is very small, and the extra computation of evaluating the model on a batch instead of a single sample outweighs the time saved by those few extra evaluations.
AlgoLisp
In this section we describe a new dataset we prepared to train and evaluate models that learn to synthesize simple data processing programs.
AlgoLisp is a dataset of problem descriptions, corresponding implementations of each problem in the Lisp-inspired programming language described in section 3.1, and tests. Each problem has 10 tests, where each test consists of an input to be fed into the synthesized program and the expected output the program should produce. All problems are designed so that the output for each input is unique. See Table 1 for dataset statistics.
There are multiple existing datasets for the task of code synthesis from natural language. Some recent notable ones are description to bash command (Lin et al. (2017)) and description to SQL (Zhong et al. (2017)). To the best of our knowledge, no existing dataset is applicable to our problem, for reasons such as lacking an easy way of evaluating results, insufficient program complexity, or a size too small for deep learning.
Because the same problem can be solved with many different programs, a solution is considered correct if it produces correct output on all the tests for the given problem. For consistency and comparable results, we suggest two specific ways in which the tests are used during inference: no tests used at inference time (for deep-learning-only models), and using the first 3 tests for search with the remaining 7 tests kept as a holdout to evaluate correctness of the found program.
The dataset was synthesized with the following procedure (see Table 2 for examples). We first chose several dozen tasks from homework assignments for basic computer science and algorithms courses. For each task, we parameterized the assignment (e.g. in the statement "find all even elements in an array", even could be replaced by {prime, even, odd, divisible by three, positive, negative}) and the matching code. The final dataset is then a random combination of such tasks, where other tasks can be passed into the given statement as input (e.g. the two statements "find all even elements in an array" and "sort an array" would be combined into "find all even elements in an array and return them in sorted order"). A toy illustration of this kind of template instantiation and composition is sketched below.
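In the sketch, the predicate set, the DSL function names such as is_even and sort, and the template format are all hypothetical and chosen only to illustrate the generation scheme; they are not the dataset's actual generator.

import random

# Parameterizable predicate slot and an assumed DSL function for each.
PREDICATES = {"even": "is_even", "odd": "is_odd", "positive": "is_positive"}

def find_elements(pred):
    text = f"find all {pred} elements in an array"
    code = f"(filter a {PREDICATES[pred]})"
    return text, code

def sorted_of(task):
    # Compose another task's statement and code into this one.
    text, code = task
    return f"{text} and return them in sorted order", f"(sort {code})"

pred = random.choice(list(PREDICATES))
print(sorted_of(find_elements(pred)))
# e.g. ('find all odd elements in an array and return them in sorted order',
#       '(sort (filter a is_odd))')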
This dataset is designed for the task of learning basic composition and learning to use simple concepts and routines in the DSL. Because the number of homework assignments used for this dataset was relatively low, it is unlikely that models trained on this dataset would generalize to new types of algorithms.

Table 2: Examples from the AlgoLISP dataset. The first row is an example of a user-provided homework assignment with its program in our DSL. Subsequent rows are examples of synthesized tasks and programs, showing various properties of the generator: different text for the task, combination with other sub-problems (such as "elements in a that are present in b") and variation of task properties.

Homework assignment:
You are given an array a. Find the smallest element in a which is strictly greater than the minimum element in a.
(reduce (filter a (partial0 (reduce a inf) <)) inf min)

Synthesized examples:
Consider an array of numbers a, your task is to compute the largest element among values in a, which is strictly smaller than the maximum element among values in a.
(reduce (filter a (partial0 (reduce a -inf) >)) -inf max)

Given arrays of numbers a and b, compute the largest element among elements in a that are present in b, which is strictly less than the maximum element among elements in a that are present in b.
(reduce (filter (filter a (partial0 b contains)) (partial0 (reduce (filter a (partial0 b contains)) inf) <)) inf min)

Given an array of numbers, your task is to find the largest element among values in the given array that are divisible by two, which is strictly less than the maximum element among values in the given array that are divisible by two.
(reduce (filter (filter a is odd) (partial0 (reduce (filter a is odd) -inf) >)) -inf max)
To make sure that the models are learning to compose simpler concepts for novel problems, the dataset is split into train, dev, and test by the surface form of the code, ensuring that at training time the model has not observed any programs it will be evaluated on.
To evaluate neural networks and search-driven algorithms, we compare the output of the generated programs on a holdout set of tests for each task. Thus accuracy on this dataset is defined as Acc = N_C / N, where N is the total number of tasks and N_C is the number of tasks for which the synthesized solution passes all the holdout tests.
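For concreteness, a minimal sketch of this metric follows; the (program, tests) layout and the run_program interpreter (for example, the evaluate sketch in Section 3.1) are assumptions made for illustration.

def accuracy(tasks, run_program):
    """tasks: iterable of (program, holdout_tests) where holdout_tests
    is a list of (inputs, expected_output) pairs; a task counts as
    solved only if every holdout test passes."""
    solved = 0
    for program, tests in tasks:
        if all(run_program(program, inputs) == expected
               for inputs, expected in tests):
            solved += 1
    return solved / len(tasks)          # Acc = N_C / N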
Experiments
We implemented all models using PyTorch (http://pytorch.org) and used dynamic batching (e.g. Neubig et al. (2017); we used the implementation described in http://near.ai/articles/2017-09-06-PyTorch-Dynamic-Batching/) to implement batched tree decoding at training time. We train using ADAM (Kingma & Ba (2014)); embedding and recurrent layers have a hidden size of 100 units.
Placeholders are used to handle out-of-vocabulary (OOV) tokens (Hewlett et al. (2016)) in all neural networks. Placeholders are added to the vocabulary, increasing the vocabulary size from N_v to N_v + N_p, where N_p is a fixed number of placeholders, selected to be larger than the number of tokens in the input. The same OOV tokens from inputs and outputs are mapped to the same placeholder (selected at random from those not yet used), allowing the model to attend to and generate them at decoding time. Given the attention mechanism, this is very similar to Pointer Networks (Vinyals et al. (2015)).
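A minimal sketch of this placeholder scheme follows; the function name and vocabulary handling details are assumptions made for illustration.

import random

def map_with_placeholders(tokens, vocab, n_placeholders, assignment=None):
    """Map tokens to ids; the same OOV token gets the same placeholder id
    across calls that share the `assignment` dictionary."""
    assignment = {} if assignment is None else assignment
    free = [i for i in range(n_placeholders)
            if i not in assignment.values()]
    random.shuffle(free)
    ids = []
    for tok in tokens:
        if tok in vocab:
            ids.append(vocab[tok])
        else:
            if tok not in assignment:
                assignment[tok] = free.pop()    # random unused placeholder
            ids.append(len(vocab) + assignment[tok])
    return ids, assignment

vocab = {"given": 0, "an": 1, "array": 2}
src_ids, asg = map_with_placeholders(["given", "an", "array", "xs"], vocab, 10)
tgt_ids, _ = map_with_placeholders(["reduce", "xs"], vocab, 10, asg)
# "xs" receives the same placeholder id in both sequences.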
Results
We compare our model with an attentional sequence-to-sequence model similar to Luong et al. (2015). Sequence-to-sequence models have shown near state-of-the-art results on machine translation, question answering and semantic parsing. An additional baseline we compare to is a model that synthesizes a program from examples only: IO2Seq, inspired by RobustFill (Devlin et al. (2017)). The model reads all inputs and outputs via a byte encoder for each given test case, max-pools their encodings into a single vector and performs sequential decoding of the program.
The pattern of how accuracy changes with the number of trees visited during search reflects the quality of the neural network. In general, given no limit on MAX_VISITED, Search will explore the entirety of the program space and find all programs that solve the sample tests; in our case this space contains on the order of 10^{2D} programs, where D is the depth of the programs explored. To measure the improvement that the neural network model brings to search, we compare the model performance at different thresholds of MAX_VISITED. See Figure 4 for results. As expected, the accuracy of the model grows if the search gets to explore more trees. Interestingly, the growth in accuracy of Seq2Tree+Search slows down very quickly. This is expected if the neural network is good, since it then predicts correct symbols with high accuracy, and therefore the correct tree is more likely to be found early during the search.
The depth of the program is a reasonable proxy for the complexity of the problem. The right part of Figure 4 shows the accuracy of the models as a function of gold program depth. Note that there are relatively few programs with depth below 5 in the dev set, which leads to higher variance. As expected, accuracy decreases as tree depth grows, since more nodes need to be predicted.
Conclusion
We have presented an algorithm for program synthesis from a textual specification and a sample of input / output pairs, which combines a deep learning network, for understanding language and general programming patterns, with a conventional search technique that allows finding a correct program in the discrete space that neural models struggle with. We presented a semi-synthetic dataset to empirically evaluate learning of program composition and usage of programming constructs. Our empirical results show an improvement from combining structured tree decoding and search over an attentional sequence-to-sequence model.
There remain some limitations, however. Our training data is currently semi-generated and contains only a limited set of problem types. It is prohibitively expensive to collect a human-annotated set with a large quantity of tasks per problem type, so finding a way to learn from few examples per problem type is crucial. Additionally, in many practical use cases there will be no input / output examples, requiring interaction with the user to resolve ambiguity and improved techniques for structured output decoding in neural networks.
Figure 2: Example of the Seq2Tree encoder-decoder model for "given an array, return values divisible by two". The left part is the encoder (embeddings + GRU cell); the right part is the doubly-recurrent decoder with attention.

Figure 4: Analysis of results on the dev set. The left plot shows accuracy of the model as MAX_VISITED in the Search algorithm varies. The right plot shows accuracy stratified by depth of the target code tree.
Algorithm 1 Tree-Beam Search
 1: queue ← HeapCreate()
 2: model ← Seq2Tree(task description)
 3: trees_visited ← 0
 4: HeapPush(queue, EMPTY_TREE)
 5: while HeapLength(queue) > 0 and trees_visited < MAX_VISITED do
 6:     cur_tree ← HeapPopMax(queue)
 7:     empty_node ← FindFirstEmptyNode(cur_tree)
 8:     if empty_node = null then
 9:         trees_visited ← trees_visited + 1
10:         if RunTests(cur_tree, sample_tests) = PASS then return cur_tree
11:         else continue
12:     for all (prob, symbol) in GetProbs(model, empty_node) do    ⊲ in decreasing order of probabilities
13:         if prob < THRESHOLD then break
14:         if SymbolMatchesNodeType(symbol, empty_node) then
15:             new_tree ← CloneTreeAndSubstitute(cur_tree, empty_node, symbol)
16:             HeapPush(queue, new_tree)
17:     while HeapLength(queue) > QUEUE_N do
18:         HeapPopMin(queue)
19: return null
Table 1: AlgoLisp statistics.

                 Train     Dev      Test
# tasks          79,214    9,352    10,940
Avg text len     38.17     39.95    37.58
Avg code depth   7.61      8.23     7.97
Avg code len     24.33     29.31    27.16
Vocab size       230
Table 3: Performance on AlgoLisp. Accuracy is defined in section 4.

Model                          Dev Acc   Test Acc
Attentional Seq2Seq            54.4%     54.1%
Attentional Seq2Seq + Search   72.6%     72.3%
IO2Seq                         2.5%      2.2%
IO2Seq + Search                13.3%     12.8%
Search                         0.5%      0.6%
Seq2Tree                       61.2%     61.0%
Seq2Tree + Search              86.1%     85.8%
Table 3 presents results on the AlgoLisp dataset for the Attentional Seq2Seq, IO2Seq and Seq2Tree models, with and without applying the search described in section 3.3. Additionally, the performance of Search on its own is presented, showing the result of searching through the program space without machine learning model guidance, validating only on input / output examples. Explicitly modeling the tree structure of code in Seq2Tree improves upon the attentional sequence-to-sequence model by 11%. Search on its own finds a very limited number of programs under the same limit MAX_VISITED = 100 (see section 5.2 for details) as Seq2Tree + Search. The IO-only model IO2Seq also does not perform particularly well, even when augmented with search, as there are many problems for which the test cases alone are not enough to recover the underlying program. The final model, Seq2Tree + Search, combines the neural model and search and improves to the best result: 85.8%.

5.2 Analysis
(Figure 4 plots: left, Accuracy vs. MAX_VISITED for Seq2Tree+Search and Att. Seq2Seq+Search; right, Accuracy vs. Program depth for Att. Seq2Seq, Att. Seq2Seq+Search, Search, and Seq2Tree+Search.)
References

David Alvarez-Melis and Tommi S. Jaakkola. Tree-structured decoding with doubly-recurrent neural networks. 2016.

Matej Balog, Alexander L. Gaunt, Marc Brockschmidt, Sebastian Nowozin, and Daniel Tarlow. DeepCoder: Learning to write programs. CoRR, abs/1611.01989, 2016. URL http://arxiv.org/abs/1611.01989.

Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. Semantic parsing on Freebase from question-answer pairs. In EMNLP, 2013.

Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. Deep API programmer: Learning to program with APIs. CoRR, abs/1704.04327, 2017.

Aditya Desai, Sumit Gulwani, Vineet Hingorani, Nidhi Jain, Amey Karkare, Mark Marron, Sailesh R, and Subhajit Roy. Program synthesis using natural language. May 2016. URL https://www.microsoft.com/en-us/research/publication/program-synthesis-using-natural-language-2/.

Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, and Pushmeet Kohli. RobustFill: Neural program learning under noisy I/O. In ICML, 2017.

Li Dong and Mirella Lapata. Language to logical form with neural attention. CoRR, abs/1601.01280, 2016.

Kevin Ellis and Sumit Gulwani. Learning to learn programs from examples: Going beyond program structure. May 2017. URL https://www.microsoft.com/en-us/research/publication/learning-learn-programs-examples-going-beyond-program-structure/.

Alexander L. Gaunt, Marc Brockschmidt, Rishabh Singh, Nate Kushman, Pushmeet Kohli, Jonathan Taylor, and Daniel Tarlow. TerpreT: A probabilistic programming language for program induction. arXiv preprint arXiv:1608.04428, 2016.

Alex Graves, Greg Wayne, and Ivo Danihelka. Neural Turing machines. CoRR, abs/1410.5401, 2014.

Cordell Green. Application of theorem proving to problem solving. In Proceedings of the 1st International Joint Conference on Artificial Intelligence (IJCAI'69), pp. 219-239, San Francisco, CA, USA, 1969. Morgan Kaufmann Publishers Inc. URL http://dl.acm.org/citation.cfm?id=1624562.1624585.

Sumit Gulwani. FlashExtract: A framework for data extraction by examples. June 2014. URL https://www.microsoft.com/en-us/research/publication/flashextract-framework-data-extraction-examples/.

Sumit Gulwani, William R. Harris, and Rishabh Singh. Spreadsheet data manipulation using examples. Commun. ACM, 55:97-105, 2012.

Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey, and David Berthelot. WikiReading: A novel large-scale language understanding task over Wikipedia. arXiv preprint arXiv:1608.03542, 2016.

Armand Joulin and Tomas Mikolov. Inferring algorithmic patterns with stack-augmented recurrent nets. In NIPS, 2015.

Łukasz Kaiser and Ilya Sutskever. Neural GPUs learn algorithms. arXiv preprint arXiv:1511.08228, 2015.

Diederik Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.

Chen Liang, Jonathan Berant, Quoc Le, Kenneth D. Forbus, and Ni Lao. Neural symbolic machines: Learning semantic parsers on Freebase with weak supervision. CoRR, abs/1611.00020, 2016. URL http://arxiv.org/abs/1611.00020.

Xi Victoria Lin, Chenglong Wang, Deric Pang, Kevin Vu, Luke Zettlemoyer, and Michael D. Ernst. Program synthesis from natural language using recurrent neural networks. Technical Report UW-CSE-17-03-01, University of Washington Department of Computer Science and Engineering, Seattle, WA, USA, March 2017.

Wang Ling, Edward Grefenstette, Karl Moritz Hermann, Tomás Kociský, Andrew Senior, Fumin Wang, and Phil Blunsom. Latent predictor networks for code generation. CoRR, abs/1603.06744, 2016. URL http://arxiv.org/abs/1603.06744.

Minh-Thang Luong, Hieu Pham, and Christopher D. Manning. Effective approaches to attention-based neural machine translation. arXiv preprint arXiv:1508.04025, 2015.

Arvind Neelakantan, Quoc V. Le, Martin Abadi, Andrew McCallum, and Dario Amodei. Learning a natural language interface with neural programmer. arXiv preprint arXiv:1611.08945, 2016.

Graham Neubig, Yoav Goldberg, and Chris Dyer. On-the-fly operation batching in dynamic computation graphs. arXiv preprint arXiv:1705.07860, 2017.

Emilio Parisotto, Abdel-rahman Mohamed, Rishabh Singh, Lihong Li, Dengyong Zhou, and Pushmeet Kohli. Neuro-symbolic program synthesis. CoRR, abs/1611.01855, 2016.

Oleksandr Polozov and Sumit Gulwani. FlashMeta: A framework for inductive program synthesis. ACM SIGPLAN Notices, 50(10):107-126, 2015.

Maxim Rabinovich, Mitchell Stern, and Dan Klein. Abstract syntax networks for code generation and semantic parsing. In ACL, 2017.

Scott Reed and Nando de Freitas. Neural programmer-interpreters. arXiv preprint arXiv:1511.06279, 2015.

Rishabh Singh. BlinkFill: Semi-supervised programming by example for syntactic string transformations. PVLDB, 9:816-827, 2016.

Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015.

Pengcheng Yin and Graham Neubig. A syntactic neural model for general-purpose code generation. In ACL, 2017.

V. Zhong, C. Xiong, and R. Socher. Seq2SQL: Generating structured queries from natural language using reinforcement learning. arXiv e-prints, August 2017.
epl draft

Word Sense Disambiguation Via High Order of Learning in Complex Networks

Thiago C. Silva (thiagoch@icmc.usp.br) - Institute of Mathematics and Computer Science, University of São Paulo, P. O. Box 369, 13560-970, São Carlos, São Paulo, Brazil

Diego R. Amancio (diego.amancio@usp.br) - Institute of Physics of São Carlos, University of São Paulo, P. O. Box 369, 13560-970, São Carlos, São Paulo, Brazil

* Author to whom any correspondence should be addressed.

PACS 89.75.Hc - Networks and genealogical trees
PACS 89.20.Ff - Computer science and technology
PACS 02.50.Sk - Multivariate analysis

Abstract - Complex networks have been employed to model many real systems and as a modeling tool in a myriad of applications. In this paper, we use the framework of complex networks to the problem of supervised classification in the word disambiguation task, which consists in deriving a function from the supervised (or labeled) training data of ambiguous words. Traditional supervised data classification takes into account only topological or physical features of the input data. On the other hand, the human (animal) brain performs both low and high level orders of learning and it has facility to identify patterns according to the semantic meaning of the input data. In this paper, we apply a hybrid technique which encompasses both types of learning in the field of word sense disambiguation and show that the high level order of learning can really improve the accuracy rate of the model. This evidence serves to demonstrate that the internal structures formed by the words do present patterns that, generally, cannot be correctly unveiled by only traditional techniques. Finally, we exhibit the behavior of the model for different weights of the low and high level classifiers by plotting decision boundaries. This study helps one to better understand the effectiveness of the model.
Introduction. - Language is present everywhere and has pervaded all aspects of our daily life since the dawn of humanity. Although it has been widely studied, several issues remain open, such as the explanation of the emergence of fundamental laws like Zipf's law [1]. Nowadays, language is not studied exclusively by linguists or psychologists: physicists have borrowed some of their tools to study emergent linguistic patterns. For example, complex systems [2], which are characterized by agents interacting in a non-trivial way, have been used to model interactions between words or segments of a text [3-5]. In the last few years, complex networks (CN) have been used to study both theoretical and practical aspects of language. Examples of recent theoretical findings using such a robust model include the verification of universal properties [3] and the modeling of adjacency networks. From the practical perspective, complex networks have been used to summarize texts [6], to assess the quality of machine translators [7], and to group and classify data [8,9], among other applications.
In the current paper, we assess the ability of complex networks for the Word Sense Disambiguation (WSD) task, i.e., the discrimination of which of the meanings is used in a given context for a word that has multiple meanings. The importance of the WSD task stems from the essential role it plays in the development of the so-called Semantic Web; the WSD task is also essential for machine translation research [7]. Although a myriad of strategies have been developed so far, none of them has evaluated the usefulness of complex networks both to model texts and to recognize patterns arising from the topological and semantic relationships among senses. For this reason, we apply a novel generalized methodology based on the concept of complex networks [10] to the field of WSD. First, networks were employed to model the relationships between words in written texts, from which it was possible to characterize both the semantic and topological properties of words inserted in a given semantic context (see Section 1.2 of the Supplementary Information (SI)). Then, the similarity relationships given by such a characterization were modeled in the form of networks in order to extract and exploit patterns among the data in the networked representation. Interestingly, assuming that the description of senses in the resulting space is not made up of isolated points, but instead tends to form certain patterns, we found that it is possible to improve the discrimination when comparing with the performance achieved by traditional classifiers.
Overview of the Technique. - In this section, we review the hybrid high level technique [10]. Consider a training set X_training = {(x_1, y_1), ..., (x_l, y_l)}, where the first component of the ith tuple, x_i = (f_1, ..., f_d), denotes the attributes of the d-dimensional ith training instance. The second component, y_i ∈ L = {L_1, ..., L_n}, characterizes the class label or target associated with that training instance. The goal here is to learn a mapping x → y. Usually, the constructed classifier is checked by using a test set X_test = {x_{l+1}, ..., x_{l+u}}, in which labels are not provided. In this case, each data item is called a test instance.
In the supervised learning scheme, there are two phases of learning: the training phase and the classification phase. In the training phase, the classifier is induced or trained by using the training instances (labeled data) in X_training. In the classification phase, the labels of the test instances in X_test are predicted using the induced classifier. Below, these two phases are presented in detail.
In the training phase, the data in the training set are mapped into a graph G using a network formation technique g : X_training → G = ⟨V, E⟩, where V = {1, ..., V} is the set of vertices and E is the set of edges. Each vertex in V represents a training instance in X_training. As will be described later, the pattern formation of the classes will be extracted using the complex topological features of this networked representation. The edges in E are created using a combination of the ε-radius and k-nearest neighbors (κNN) graph formation techniques. In their original versions, the ε-radius technique creates a link between two vertices if they are within a distance ε, while the κNN sets up a link between vertices i and j if i is one of the k nearest neighbors of j or vice versa. Both approaches have their limitations when sparsity or density is a concern. For sparse regions, the κNN forces a vertex to connect to its k nearest vertices, even if they are far apart; in this scenario, one can say that the neighborhood of this vertex would contain dissimilar points. Equivalently, improper ε values could result in disconnected components, sub-graphs, or isolated singleton vertices.
The network is constructed using these two traditional graph formation techniques in a combined form. The neighborhood of a vertex x_i is given by:

N(x_i) = ε_r(x_i, y_{x_i}),  if |ε_r(x_i, y_{x_i})| > k;  otherwise, N(x_i) = κ(x_i, y_{x_i}),

where y_{x_i} denotes the class label of the training instance x_i, ε_r(x_i, y_{x_i}) returns the set {x_j, j ∈ V : d(x_i, x_j) < ε ∧ y_{x_i} = y_{x_j}}, and κ(x_i, y_{x_i}) returns the set containing the k nearest vertices of the same class as x_i. Note that the ε-radius technique is used for dense regions (|ε_r(x_i, y_{x_i})| > k), while the κNN is employed for sparse regions. With this mechanism, it is expected that each class will have a unique and single graph component.
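As an illustration, a minimal sketch of this combined formation rule follows, assuming Euclidean distance and the networkx library; the parameters ε and k, the data layout, and the function name are placeholders for this example.

import numpy as np
import networkx as nx

def build_training_network(X, y, eps, k):
    """X: (n, d) array of training instances; y: length-n class labels."""
    n = len(X)
    # Pairwise Euclidean distances between all training instances.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        same = [j for j in range(n) if j != i and y[j] == y[i]]
        radius = [j for j in same if D[i, j] < eps]
        if len(radius) > k:
            neigh = radius                                   # dense: ε-radius
        else:
            neigh = sorted(same, key=lambda j: D[i, j])[:k]  # sparse: κNN
        G.add_edges_from((i, j) for j in neigh)
    return G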
For the sake of clarity, Fig. 1a shows a schematic of how the network looks for a three-class problem after the training phase has been completed. In this case, each class holds a representative component. In the figure, the surrounding circles denote these components: G_C1, G_C2, and G_C3.
In the classification phase, the unlabeled data items in X_test are presented to the classifier one by one. In contrast to the training phase, the class labels of the test instances are unknown. In this way, each test instance is inserted into the network using only the traditional ε-radius technique, meaning it is connected to every vertex within this radius, no matter to which class each vertex in this region belongs. Once the data item is inserted, each class analyzes, in isolation, its impact on the respective class component using its complex topological features. In the high level model, each class retains an isolated graph component. Each of these components calculates the changes that occur in its pattern formation with the insertion of this test instance. If slight or no changes occur, then the test instance is said to be in compliance with that class pattern. As a result, the high level classifier yields a great membership value for that test instance on that class. Conversely, if these changes dramatically modify the class pattern, then the high level classifier produces a small membership value on that class. These changes are quantified via network measures, each of which numerically translates the organization of the component from a local to a global fashion. As we will see, the average degree, clustering coefficient, and assortativity measures are employed for the high level order of learning.
For the sake of clarity, Fig. 1b exhibits a schematic of how the classification process is performed. The test instance (triangle-shaped) is inserted using the traditional ε-radius technique. Due to its insertion, the class components G_C1, G_C2, and G_C3 become altered; each of them is a component surrounded by a circle in Fig. 1b. It may occur that some class components do not share any links with this test instance. In the figure, this happens with G_C3; in this case, we say that the test instance does not comply with the pattern formation of that class component. For the components that share at least a link (G_C1 and G_C2), each of them calculates, in isolation, the impact on its pattern formation caused by the insertion of the test instance. For example, when we check the compliance of the test instance with the component G_C1, the connections from the test instance to the component G_C2 are ignored, and vice versa.
Concurrently with the prediction made by the high level classifier, a low level classifier also predicts the membership of the test instance for every class in the problem. The way it predicts depends on the choice of the low level classifier. In the end, the predictions produced by both classifiers are combined via a linear combination to derive the prediction of the high level framework (meta-learning). Once the test instance gets classified, it is either discarded or incorporated into the training set with the corresponding predicted label. In the second case, the classifier must be retrained. Note that, in either of the two situations, each class is still represented by a single graph component.
The High Level Classification. - The hybrid classifier M consists of a convex combination of two terms: (i) a low level classifier (C4.5 [11], kNN [11] or Naive Bayes [11]); and (ii) a high level classifier, which is responsible for classifying a test instance according to its pattern formation with the data. Mathematically, the membership of the test instance x_i ∈ X_test with respect to the class j ∈ L, here written as M_i^{(j)}, is given by:
M_i^{(j)} = (1 − λ) T_i^{(j)} + λ C_i^{(j)},    (1)
where T_i^{(j)} ∈ [0, 1] and C_i^{(j)} ∈ [0, 1] denote the memberships of the test instance x_i with respect to class j produced by the low and high level classifiers, respectively, and λ ∈ [0, 1] weights the influence of each classifier. If C_i^{(j)} = 1, the ith data item fully complies with the pattern formation of class j; if C_i^{(j)} = 0, we may infer that the ith data item does not present any similarities nor complies with the pattern formation of class j. Values in between these two extremes lead to natural uncertainty in the classification process and are found most of the time during a classification task. Note that Eq. (1) generates fuzzy outputs. Moreover, it is worth indicating that, when λ = 0, Equation (1) reduces to a common low level classifier. A test instance receives the label of the class j that maximizes (1).
The inference of pattern formation, which is used by the classifier C, is processed using the generated network. The motivation for using networks is that they can describe topological structures among the data items. These networks are constructed such that: (i) each class is an isolated subgraph (component), and (ii) after the insertion of a new test instance, each class still retains a representative and unique component. With that in mind, the pattern formation of the data is quantified through a combination of network measures developed in the complex network literature. These measures are chosen so as to cover relevant high level aspects of the class component. Suppose that K measures are selected to compose the high level classifier C. Mathematically, the membership of the test instance $x_i \in X_{\text{test}}$ with respect to the class $j \in L$ yielded by the high level classifier, here written as $C_i^{(j)}$, is given by:
$$C_i^{(j)} = \frac{\sum_{u=1}^{K} \alpha(u)\left[1 - f_i^{(j)}(u)\right]}{\sum_{g \in L}\sum_{u=1}^{K} \alpha(u)\left[1 - f_i^{(g)}(u)\right]}, \qquad (2)$$
where $\alpha(u) \in [0, 1]$, $\forall u \in \{1, \ldots, K\}$, with $\sum_{u=1}^{K} \alpha(u) = 1$, are user-controllable coefficients that indicate the influence of each network measure in the classification process, and $f_i^{(j)}(u)$ is a function that depends on the uth network measure applied to the ith data item with regard to class j. This function is responsible for indicating whether the test instance $x_i$ presents the same patterns as class j or not. The denominator in (2) has been introduced solely for normalization purposes.
The function $f_i^{(j)}(u)$ has the following general closed form:
$$f_i^{(j)}(u) = \Delta G_i^{(j)}(u)\, p^{(j)}, \qquad (3)$$
where $\Delta G_i^{(j)}(u) \in [0, 1]$ is the variation of the uth network measure that occurs on the component representing class j if $x_i$ joins it, and $p^{(j)} \in [0, 1]$ is the proportion of data items pertaining to class j. Recalling that each class is represented by its own component, the strategy to check the pattern compliance of a test instance is to examine whether its insertion causes a large variation of the network measures of the class component. In other words, if there is only a small change in the network measures, the test instance is in compliance with the other data items that compose that class component, i.e., it follows the same pattern as the original members of that class. On the other hand, if its insertion is responsible for a significant variation of the component's network measures, then the test instance probably does not belong to that class.
We now explain the role of $p^{(j)} \in [0, 1]$ in (3). In real-world databases, unbalanced classes are frequently encountered: a database often encompasses several classes of different sizes. A large portion of the network measures are very sensitive to the size of the components. In an attempt to soften this problem and cancel out the effects of the distinct component sizes, (3) introduces the term $p^{(j)}$, the proportion of vertices belonging to class j.
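To make Eqs. (1)-(3) concrete, the following minimal NumPy sketch computes the memberships for one test instance. It assumes the per-measure variations ΔG have already been obtained from the network, and all names are illustrative rather than taken from the authors' implementation.

```python
import numpy as np

def high_level_membership(delta_G, proportions, alpha):
    """Eqs. (2)-(3) for a single test instance x_i.

    delta_G:     (n_classes, K) array; delta_G[j, u] is the variation of the
                 u-th network measure of component j when x_i joins it.
    proportions: (n_classes,) array of p^(j), the fraction of items in class j.
    alpha:       (K,) array of measure weights summing to 1.
    """
    f = delta_G * proportions[:, None]        # Eq. (3): f(u) = dG(u) * p^(j)
    scores = (alpha * (1.0 - f)).sum(axis=1)  # numerator of Eq. (2), per class
    return scores / scores.sum()              # normalization over all classes

def hybrid_membership(T, C, lam=0.5):
    """Eq. (1): convex combination of low level (T) and high level (C) outputs."""
    return (1.0 - lam) * T + lam * C

# The predicted label is the class maximizing Eq. (1):
# label = np.argmax(hybrid_membership(T, C, lam))
```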
Composition of the High Level Classifier.
The network measurements that compose the high level classifier are the assortativity [12], the clustering coefficient, and the average degree. These three measures have been chosen for the following reasons: the average degree captures strictly local scalar information about each vertex in the network; the clustering coefficient of each vertex captures local structures by counting triangles formed by the current vertex and any two of its neighbors; and the assortativity coefficient considers not only the current vertex and its neighbors, but also the second level of neighbors (neighbors of neighbors), the third level, and so on. The three measures thus characterize the network's topological properties on a scale from local to global, and their combination is expected to capture the pattern formation of the underlying network in a systematic manner. Details regarding these three measurements are given in the SI.
Results and Discussion. -First, the methodology is applied to an artificial database in order to better understand its functionality. Afterwards, the WSD problem is analyzed. The discussion of the observed results is given below.
High Level Applied to a Toy Database. As an introductory example, consider the toy data set depicted in Fig. 2, where there are two classes: the red or "star" (52 vertices) and the green or "square" (276 vertices) classes. This example serves as a gist of how the hybrid classifier draws its decisions. In the training and classification phases, we employ κ = 3 and ε = 0.04 for the network construction. The fuzzy SVM [13] with RBF kernel (C = 70 and γ = 2^{-1}) is adopted as the low level classifier. By inspection of the figure, the red or "star" class displays a well-defined pattern, a grid or lattice, whereas the green or "square" class does not exhibit any well-established pattern. Here, the goal is to classify the cross-shaped data items (test set) one by one using only the information of the training set. Figures 2a, 2b, and 2c exhibit the decision boundaries of the two classes when λ = 0, λ = 0.5, and λ = 0.8, respectively. When λ = 0, only the SVM prediction is used by the hybrid technique. In this case, one can see that the five data items are not correctly classified. Notice that the decision boundaries are pushed near the red or "star" class by virtue of the large number of green or "square" items in the vicinity. Now, when λ = 0.5, the SVM and high level classifier predictions are utilized with the same intensity. In this situation, the decision boundaries are dragged toward the green or "square" class because of the strong pattern that the red or "star" class exhibits. We can think of this phenomenon as a clash between the two decision boundaries: as λ increases, the more structured class tends to possess more decision power and, consequently, is able to reduce the effective area of the competing class. For example, when λ = 0.8, the organizational features of the red or "star" class are so salient that its effective area invades the high density region of the green or "square" class. In the latter two cases, the hybrid high level technique successfully classifies the cross-shaped data items. In summary, the concept of classification is altered depending on the value of the compliance term. A small compliance term causes the final decision of the hybrid classifier to be rooted in the traditional assumptions of low level classifiers. When a large compliance term is used, the salient characteristic that the hybrid classifier attempts to emphasize is the pattern that the classes display. The stronger the structural pattern of a class, the wider the delineated decision boundary for that class will be.
High Level Applied to Word Sense Disambiguation. The efficiency of the high level classifier is also verified in a real-world application. In this case, we aim at discriminating senses of ambiguous words (i.e., words with the same lexical form but with different senses) 3 . Using the database presented in Ref. [14], two approaches for characterizing senses are employed: the topological and the semantic approach. In the former, each occurrence of a word is characterized by its local structure in the word adjacency network [15]. In the latter, each word sense is represented by the frequency of the w nearby words. Details of these two methodologies are given in the SI. Table 1 shows the results obtained for the five ambiguous words in the topological approach. Similarly, Table 2 depicts the results obtained for the semantic approach. In both cases, when the suitable value of the parameter λ is selected, it is possible to improve the classification efficiency achieved by the low level classifiers (C4.5, kNN, and Naive Bayes). Moreover, because λ is different from zero in most cases, one can infer that there is a pattern in the data organized in the attribute space. Interestingly, one can conclude that the structural organization of words in complex networks is useful for discriminating senses not only when modeling the relationship of words in a text, but also when modeling the relationship between words in the attribute space. In other words, when word senses are analyzed within the complex network framework, patterns emerge both in the organization of words in the adjacency network (before characterization) and in the network built in the attribute space. These unveiled patterns, in turn, cannot be properly discovered by traditional techniques. This reasoning explains the performance boost that occurred when a λ ≠ 0 was employed in the experiments.
Conclusion. -In the current paper, we have applied a novel methodology of supervised data classification to the field of word sense disambiguation. The hybrid classifier is composed of a combination of traditional (low level) and pattern-based (high level) classifiers. The latter uses a network representation to search for topological patterns in the data. From the analysis of the experiments, we have found that the inclusion of the high level term was responsible for improving the classification ability both on artificial and on real-world data. Specifically, in the latter, the methodology devised in Ref. [14] was improved as a consequence of the fact that words conveying the same meaning display organizational patterns not only at the textual level but also in the attribute space. This argument strengthens the claim that networks constructed from words are not totally disorganized. Instead, each set of words tends to form patterns that uniquely describe it. The hybrid framework precisely attempts to extract these hidden patterns that are cloaked within the word relationships (edges) in the network.
Because the hybrid high level technique is totally generic, we intend to use it in other real-world applications beyond word sense disambiguation. In addition, a methodology for automatically finding the best value of the compliance term will also be the subject of our future studies.

* * *

TCS (2009/12329-1) and DRA (2010/00927-9) acknowledge the financial support from FAPESP.
Table 2: Semantic approach for discriminating senses of ambiguous words. Senses were characterized according to the frequency of the n = 5 neighbors of the ambiguous word [14], and the discrimination of senses was performed with low level (kNN, C4.5, and Naive Bayes) and high level classifiers. Acc. Rate represents the accuracy rate obtained with an evaluation based on the 10-fold cross-validation technique [16]. The p-value refers to the likelihood of obtaining the same accuracy rate with a random classifier (see Ref. [14] for details). Note that the high level technique always outperforms the traditional low level classification.
Fig. 1: (a) Schematic of the network in the training phase. (b) Schematic of how the classification inference is done.
2 A brief description of the low level classifiers is given in the SI.
Fig. 2: Behavior of the decision boundaries as λ varies in the toy data. Decision boundaries when (a) λ = 0; (b) λ = 0.5; and (c) λ = 0.8.
Table 1: Structural approach for discriminating senses of ambiguous words. Senses were characterized according to topological CN measurements [14], and the discrimination of senses was performed with low level (kNN, C4.5, and Naive Bayes) and high level classifiers. Note that the high level technique always outperforms the traditional low level classification.
The Supplementary Information (SI) is hosted at http://dl.dropbox.com/u/2740286/epl_SI_9apr.pdf.
3 For example, the word "bear" might be related either to a large mammal of the family Ursidae or to the verb "carry".
[1] Manning C. D. and Schütze H., Foundations of Statistical Natural Language Processing (MIT Press) 1999.
[2] Page S. E., Diversity and Complexity (Princeton University Press) 2010.
[3] Ferrer i Cancho R. and Solé R. V., Procs. Natl. Acad. Sci. USA, 100 (2003), 788.
[4] Liu H. and Xu C., Europhysics Letters, 93 (2011), 28005.
[5] Cancho R. F., Europhysics Letters, 76 (2006), 1228.
[6] Amancio D. R., Nunes M. G. V., Oliveira Jr. O. N. and Costa L. da F., Physica A, 391 (2012), 1855.
[7] Weaver W., Machine Translation of Languages: Fourteen Essays (Technology Press of MIT, Cambridge, MA, and John Wiley & Sons, New York, NY) 1955.
[8] Silva T. C. and Zhao L., IEEE Transactions on Neural Networks and Learning Systems, 23(3) (2012), 451-466.
[9] Silva T. C. and Zhao L., IEEE Transactions on Neural Networks and Learning Systems, 23(3) (2012), 385-398.
[10] Silva T. C. and Zhao L., IEEE Transactions on Neural Networks and Learning Systems, 23(6) (2012), to appear.
[11] Bishop C. M., Pattern Recognition and Machine Learning (Springer) 2006.
[12] Newman M. E. J., Phys. Rev. Lett., 89 (2002), 208701.
[13] Cortes C. and Vapnik V. N., Machine Learning, 20 (1995).
[14] Amancio D. R., Oliveira Jr. O. N. and Costa L. da F., Europhysics Letters, 98 (2012), 18002.
[15] Amancio D. R., Altmann E. G., Oliveira Jr. O. N. and Costa L. F., New Journal of Physics, 390 (2011), 13.
[16] Kohavi R., Proceedings of the 14th International Joint Conference on Artificial Intelligence, 2 (1995), 12.
[
"EFFICIENT SEQUENCE PACKING WITHOUT CROSS-CONTAMINATION: ACCELERATING LARGE LANGUAGE MODELS WITHOUT IMPACTING PERFORMANCE",
"EFFICIENT SEQUENCE PACKING WITHOUT CROSS-CONTAMINATION: ACCELERATING LARGE LANGUAGE MODELS WITHOUT IMPACTING PERFORMANCE"
] | [
"Mario Michael Krell mariok@graphcore.ai ",
"Matej Kosec matejk@graphcore.ai ",
"Sergio P Perez sergiop@graphcore.ai ",
"Andrew Fitzgibbon ",
"\nGraphcore Inc. United States of America\nGraphcore Inc. United States of America\nGraphcore Inc\nUnited Kingdom\n",
"\nGraphcore Inc\nUnited Kingdom\n"
] | [
"Graphcore Inc. United States of America\nGraphcore Inc. United States of America\nGraphcore Inc\nUnited Kingdom",
"Graphcore Inc\nUnited Kingdom"
] | [] | Effective training of today's large language models (LLMs) depends on large batches and long sequences for throughput and accuracy. To handle variable-length sequences on hardware accelerators, it is common practice to introduce padding tokens, so that all sequences in a batch have the same length. We show in this paper that the variation in sequence lengths in common NLP datasets is such that up to 50% of all tokens can be padding. In less common, but not extreme, cases (e.g. GLUE-cola with sequence length 128), the ratio is up to 89%. Existing methods to address the resulting inefficiency are complicated by the need to avoid 'cross-contamination' in self-attention, by a reduction in accuracy when sequence ordering information is lost, or by customized kernel implementations only valid for specific accelerators. This paper introduces a new formalization of sequence packing in the context of the well-studied bin packing problem, and presents new algorithms based on this formulation which, for example, confer a 2x speedup for phase 2 pre-training in BERT. We show how existing models can be adapted to ensure mathematical equivalence between the original and packed models, meaning that packed models can be trained with existing pre-training and fine-tuning practices. | null | [
"https://export.arxiv.org/pdf/2107.02027v2.pdf"
] | 252,735,335 | 2107.02027 | 85cac89ba01a07f3dbf6dbb1e0c56067a3105714 |
EFFICIENT SEQUENCE PACKING WITHOUT CROSS-CONTAMINATION: ACCELERATING LARGE LANGUAGE MODELS WITHOUT IMPACTING PERFORMANCE
Mario Michael Krell mariok@graphcore.ai
Matej Kosec matejk@graphcore.ai
Sergio P Perez sergiop@graphcore.ai
Andrew Fitzgibbon
Graphcore Inc. United States of America
Graphcore Inc. United States of America
Graphcore Inc
United Kingdom
Graphcore Inc
United Kingdom
EFFICIENT SEQUENCE PACKING WITHOUT CROSS-CONTAMINATION: ACCELERATING LARGE LANGUAGE MODELS WITHOUT IMPACTING PERFORMANCE
Effective training of today's large language models (LLMs) depends on large batches and long sequences for throughput and accuracy. To handle variable-length sequences on hardware accelerators, it is common practice to introduce padding tokens, so that all sequences in a batch have the same length. We show in this paper that the variation in sequence lengths in common NLP datasets is such that up to 50% of all tokens can be padding. In less common, but not extreme, cases (e.g. GLUE-cola with sequence length 128), the ratio is up to 89%. Existing methods to address the resulting inefficiency are complicated by the need to avoid 'cross-contamination' in self-attention, by a reduction in accuracy when sequence ordering information is lost, or by customized kernel implementations only valid for specific accelerators. This paper introduces a new formalization of sequence packing in the context of the well-studied bin packing problem, and presents new algorithms based on this formulation which, for example, confer a 2x speedup for phase 2 pre-training in BERT. We show how existing models can be adapted to ensure mathematical equivalence between the original and packed models, meaning that packed models can be trained with existing pre-training and fine-tuning practices.
Introduction
Many language datasets, including the de-facto pre-training dataset for BERT (Wikipedia), have a skewed distribution of sequence lengths (see Figure 1). However, typical machine learning accelerators, and their corresponding libraries, exhibit poor performance when processing variable-length workloads. A simple mitigation is to set a maximum sequence length and to pad shorter sequences with padding tokens. This naive batching is widely used and provided in the vanilla BERT implementation as well as the Hugging Face framework [32]. Its effect is enhanced by the offline dataset generation process which, in BERT, attempts to "pack" together sentences so as to fill the sequence length as completely as possible [8]. We improve this process at a whole-dataset level.
We show that, even after this pre-processing, padding tokens represent 50% of all tokens of the Wikipedia pre-training dataset at sequence length 512. Thus, by avoiding the processing of padding tokens, one can get a 2x speed-up for phase 2. Overall, the lengths range from 5 tokens up to 512, and samples of length 512 represent only 23.5% of the dataset.

Beyond this simple batching, other solutions have been addressed in the literature and in open-source software implementations. When processing sequences, most libraries and algorithms use "packing" to refer to concatenating sentences from the same document (BERT) or from different documents (BERT, T5 [24], GPT-3 [4], and RoBERTa [16]) as they arrive (GREEDY) from the source dataset to generate the training dataset. None of the respective papers addresses the packing efficiency, i.e., the remaining fraction of padding. To "separate" sequences from different documents, a separator token is introduced. However, this is not sufficient and can have a significant impact on performance; this is discussed only in the RoBERTa paper, which shows that downstream F1 scores are consistently reduced on average by 0.35%. Alternative common approaches to overcome the large amount of padding in many datasets are "un-padding", as in Effective Transformer [5], and sorted batching (SORT), as in Faster Transformer [21], lingvo [28], fairseq [22], and RoBERTa. However, to run efficiently on arbitrary accelerators, these approaches require substantial hardware-specific low-level code optimizations that are only available on GPUs. Further details are in Sections C [1] and 4.4.
Beyond language models, packing has also been present in other areas of machine learning, however with little to no exploration in the literature and mostly hidden in libraries without any further discussion. For example, PyG (PyTorch Geometric) combines multiple small graphs in a batch to account for the large variation in size and to optimize the hardware usage when training a Graph Neural Network (GNN). Another example is the RNN implementation in PyTorch, which introduces a "PackedSequence" object and states that "All RNN modules accept packed sequences as inputs", but does not address how sequences are packed efficiently, nor how the processing of packed sequences is implemented efficiently while avoiding interaction between sequences. Even though we focus on BERT [6] and other transformers in this paper, the general principles can be transferred to many more machine learning algorithms with differently sized data samples.
In this paper, we formally frame the packing problem in transformer-based models and provide solutions, showing that sequences can be packed efficiently, that separator tokens are not required, and that cross-contamination can be avoided with little overhead.
In summary, the contributions of the paper are as follows. In Section 2, we produce histograms of a variety of datasets showing the high percentage of padding tokens. In Section 3.1, we present two new deterministic and efficient packing algorithms based on established solvers which efficiently pack datasets with millions of sequences in a matter of seconds (or less). In Section 3.2 and Section 3.3, we describe 'cross-contamination' -the cause of the accuracy reduction which separator tokens do not mitigate- and show how the BERT model can be adjusted to show the same convergence behavior on packed and unpacked sequences. We empirically show that the proposed packing algorithms produce a nearly-optimal packing scheme for the Wikipedia pre-training dataset (Section 4.1) and more in the Appendix. In Section 4.2, we demonstrate that the convergence of the BERT large model on the packed dataset is equivalent to that on the un-packed dataset, with a 2x throughput increase on the Wikipedia sequence length 512 pre-training dataset. Further experiments underline the necessity and efficiency of our changes. BERT is pre-trained using masked-language modelling and next-sentence prediction on a large corpus of Wikipedia articles. Each sequence is composed of one <CLS> token followed by the first "segment" of sentences, followed by a <SEP> token, and then finally the second "segment" of sentences. Because these "segments" are created in sentence-level increments, there is no token-level control of sequence length. Furthermore, 10% (default value, [7]) of sequences are intentionally cut short. This leads to significant levels of padding, especially for longer maximum sequence lengths (see Figure 1 and Section J [1]). At sequence length 128 (commonly used in phase 1 of pre-training) the theoretical speed-up is around 1.2, at sequence length 384 this increases to 1.7, and finally at sequence length 512 (commonly used for phase 2 of pre-training) it is 2.0. Despite the widespread use of the Wikipedia dataset for pre-training BERT, such histograms have, to the best of our knowledge, not been published previously. This has perhaps led to an underestimation of the available speed-up opportunity. To put things into perspective, the sequence length 512 dataset contains 8.33 billion tokens, of which 4.17 billion are padding tokens.
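These theoretical speed-ups follow directly from the length histogram. As a minimal sketch (our own illustrative code, not from the paper), the padding fraction and the resulting upper bound on the speed-up can be computed from the raw tokenized lengths as follows:

```python
import numpy as np

def padding_stats(lengths, max_len=512):
    """Padding fraction and theoretical speed-up for a naively padded dataset."""
    lengths = np.asarray(lengths)
    n_real = lengths.sum()              # non-padding tokens
    n_total = len(lengths) * max_len    # tokens actually processed with padding
    return 1.0 - n_real / n_total, n_total / n_real

# Example: three sequences padded to 512 tokens each.
frac, speedup = padding_stats([5, 130, 512], max_len=512)  # ~0.58, ~2.37
```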
Note that skewed sequence length distributions are limited neither to Wikipedia, as shown with GLUE [30,31] in Section L [1] and SQuAD 1.1 [25] in Section K [1] (2.2x speed-up), nor to BERT training, as shown with the LibriSpeech text distributions [23] in Section M [1], nor to text itself, given the LibriSpeech audio data distributions and the QM9 molecular data [27,26] (1.6x speed-up, Section Q [1]). All distributions can be found in Figure 1. Since the LibriSpeech audio data is skewed towards longer sequences, only a 1.3x speed-up could be achieved despite the theoretical maximum of 1.6x. For all other cases, the algorithms presented in Section 3.1 lead to close-to-optimal packing.
Methods
Our approach consists of three distinct components. Firstly, we pack the n data samples efficiently during pre-processing to make full use of the maximum sequence length, $s_m$ (Sections 3.1 and F). Secondly, we introduce a series of model changes in Section 3.2 that preserve the equivalence with the original BERT implementation. The changes include a self-attention mask to prevent the model from attending between different sequences in the same pack (Section 3.2.2), and an adjustment of the positional embeddings (Section 3.2.1) to handle packs of sequences. Other components of the model, such as the feed-forward layer [29], operate on a per-token basis and do not require modification for pre-training. In Section 3.2.3, we also demonstrate how to compute a per-sequence loss and accuracy for NSP and downstream fine-tuning tasks. Thirdly, we provide suggestions for hyperparameter adjustment (Section 3.3) that lead to analogous convergence behavior between the packed and un-packed BERT implementations. Additional videos and animations are provided as supplemental material.
Packing algorithms
The widely studied and well established bin packing problem deals with the assignment of items into bins of a fixed capacity such that the number of utilized bins is minimized. It has been known for decades, if not centuries. Since finding an exact solution is strongly NP-complete [14], numerous approximate solutions have been proposed [12,15,13,36]. Since most existing approximations have a high complexity of at least O(n log n), we propose two new heuristic offline algorithms that are tailored to the NLP setting and applied to the whole dataset. For a detailed introduction to packing, see Section F.
Shortest-pack-first histogram-packing (SPFHP)
Shortest-pack-first histogram-packing (SPFHP) works on the bins of the sequence length histogram (with bin size 1) rather than on the individual samples. The histogram is traversed in sorted order from longest to shortest sequences. To pack the data during the traversal, we apply the worst-fit algorithm [12,36] such that the histogram bin being processed goes to the "pack" 2 that has the most space remaining ("shortest-pack-first"). If the histogram bin does not fit completely, a new pack is created. We also limit the packing depth, in other words the maximum number of sequences that are allowed in a pack. Therefore, an existing pack is only extended if it is not already at maximum packing depth. The detailed code for the algorithm is provided in Listing 3. The time and space complexity of the algorithm are $O(n + s_m^2)$ and $O(s_m^2)$ (Section G.2 [1]).
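To convey the idea, the sketch below implements a simplified per-sequence variant of the worst-fit heuristic; the paper's actual SPFHP (Listing 3) operates on whole histogram bins at once, so the per-sequence loop and all names here are our own illustrative simplifications.

```python
import heapq

def worst_fit_pack(lengths, max_len=512, max_depth=3):
    """Greedy worst-fit packing: longest sequences first, each into the open
    pack with the most remaining space, or into a fresh pack."""
    heap, packs = [], []  # heap holds (-remaining_space, pack_id)
    for length in sorted(lengths, reverse=True):
        if heap and -heap[0][0] >= length:
            neg_space, pid = heapq.heappop(heap)  # pack with most space left
            packs[pid].append(length)
            remaining = -neg_space - length
            if len(packs[pid]) < max_depth and remaining > 0:
                heapq.heappush(heap, (-remaining, pid))
        else:
            packs.append([length])                # open a new pack
            if max_depth > 1 and max_len - length > 0:
                heapq.heappush(heap, (-(max_len - length), len(packs) - 1))
    return packs
```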
Non-negative least squares histogram-packing (NNLSHP)
The proposed NNLSHP algorithm is based on re-stating the packing problem as a (weighted) non-negative least squares problem (NNLS) [3] of the form wAx = wb, where x ≥ 0. The vector b is the histogram containing the counts of all the sequence lengths in the dataset. Next, we define the matrix A (the "packing matrix") by first generating a list of all possible sequence length combinations ("strategies") that add up exactly to the maximum sequence length. We focus specifically on strategies that consist of at most 3 sequences per pack (independent of b) and encode each strategy as a column of the sparse matrix A. For example, a strategy consisting of the sequence lengths 128, 128, and 256 is represented by a column vector that has the value 2 at the 128th row, the value 1 at the 256th row, and zero at all other rows.
The variable x describes the non-negative repetition count for each strategy, so a 24 in the ith row of x means that the strategy represented by the ith column of A should be repeated 24 times. Moreover, in the un-weighted setting, Ax = b states that we would like to "mix" the pre-defined strategies (columns of A) such that the number of samples matches the histogram b, with each strategy used x ≥ 0 times. We use the residual weight w to control the penalization of the Ax − b residual on different sequence lengths (different rows of b). Heuristically, we set a weight of 0.09 for all sequences of length 8 or smaller, because they are considered acceptable padding sequences, while all other sequence lengths get weight 1. We discuss this heuristic choice of parameters in Sections F.4.5 and F.5 [1]. The overall efficiency of the packing is not greatly influenced by the weighting (less than 1% extra speed-up).
After solving wAx = wb for x ≥ 0 using an off-the-shelf solver, we obtain a floating point solution, which means that the repetition counts are not necessarily integers. Since we cannot use a non-natural number of strategies, we round the solution $\hat{x}$ to the nearest integer. The error introduced by this rounding is found to be negligible (a few hundred sequences in the worst case) compared to the size of the dataset (millions of sequences). The time complexity and space complexity of the algorithm are $O(n + s_m^5)$ and $O(s_m^3)$. Further details are provided in Section F.4.
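A condensed sketch of the whole procedure is given below, using SciPy's off-the-shelf NNLS solver. It is our own illustrative reconstruction rather than the paper's implementation (Section F.4): the strategy enumeration, the sparse representation of A, and the solver choice differ in the actual code, and a dense solve over the roughly 22k depth-3 strategies for $s_m = 512$ is noticeably slower than the reported optimized variant.

```python
import numpy as np
from scipy.optimize import nnls

def pack_nnlshp(histogram, max_len=512):
    """Solve wAx = wb for x >= 0, then round x to integer repetition counts.

    histogram: length-max_len array; histogram[i] = #sequences of length i + 1.
    """
    # Enumerate strategies: sorted length tuples summing exactly to max_len.
    strategies = [(max_len,)]
    strategies += [(a, max_len - a) for a in range(1, max_len // 2 + 1)]
    for a in range(1, max_len // 3 + 1):
        for b in range(a, (max_len - a) // 2 + 1):
            strategies.append((a, b, max_len - a - b))
    # Packing matrix: A[l-1, s] = how often length l occurs in strategy s.
    A = np.zeros((max_len, len(strategies)))
    for s, strat in enumerate(strategies):
        for length in strat:
            A[length - 1, s] += 1
    # Down-weight residuals on very short sequences (acceptable padding).
    w = np.ones(max_len)
    w[:8] = 0.09
    x, _ = nnls(w[:, None] * A, w * np.asarray(histogram, dtype=float))
    repeats = np.rint(x).astype(int)  # rounding error is negligible in practice
    return [(strategies[s], r) for s, r in enumerate(repeats) if r > 0]
```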
packedBERT: model changes
This section describes how any vanilla BERT implementation should be modified for packed sequence processing, such that the behavior of the model is the same as when processing unpacked sequences. Preserving the mathematical equivalence is necessary to ensure existing BERT pre-training and fine-tuning practices remain valid, as well as being required by benchmarks such as MLPerf™ [17]. The presented approaches and principles apply to a variety of other models.
Adjust positional embeddings
The BERT model uses three types of embeddings: token, segment, and positional embeddings. The latter is canonically implemented as a bias add operation rather than a full embedding look-up. This is possible because the positional indices increase linearly for every sequence. However, when using the packed data format, the position index needs to be reset with each new packed sequence. For instance, when packing two sequences, one of length 2 and one of length 3, the positional embedding indices that need to be picked up are [0, 1, 0, 1, 2]. To achieve this, the bias add needs to be replaced by an embedding look-up that extracts the correct positional embedding for each token in the pack, as sketched below. This also requires keeping an extra input which specifies the position of each token in its sequence. This required adjustment has only a minor impact on absolute accuracy/loss (see Sections 4.2 and 4.2.1).
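A minimal TensorFlow sketch of this look-up is shown below; the function and tensor names are illustrative and not taken from the paper's listings.

```python
import tensorflow as tf

def packed_position_embedding(embedding_table, positions):
    """Look up positional embeddings per token instead of a linear bias add.

    embedding_table: [max_seq_len, hidden] learned positional embeddings.
    positions:       int32 [batch, seq_len]; the index of each token *within
                     its own sequence*, e.g. [0, 1, 0, 1, 2] for lengths (2, 3).
    """
    return tf.gather(embedding_table, positions)  # [batch, seq_len, hidden]
```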
Adjust attention masking

To maintain an implementation that is consistent with the un-packed version, tokens from different sequences within a pack should not be able to attend to each other. This is typically achieved in other implementations by unpacking the sequences using custom attention kernels and then doing the attention per sequence [5]. Instead, we propose directly masking the attention matrix with a block-diagonal mask before the attention softmax. This is straightforward to implement in modern frameworks (see Figure 2). Naturally, there is a cost to both constructing the mask and applying it to the attention matrix; however, it is required to preserve accuracy (see Table 1, Section 4.1, Section 4.2). See also the code of the deprecated tensor2tensor library and our own provided code.
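One way to construct such a mask from a per-token sequence-id input is sketched below (TensorFlow; the names and the additive-mask convention are our own illustrative choices):

```python
import tensorflow as tf

def block_diagonal_attention_mask(seq_ids):
    """Additive attention mask preventing cross-contamination within a pack.

    seq_ids: int32 [batch, seq_len]; tokens of the same packed sequence share
    an id (padding can be given its own id so that it is masked out as well).
    Returns [batch, 1, seq_len, seq_len]: 0 where attention is allowed and a
    large negative value elsewhere, added to the logits before the softmax.
    """
    same = tf.equal(seq_ids[:, :, None], seq_ids[:, None, :])
    return (1.0 - tf.cast(same, tf.float32))[:, None, :, :] * -1e9

# attention_logits = q @ k^T / sqrt(d) + block_diagonal_attention_mask(seq_ids)
```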
Adjust per-sequence loss and accuracy
Canonical implementations of BERT compute the cross-entropy loss for the masked language model on a per-token basis. However, other NLP tasks, such as SQuAD, compute the loss and accuracy on a per-sequence basis. This section discusses how to handle such tasks when training with packed sequences. Simply feeding packs of sequences to the same implementation of cross-entropy would result in a per-pack weighted loss; in other words, the overall loss on the micro-batch would sum up the losses on the individual packs rather than on the individual sequences. As a result, the model would converge to a different optimum than when running with the un-packed implementation. For instance, a pack of a single sequence would contribute to the loss with the same weight as a pack of three sequences.
To recover the per-sequence averaging behavior of the canonical un-packed BERT implementation, we effectively "unpack" the incoming logits and labels. Once the sequences have been unpacked, we can compute the loss on each sequence separately as usual and then add up the losses. However, rather than looping over the sequence index, we compute on all indices in parallel (see Figure 2). This minimizes the latency overhead of un-packing the loss calculation. As an example, we show how the per-sequence loss can be implemented for the pre-training task. We use the "masked lm weight" [7] input tensor to represent which sequence a given masked token belongs to (0, 1, 2, and so on). This is consistent with the canonical BERT implementation, where this input takes a value of either 1 (belonging to the sequence) or 0 (belonging to padding). The full methodology is detailed in Listing 5 and can be applied to other classification or pre-training tasks.
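The vectorized unpacking can be sketched as follows (a simplified TensorFlow version of the idea behind Listing 5; the names, the one-hot construction, and the final reduction are our own assumptions):

```python
import tensorflow as tf

def per_sequence_mlm_loss(per_token_loss, seq_index, max_sequences=3):
    """Unpack the MLM loss so that each sequence contributes once, not each pack.

    per_token_loss: float [batch, num_masked]; cross-entropy per masked token.
    seq_index:      int32 [batch, num_masked]; 1-based id of the sequence each
                    masked token belongs to, 0 for padding (generalizing the
                    0/1-valued "masked lm weight" input of unpacked BERT).
    """
    one_hot = tf.one_hot(seq_index, max_sequences + 1)[..., 1:]  # drop padding
    tokens_per_seq = tf.reduce_sum(one_hot, axis=1)              # [batch, seqs]
    loss_per_seq = tf.einsum('bt,bts->bs', per_token_loss, one_hot)
    loss_per_seq /= tf.maximum(tokens_per_seq, 1.0)              # per-seq mean
    return tf.reduce_sum(loss_per_seq)  # sum over sequences; divide by the
                                        # number of real sequences for a mean
```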
Adjust hyperparameters
In terms of convergence behavior, the primary consequence of packing is an increase in the effective batch size (with respect to the number of sequences and real tokens), with some added variation over different iterations. At the sentence level, the number of sentences in one batch increases by the packing factor; similarly, the number of tokens in one batch increases. Hence, hyperparameters that are sensitive to these numbers need to be adjusted.
A direct solution is to reduce the computational batch size by the packing factor (the average number of sequences per pack) and keep all other hyperparameters the same. For example, if the packing factor is 2, cutting the gradient accumulation count in half is sufficient. The advantage of this strategy is that no fine-tuning of hyperparameters is required and performance curves are comparable. However, this approach might not be desirable, as it might imply under-utilizing the memory/compute, especially if the micro batch size needs to be reduced.
Hence, to preserve the batch size and optimize hardware utilization, we additionally propose an approximate heuristic for updating the decay parameters of the LAMB optimizer [35]. For a packed dataset with a packing factor p, we update the decay parameters as $\beta_1 := \beta_1^p$ and $\beta_2 := \beta_2^p$. For p = 2, this corresponds to the exact parameters for calculating momentum and velocity when updating with the same gradient twice (Section D). A common approach is to scale the learning rate with the batch size; however, our experiments in Section 4.2 show that this reduces convergence speed.
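In code, the heuristic is a one-line adjustment (the concrete decay values below are common LAMB defaults used for illustration, not prescribed by the paper):

```python
# Heuristic hyperparameter update for packed training (Section 3.3 sketch).
packing_factor = 2.0                 # average number of sequences per pack
beta_1, beta_2 = 0.9, 0.999          # illustrative LAMB decay parameters
beta_1, beta_2 = beta_1 ** packing_factor, beta_2 ** packing_factor
# The learning rate is left unchanged; scaling it with the effective batch
# size reduced convergence speed in the experiments of Section 4.2.
```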
Since these adjustments are only heuristics, the convergence of the model will be comparable but not identical. In particular, it is unlikely that simply adjusting the hyperparameters will fully undo the impact of the increased batch size. However, with these adjustments, researchers should be able to continue to use existing configurations.
Experiments
Bin packing algorithm comparison
We evaluate our algorithms using the following metrics: number of packs, number of all tokens, number of padding tokens, solution time of the packing algorithm (after histogram and strategy creation), number of strategies used, packing efficiency (the fraction of non-padding tokens in the packed dataset), the speed-up achieved compared to not packing (depth 1), and the average number of sequences per sample (packing factor). For SPFHP, we analyse different (maximum) packing depths, since packing is less efficient with smaller depth and we want to gain a general understanding of how the packing depth influences the processing time. For NNLSHP, we focus on packing depth 3 because it packs the data sufficiently well. For the speed-up analysis, we focus on the intelligence processing unit (IPU) [11] (IPU-M2000, 16 accelerator chips) and the BERT phase 2 pretraining setup as in Section 4.2. A GPU dynamically loads the code onto the accelerator; in contrast, the IPU works with a static pre-compiled engine that is loaded onto the chip at the start of the run. While other approaches result in excessive padding or continuous changes of the code, our approach can work with the same code for the whole dataset, so in this setting the IPU architecture especially benefits from our approach since it avoids code changes. Nevertheless, the approach can be applied to any implementation on GPU or TPU. For determining the speed-up, we take advantage of the pre-compiled kernel: since time measurements are quite noisy, we profile the kernel and count how many cycles it takes to process a batch. That way, we can determine the overhead (in cycles) of processing the additional attention masking and of unpacking the loss. Combining overhead and packing factor, we get the speed-up estimate. No experiment repetitions are required since the algorithms and measurements are deterministic.

Table 1 (caption): Packing depth describes the maximum number of packed sequences. NONE is the baseline BERT implementation, whereas SORT corresponds to sorted batching and GREEDY concatenates sequences as they arrive until they would exceed 512 tokens. Setting no limit resulted in a maximum packing depth of 16. EFFiciency is the percentage of real tokens in the packed dataset. The packing factor describes the resulting potential speed-up compared to packing depth 1. With overhead (OH), we denote the percentage decrease in throughput due to changes to the model to enable packing (such as the masking scheme introduced in Section 3.2.2). The realized speed-up is the combination of the speed-up due to packing (the packing factor) and the decrease in throughput due to the overhead on the IPU; it is used to measure the relative speed-up in throughput together with the overhead from masking and loss adjustment. SORT can only be efficient on GPUs (see Section 4.4).
The main results for the performance metric evaluation are displayed in Table 1. The processing time for SPFHP on an Intel(R) Xeon(R) Gold 6138 CPU (2.00GHz, 80 nodes, 472G RAM) was around 0.03s and independent of the packing depth. Classical first-fit-decreasing requires 87-120s, a lot of memory, and scales almost linearly with the number of samples. We see that the overhead slightly increases with packing depth but that the benefits of packing outweigh the cost. The best speed-up is obtained with NNLSHP at depth 3, which required 28.4s on the CPU for processing and ran out of memory for larger depths. With a value of 1.913, it is close to the theoretical upper bound of 2.001. The results show that efficiency, packing factor, and speed-up can be viewed interchangeably. The amount of time needed to process a sample (a pack of sequences) is barely changed relative to the un-packed implementation; the packing factor, or the improvement in efficiency, effectively provides an accurate estimate of the speed-up. GREEDY packing as used in T5 proves to be quite inefficient, and sorted batching (SORT) is highly efficient at avoiding padding, but the resulting variety of computational graphs causes a major overhead on the IPU that exceeds the benefits of avoiding the padding. Since we made our algorithm and code publicly available, the results have been reproduced with a different framework on the Habana Gaudi accelerator [10], confirming that our approach is hardware and software independent, which gives it a substantial advantage over existing approaches.
MLPerf™ phase 2 pretraining setup: learning curves and hyperparameter adjustment
For depth 1 (classic BERT) and NNLSHP with depth 3, we additionally evaluate on the MLPerf™ version 0.7 BERT pre-training benchmark [17]. Briefly, this involves training from a standard checkpoint to a masked-language model accuracy of 71.2% using 3 million sequences with a maximum length of 512 tokens (refer to [19] for details). Following this standardized benchmark supports the reproduction of results even on other systems and ensures that the reproduction effort is moderate and the setup rules are clearly documented. We compare the resulting speed-up as well as the respective learning curves by evaluating the data on a held-out validation dataset. The objective of this additional evaluation is to analyse whether convergence behavior is changed by the packing strategy and whether the theoretical speed-up can be achieved in practice.
With packing, we effectively increase the average batch size by the packing factor (≈ 2). However, a different batch size requires different hyperparameters (see Section 3.3), and there is no mapping that will generate exactly matching results, only heuristics. In a first comparison, we use the same hyperparameters when comparing packed and unpacked training, except for cutting the accumulation count in half. This way, we make sure that the batch size is constant on average and we have the same number of training steps. In the second comparison, we evaluate our heuristics and how they compensate for the difference in batch size. This setup is more desirable because it is beneficial to use the hardware to its full potential, and cutting the batch size in half usually reduces throughput. In the third comparison, we compare two optimized setups; in these two cases, packing takes half the number of training steps.
The learning curves are displayed in Figure 3. In the first setup, we see the curves matching almost perfectly when normalizing by the number of samples processed. Differences can be explained by the variation of the number of sequences in the packed batch and by general noise in the training process. Especially after the initial phase, the curves show a near-identical match. The second setup shows bigger differences, since changing the batch size and hyperparameters changes the training dynamics. We observe slower convergence early on in training due to the increased batch size; this is expected. The adjustment of the learning rate actually decreases performance, probably because we already correct for the increased number of sequences in the modified loss. With the adjustment of the decay parameters of LAMB, we see matching performance in the later training stages. However, it is not feasible to completely recover the early convergence behavior of the smaller batch size by adjusting the hyperparameters. For instance, doubling the batch size of unpacked BERT to 3000 and adjusting the LAMB decay parameters leads to more of a slow-down in convergence than running packed BERT with a batch size of 1500 and a packing factor of 2. In practice, our implementation exceeds the estimated 1.913 maximum speed-up. This estimate is based on the reduction in the computational work needed to process the dataset; however, packing the data also reduces the latency of transferring the data to the device. Figure 3 shows that the realized total speed-up from packing exceeds 2x.
Ablation study
So far, we have shown that with the introduced adjustments we can match the accuracy of unpacked BERT. In the following, we analyze to what extent the masking adjustment is required. In Figure 4, we can see that without our adjustments, training loss and accuracy worsen drastically, and a longer training time does not lead to a recovery. When the positional embedding is not adjusted, the loss and accuracy almost match; however, the accuracy stalls at 71.8% and does not reach the target accuracy of 72.1%. Overall, both adjustments are crucial to avoid a reduction in performance.
When running packed BERT without the NSP loss but keeping everything else the same in a full training setup, we observed that downstream performance on SQuAD was reduced by 1.31% in the F1 measure and by 1.15% in EM. Hence, we do not consider removing NSP, as done in approaches like RoBERTa and T5, as discussed in Section I.
Full pretraining and SQuAD finetuning
Packing slightly violates the i.i.d. assumption on the data. Thus, we have to check that downstream performance is not impacted by packing. This is especially relevant in a full training setup without a starting checkpoint. To this end, we show that the packed and unpacked SQuAD 1.1 scores are comparable after a full pretraining of BERT base and large plus fine-tuning. During pre-training, in order to avoid giving an advantage to packing through further hyperparameter tuning, we reduce the gradient accumulation count for the packed BERT training in phases 1 and 2 to match, on average, the total number of sequences processed before each weight update. With this approach, we can use the same hyperparameters and number of training steps but process each batch faster by avoiding the processing of padding. This gives a slight disadvantage to the packed run in terms of machine utilization, as explained in Section 3.3, and differs from the speed-up analysis in Section 4.2. For phase 2, we use sequence length 384, since longer-range attention is not relevant for SQuAD 1.1. The respective speed-ups from packing for BERT base and large are shown in Table 2: the realized speed-up, measured as the quotient of the throughputs of the packed and unpacked runs, is slightly lower than the theoretical one (i.e., the packing factor) due to the packing overhead. Further learning curves with the loss function and accuracy are provided in Section P. For the fine-tuning training on SQuAD 1.1, we do not use packing. The scores, computed as the median over 10 different seeds, are displayed in Table 3. They are comparable to the reference ones in [6]: for BERT base (resp. large), the F1 score is reduced by 0.2% (resp. 0.3%) and the EM score increases by 0.3% (resp. 0.02%).
Scaling analysis: impact of accelerator count
A further advantage of packing over competing un-padding approaches is the inherent load balancing provided by packing. So-called un-padding approaches rely on dynamically launching custom kernels that ignore padding. A stated advantage of such implementations is the ability to avoid computing the complete (512 x 512) attention matrix. This provides additional computational savings compared to packing, where the attention matrix is computed in its entirety and then masked. Because of these additional savings, un-padding can exceed the theoretical upper bound for the speed-up from packing (2.013 on Wikipedia). As a result of the dynamic nature of the approach, the processing time with un-padding is different for each sequence in the batch, and the amount of time required to process a batch of sequences is determined by the processing time of the longest sequence in the batch (with the sequences being processed in parallel). Furthermore, in the multiple-accelerator setting, the processing time on each device varies depending on the sequences in the batch that it receives. Devices which finish early have to wait for the slowest device to finish before exchanging gradients. This load imbalance between the devices (and inside the batch) leads to a considerable decrease in the speed-up from un-padding as the number of accelerators is increased (see Figure 5 and Section E [1]). In contrast, packing (our approach) is inherently load-balanced: the processing time on each accelerator is independent of the content of the batch received by the device. Any number of accelerators can therefore operate in unison without having to wait for the slowest batch to process (all per-device batches are equally fast).
Conclusion
Whereas packing is a well-known concept, this paper sheds a new light on it in multiple aspects. First, we visualize the sequence length distributions of multiple datasets, not just from language domains but also from audio and molecular domains, to emphasize that packing is beneficial for varied datasets, leading to more than 2x acceleration by removing 50% or more padding. Second, we provide two new highly efficient packing approaches based on established solvers that leave almost no padding and that can tackle arbitrarily large datasets in a matter of seconds, in contrast to existing approaches that are slow and suboptimal. Third, we demonstrate that without adjusting the sequence processing algorithm (e.g., BERT) to the packed sequences, predictive performance is reduced; thus, we propose several model adjustments that are all necessary to maintain predictive performance. Last but not least, we prove that, thanks to such adjustments, predictive performance is preserved as if no packing were used, while speed significantly increases, especially since the adjustments come with an overhead of less than 5%. We prove in our experiments that downstream performance is not impacted by packing and that the anticipated 2x acceleration can be achieved.
In the future, an interesting direction is the packing of images of different sizes to help accelerate computer-vision applications. This is especially relevant given the recent advances in the use of transformer-based approaches in the computer vision domain, for example the visual transformer [33]. Note that many images come in different shapes and resolutions, and packing them can be a new approach to tackling this diversity instead of casting them all to the same resolution and shape. Masking out the self-attention within transformers is easier to implement than avoiding cross-contamination of convolutions applied to packed images. Future work should explore improving the performance of other models (RoBERTa, GPT-3, T5) by avoiding contamination between non-contiguous segments from different documents. Even BERT itself might benefit from avoiding contamination between the two concatenated segments.
[19] MLCommons. v0.7 Results. https://mlcommons.org/en/training-normal-07/, 2020. Result not verified by MLPerf. Throughput/speedup is not the primary metric of MLPerf. MLPerf name and logo are trademarks. See www.mlperf.org for more information.

[20] NVIDIA. Reference numbers for BERT un-padding results. https://github.com/mlcommons/training_results_v0.7/blob/master/NVIDIA/results/dgxa100_ngc20.06_pytorch/bert/result_0.txt, 2020. Throughput/speedup is not the primary metric of MLPerf. MLPerf name and logo are trademarks. See www.mlperf.org for more information.

[21] NVIDIA. Faster Transformer. https://github.com/NVIDIA/DeepLearningExamples/tree/master/FasterTransformer/v1, 2021.

Supplemental Material for "Efficient Sequence Packing without Cross-contamination: Accelerating Large Language Models without Impacting Performance"
A Broader impact
We showed that when pre-training BERT on Wikipedia, the computational overhead of processing padding tokens is roughly 50%. By eliminating this wasted computational time, the approach presented in this paper paves the way to halving the carbon footprint of training BERT-based models.
Furthermore, our approach circumvents the need for custom kernels, making the benefits of packing readily accessible to a broader audience of NLP practitioners. As such, we are hopeful the research will have a positive impact on the NLP community, and we do not see any disadvantage to using this approach.
The benefit of our algorithm rests on two assumptions: a skewed length distribution in the training dataset and a hardware setup that trains efficiently on a fixed batch size. If efficient training is possible with a variable batch size, approaches like FasterTransformer and the fairseq sorted-batch approach will result in the same or even larger benefits (due to smaller self-attention matrices). If the dataset is generated differently, as in GPT models [4] and RoBERTa (FULL-SENTENCES) [16], all sequences are at full length, sequences cannot be concatenated, and there is indeed no benefit in packing. However, strategies that reach full sequence length usually combine segments from different, unrelated document sources, which can result in reduced performance. Even in the normal BERT model, there might be this kind of contamination between segments from different documents. Our paper introduced an approach to avoid the contamination between sequences; the same approach could also be applied to avoid contamination between segments, and it remains future work to explore its benefits beyond BERT pretraining.
Future work would need to investigate the applicability of packing to text produced by different cultures and in different languages. We have already shown that the speed-up resulting from our methods occurs not only when pre-training BERT on Wikipedia but also on other datasets such as SQuAD and GLUE. Furthermore, the sentence length distribution of original English language text shows similar characteristics. Our research leads us to believe that compressible distributions arise naturally in language tasks and beyond, for instance in DNA sequence lengths [40], protein lengths [39], and speech (Section M). Many such sequence modelling workloads are based on variations of the BERT/transformer architecture and would therefore easily benefit from our acceleration.
Failures in NLP can have a big impact on society; many technologies, such as Alexa, Siri, and Google Home, rely on them. Whilst any errors arising from our approach can be avoided, one potential source of error comes from the implementation. Both the attention mask and the per-sequence loss need to be modified to support packing. These changes are significantly smaller than those required by custom kernels, however they may still be time consuming to implement and debug. To help mitigate the risk of any implementation errors, we share our reference implementations of the required changes in the appendix.
B Reproducibility Statement
All code for the packing algorithms is available in the appendix (Section U) and is directly linked to our GitHub page to simplify the download and usage. We even provide code for different variants and the histograms of sequence length for different datasets that got tokenized for BERT training of fine-tuning.
To generate the learning curves, our public submission to MLPerf™ could be used and we are preparing further code releases in other frameworks. To encourage the use of the adjustments of models for packed sequences, we additionally provide detailed explanations and code snippets in TensorFlow.
Detailed mathematical formulas (Section E and F), a theorem proof (Section D), and complexity calculations (Section G) are provided in this appendix to support our claims in the paper in full detail.
C Related work
The most obvious way to reduce the extent of padding in the dataset is to group samples by size before batching (SORT), i.e., to process the shorter samples together and the longer samples together. BERT is pre-trained in two phases, where the first phase uses sequence length 128 for 900K steps and the second phase uses sequence length 512 for 100K steps. However, even by splitting the training in this way, the wasted compute due to padding is approximately 20% (see Figure 1). Other examples of this "sorted batching" approach can be found in Faster Transformer [21], lingvo [28], fairseq [22], and RoBERTa [16], which group samples of similar size together in one batch and fill up with padding only to the maximum length in this batch. This approach can be highly efficient in cases where the dataset length is multiple orders of magnitude larger than the batch size and the number of different sequence lengths. Despite its high computational efficiency, this approach has multiple drawbacks, which we outline below before proposing an alternative that maintains the high efficiency while circumventing the downsides. Firstly, sorting the data can reduce the overall convergence speed when the batch size is large because it violates the i.i.d. assumption on the data distribution [2,18]. Secondly, processing batches with shorter sequence lengths under-utilizes the compute compared to running the same batch size with a longer sequence length. For GPUs, a common heuristic to mitigate this effect is to adjust the batch size to keep the number of processed tokens near constant [22,16]. In general, however, the relationship between the sequence length and the optimal batch size is more complex, and maximizing compute utilization can require the model to be sharded differently across multiple accelerators. Avoiding this (often manual) process is important for ease of use and for the portability of methods across different hardware architectures. Thirdly, modern NLP applications are optimized and compiled for fixed tensor sizes using tools such as XLA [34,9], which provides a ≈ 7x acceleration for BERT in MLPerf™ [17] compared to the non-XLA baseline [34]. Changing the sequence length or batch size requires re-optimization of the computational graph and recompilation of the program for the new tensor shapes. For complex models such as BERT, optimization and recompilation take a non-negligible amount of time. Even if one pre-compiled and cached all combinations of batch size and sequence length, the kernels would still need to be re-uploaded to the device every time the shapes change. Depending on how frequently the tensor shapes change, the overhead from switching kernels adds up. To avoid these issues, it is preferable (and common) to work with fixed tensor shapes for the entire duration of the training run.
More advanced approaches for reducing the padding overhead rely on custom computational kernels; loosely, these are referred to as "un-padding" approaches. In Effective Transformer [5], the input batch is provided as a padded matrix, but padding values are dynamically removed and restored during different calculation stages. While un-padding implementations are highly sophisticated and able to completely circumvent the processing of padding tokens, they introduce a significant overhead due to the multiple GPU kernel launches (i.e., one kernel per sequence rather than one kernel per batch). Additionally, the time to process each batch fluctuates depending on the sequence lengths in the batch, i.e., batches with only shorter sequences will typically be processed faster. When working with more than one accelerator, this variability in throughput results in all devices in the cluster waiting for the device with the most compute-intensive batch to finish processing. As such, un-padding approaches are not appropriate for deployment on large clusters. The "packing" based approach introduced in this paper offers significant advantages over un-padding approaches. Firstly, packing is implemented directly at the framework level and requires no additional custom kernel implementations. Secondly, the processing time for each batch is independent of the content of the batch, allowing the packing-based approach to maintain the same speed-up whether running on a single device or on thousands.
While we demonstrate the effectiveness of packing specifically on the Wikipedia dataset, packing SQuAD [25] or GLUE datasets [31,30] for BERT also leads to significant speed-ups, some in excess of 9x (Sections K and L). The effectiveness of packing is a result of both the length distribution of the documents in the source datasets and the different text preprocessing steps for BERT [8]. The use of bi-directional self-attention in BERT implies that the input sequences should contain complete sentences. If a sentence is abruptly cut short, the hidden states on the other (preceding) tokens in the sequence will be affected. Language models with causal attention (only attending to previous tokens in the input) do not have this issue to the same degree. For such models, if a sequence is cut short at an arbitrary token, the other tokens (which occur earlier in the sequence) will not be affected. This ability to cut sequences arbitrarily completely trivializes the packing problem for models based on causal attention.

For instance, GPT-3 [4] is trained with a maximum sequence length of 2048, where a single sequence may contain multiple segments of sentences separated by a special end-of-segment token. The last segment in each sequence is simply cut to meet the sequence length requirement, making the packing problem trivial and avoiding any padding. In the interest of computational efficiency, GPT-3 does not mask the attention between different segments in a sequence. In contrast, the packing approach presented in this paper introduces a mask in the attention layer (see Section 3.2.2) to prevent cross-contamination between examples in a pack. Note that we mask the interaction between different sequences, not between different sentences or segments in the same sequence. This ensures that the characteristics of the original dataset and model are matched as closely as possible. RoBERTa and many other models in production, like T5 [24], use a similar packing approach to GPT-3, combining full sentences/sequences with GREEDY packing (first come, first concatenated) and also separator tokens or additional padding. The RoBERTa ablation study shows that mixing sentences from different documents reduces accuracy, but it is used nonetheless for load-balancing reasons, which indicates that sorted batching is not sufficient.
There may be hidden code snippets, as in the deprecated tensor2tensor library, that seem to implement the same attention-masking mechanism as we propose. However, these lack sufficient documentation, testing, evaluation, ablation, and communication to the research community to be considered state of the art in NLP research. More generally, to the best of our knowledge and the knowledge of the many other engineers and researchers we have been in contact with, there is no other research work that focuses on packing in NLP.
D Theorem on LAMB hyperparameter correction heuristic
With packing, the effective batch size changes, and hence the hyperparameters of the LAMB optimizer [35] need to be adjusted. For a packed dataset with a packing factor $p$, we update the decay parameters as $\beta_1 := \beta_1^p$ and $\beta_2 := \beta_2^p$. For instance, if $\beta_1 = 0.81$ for the un-packed dataset, then for a packed dataset with an average of 2 sequences per sample one should use a value of $0.81^2 \approx 0.66$ instead. Assuming no or only minor changes in the gradients and $p$ being a natural number, this heuristic is the exact solution to ensure that momentum and velocity in LAMB are unaffected by packing; the proof is by mathematical induction. Note that $p \ge 1$ by definition.
Theorem D.1. For any $p \in \mathbb{N}$, and assuming that the respective gradients on a batch of $b$ random samples are (approximately) the same, choosing

$$\beta_1 := \beta_1^p \qquad (1)$$
$$\beta_2 := \beta_2^p \qquad (2)$$

as hyperparameters in the LAMB optimizer ensures that the momentum and velocity after $p$ separate update steps are the same as with one packed update step with $p \times b$ samples.
Proof.
• Base case: For $p = 1$, the left and right sides of the equations are the same, which matches exactly the unpacked case. Hence, the theorem holds for $p = 1$.
• Inductive hypothesis: Suppose the theorem holds for all values of p up to some k, k ≥ 1.
• Inductive proposition: The theorem holds for p = k + 1.
• Proof of the inductive step: Let $l$ be the loss function, $w_t$ the weight vector after $t$ updates, and $x_1^t, \ldots, x_b^t$ the respective underlying data used to calculate the gradient $g_t$. For a single update step in LAMB with a batch of $b$ samples, we compute the gradient

$$g_t = \frac{1}{b}\sum_{i=1}^{b} \frac{\partial l}{\partial w}(x_i^t, w_t). \qquad (3)$$
Since $g_1 \approx g_2 \approx \cdots \approx g_{k+1}$, we have, with the inductive hypothesis and the definitions in LAMB:

$$m_k = \beta_1^k m_0 + (1 - \beta_1^k) g_1 \qquad (4)$$
$$v_k = \beta_2^k v_0 + (1 - \beta_2^k) g_1^2 \qquad (5)$$
Now we can calculate (with $g_1 \approx g_{k+1}$):

$$m_{k+1} = \beta_1 m_k + (1 - \beta_1) g_{k+1} \qquad (6)$$
$$\approx \beta_1 \left( \beta_1^k m_0 + (1 - \beta_1^k) g_1 \right) + (1 - \beta_1) g_1 \qquad (7)$$
$$= \beta_1^{k+1} m_0 + (1 - \beta_1^{k+1}) g_1 \qquad (8)$$
The calculation for $v_k$ is the same. As reference, for a packed update with $p = k + 1$ with $\beta_1$ and $\beta_2$, we would get

$$g = \frac{1}{pb}\sum_{j=1}^{p}\sum_{i=1}^{b} \frac{\partial l}{\partial w}(x_i^j, w_1) = \frac{1}{p}\sum_{j=1}^{p} \frac{1}{b}\sum_{i=1}^{b} \frac{\partial l}{\partial w}(x_i^j, w_1) \approx \frac{1}{p}\sum_{j=1}^{p} g_1 = g_1 \qquad (9)$$
since we are calculating gradients over $b$ samples which are assumed to be approximately the same. Consequently, the updates for momentum and velocity would be

$$m_k = \beta_1 m_0 + (1 - \beta_1) g_1 \qquad (10)$$
$$v_k = \beta_2 v_0 + (1 - \beta_2) g_1^2 \qquad (11)$$
Hence, $\beta_1 = \beta_1^{k+1}$ and $\beta_2 = \beta_2^{k+1}$ are required to map to the formula with the consecutive updates (for the same amount of data).
• Conclusion: The theorem holds for any p ∈ N.
Since we proved that the formulas $\beta_1 := \beta_1^p$ and $\beta_2 := \beta_2^p$ hold for all $p \in \mathbb{N}$, $p \ge 1$, it is safe to assume that this is an appropriate heuristic for all $p \in \mathbb{R}$, $p \ge 1$.
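As a minimal illustration of this heuristic (our own sketch; the function name and the example values other than $\beta_1 = 0.81$ are assumptions, not from the original experiments):

def adjust_lamb_betas(beta_1, beta_2, packing_factor):
    """Decay-parameter correction from Theorem D.1 for a packed dataset.

    packing_factor is the average number of sequences per pack (p >= 1).
    """
    return beta_1 ** packing_factor, beta_2 ** packing_factor

# Example: beta_1 = 0.81 un-packed and an average of 2 sequences per
# sample gives approximately 0.66, as in the text above.
adjusted_beta_1, adjusted_beta_2 = adjust_lamb_betas(0.81, 0.999, 2)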
E Un-padding scaling estimate
To demonstrate the severity of the load-imbalance issue in Section 4.4, we consider the scaling of an un-padding approach with a per-device batch size of 32 running on eight devices [20]. From there, we extrapolate the performance to both larger and smaller cluster sizes by fitting a Gumbel distribution to the observed processing times, as described in this section. On a single device with batch size 32, un-padding outperforms packing and exceeds the theoretical upper bound for packing. As the number of devices increases to two or more, the proposed packing approach outperforms the dynamic un-padding approach. On a cluster with 32 accelerators the speed-up from un-padding drops to 50%, and with 2048 devices the speed-up is only 30%. In contrast, the speed-up due to packing is independent of the number of accelerators and stays at 1.913. Switching to a smaller batch size would reduce the load-imbalance issue to some extent, but would also result in under-utilization of the available memory and compute.
Firstly, we retrieve the per-batch processing time for an un-padding implementation running pre-training on the Wikipedia dataset from [20]. These processing times were obtained using 8 GPUs each with a per-device batch size of 32. We also retrieve the throughput numbers for the same system running with padding from [44] and use that as the baseline to compare the un-padded throughput against.
The throughput on the 8 GPU system is effectively limited by the slowest of the eight batches being processed in parallel. The Gumbel distribution is particularly suited to modelling the maximum or minimum value of a fixed-size collection of i.i.d. samples (in this case batches). We observe that on 8 GPUs the throughput (i.e., speed-up) distribution indeed closely resembles a Gumbel distribution with $\alpha_8 = 1.6$ and $\beta_8 = 0.13$, as shown in Figure 6. We can extrapolate the performance of the 8 GPU system to larger clusters by recognizing that the processing time for each cluster is effectively determined by the slowest batch being processed. Specifically, we could randomly sample (without replacement) two processing times from the 8 GPU system and record the maximum of the two as the processing time for a system of 16 GPUs. However, this simple approach is too sensitive to outliers in the data and would result in an under-estimate of the performance of un-padding on large systems. We mitigate the effect of outliers by avoiding directly sampling the processing times. Instead, we fit a Gumbel distribution to the processing times of a single batch of size 32 running on one GPU. To perform the fit, we observe that the cdf on one GPU ($P_1$) is related to the cdf on 8 GPUs ($P_8$) through [41] (Section 1.3):
$$(1 - P_8(s)) = (1 - P_1(s))^8 \qquad (12)$$
In other words, if the speed-up on the cluster is larger than $s$, this implies that the speed-up on every GPU in the cluster was at least $s$. Assuming $P_1$ is Gumbel and given the 8 GPU Gumbel parameters $\alpha_8$ and $\beta_8$, we need to fit two parameters, $\alpha_1$ and $\beta_1$. Consequently, for the median ($s = \alpha_8 - \beta_8 \ln(\ln(2))$, $P_8(s) = 0.5$), we have:
$$0.5 = \left(1 - P_1(\alpha_8 - \beta_8 \ln(\ln(2)))\right)^8. \qquad (13)$$
And since $P_8$ is Gumbel, we also have an equation for the mode ($s = \alpha_8$, $P_8(s) = e^{-1}$):
$$(1 - e^{-1}) = (1 - P_1(\alpha_8))^8. \qquad (14)$$
We solve these two non-linear equations simultaneously using the standard SciPy optimization package.
Listing 1: Infer Gumbel distribution parameters.
import numpy as np
from scipy import stats, optimize

# Gumbel parameters observed on the 8 GPU system
alpha_8 = 1.6038
beta_8 = 0.1288
n_gpu = 8  # was undefined in the flattened listing; implied by the setup

def g(x):
    alpha_1, beta_1 = x
    dist = stats.gumbel_r(loc=alpha_1, scale=beta_1)
    # Equations for median and mode
    median = alpha_8 - beta_8 * np.log(np.log(2))
    equation1 = 0.5 - dist.sf(median) ** n_gpu
    mode = alpha_8
    equation2 = (1 - np.exp(-1)) - dist.sf(mode) ** n_gpu
    return equation1 ** 2 + equation2 ** 2

res = optimize.minimize(g, [alpha_8, beta_8], method="Nelder-Mead")
alpha_1, beta_1 = res.x
The resulting estimated speed-up Gumbel distribution for a single device has $\alpha_1 = 1.94$ and $\beta_1 = 0.108$ and is shown in Figure 6 [right]. To simulate the performance of a cluster of size $n$ with a batch size of 32 per device, we take the minimum over $n$ samples from this distribution. Repeating this process to generate many samples allows us to estimate the expected speed-up for any given cluster size. Unfortunately, we cannot make any statistical inference about the processing times of individual sequences, since the data is only provided at the granularity of 32 sequences per batch, and it is not clear how much of the computation is done in parallel and how much in serial.
F Technical background on packing

F.1 Canonical packing problem
The bin packing problem deals with the assignment of items into bins of a fixed capacity such that the number of utilized bins is minimized. In the canonical formulation of the packing problem, a vector $s(i)$ of length $n$ is used to represent the items being packed, where $s(i)$ denotes the length of the $i$-th sequence/item. The allocation of items into bins is tracked through the assignment matrix $B$, where $B_{ij} \in \{0, 1\}$ states whether the $i$-th sequence should be placed into the $j$-th bin. In the worst-case scenario, every item is assigned to its own bin, thus $B \in \mathbb{R}^{n \times n}$. Notably, $s$ grows linearly in the number of sequences/items being packed, and $B$ grows with the square. To mask out unused bins, $y_j \in \{0, 1\}$ denotes whether the $j$-th bin is being used. The optimization objective is to minimize the sum of $y_j$, while making sure to assign each $s(i)$ to exactly one bin and not exceed the maximum bin capacity $s_m$ for any bin:

$$\min_{y, B} \sum_{j=1}^{n} y_j \quad \text{s.t.} \quad \sum_{j} B_{ij} = 1 \;\; \forall i \;\; \text{(each sequence is assigned to exactly one bin)}, \qquad \sum_{i} s(i)\, B_{ij} \le s_m y_j \;\; \forall j \;\; \text{(cumulative length cannot exceed capacity)}. \qquad (15)$$

This problem formulation is well known as bin packing [14].
Bin packing is a strongly NP-complete [14] problem. Producing an exact and optimal solution is possible with a variety of existing algorithms, for example with the branch-and-cut-and-price algorithm [37]. However, given that we want to apply it to very large $n$ (16M for the Wikipedia dataset), an approximate approach is required.
F.2 Approximate bin packing problem
Approximate packing approaches are divided into online and offline algorithms [12]. Online algorithms process incoming sequences one by one in a streaming fashion, whereas offline algorithms have a holistic view of all samples to be packed but typically still operate on a per-sample basis. This results in best-case time and memory complexities of at least O(n log(n)) and in solutions that can sometimes be far from optimal, especially for online algorithms, which do not have access to a holistic view of the dataset. The simplest online approach (next-fit) keeps a single open bin at any given time. An incoming sequence is added to this open bin if it fits; otherwise the bin is closed (it can never be appended to again) and a new one is opened to accommodate the new sequence [12] (see the sketch after this paragraph). In the case of the Wikipedia pre-training dataset, almost 25% of the sequences are of length 512, which makes this approach very inefficient, since bins would frequently be closed because the incoming sequence did not fit. More specifically, this approach is not able to efficiently combine one long sequence with one shorter sequence when the number of long sequences is large. The algorithms that come closest to the approaches proposed in this paper are the online harmonic-k algorithm [15], which creates harmonic-sized bins for the assignment decision, and the offline Modified First Fit Decreasing method [13,36], which sorts the data, groups it into 4 size categories, and defines a strategy adjusted to these sizes.
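For illustration, a minimal sketch of the next-fit baseline described above (our own simplification, operating on a plain list of sequence lengths):

def next_fit(sequence_lengths, max_sequence_length=512):
    """Next-fit bin packing: keep a single open bin and close it as soon
    as an incoming sequence does not fit. Returns a list of bins, each a
    list of the sequence lengths it contains."""
    bins = []
    open_bin, space_left = [], max_sequence_length
    for length in sequence_lengths:
        if length > space_left:
            bins.append(open_bin)  # close the current bin for good
            open_bin, space_left = [], max_sequence_length
        open_bin.append(length)
        space_left -= length
    if open_bin:
        bins.append(open_bin)
    return bins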
In our approaches, we make three major simplifications. First, we make the problem of bin packing less dependent on $n$ by operating on the histogram of sequence lengths with bin size 1. Hence, we replace $s(i)$ by its histogram $b$, and the bin assignments $y$, $B$ by a mixture of strategies $x$, where the set of all available packing strategies is modeled as the matrix $A$ (see also Section F.4.2).
Second, we do not solve the full packing problem but focus on a fixed packing depth (in other words, the well-known 3-partition problem). Last but not least, we solve the limited-depth packing problem only approximately, either with non-negativity-constrained linear least squares [3] (NNLS) followed by rounding to the nearest integer solution, or by applying Worst-Fit [13,36] to the histogram sorted from largest to smallest (in contrast to using an unsorted dataset). An exact solution would not be appropriate, since the 3-partition problem is strongly NP-complete [38] as well.
F.3 Definitions
In this section, we standardize the terms used throughout our methods. Firstly, the terms pack and bin may be used interchangeably. Secondly, the presented packing schemes impose a limit on how many sequences can be packed into any given bin. This limit is referred to as the maximum packing depth. For simplicity, we require the different sequence lengths in a pack to always add up exactly to the bin capacity $s_m$ (we can always generate a padding sequence of just the right length to fill up the bin). A packing strategy is a sorted list of sequence lengths, for example [5, 7, 500], such that the total sequence length is no more than $s_m$ and the number of sequences in the pack does not exceed the maximum packing depth. The output of a packing scheme is typically a set of packing strategies and the corresponding repeat count for each strategy, stating how many times each strategy should be repeated in order to cover the entire dataset. The strategy repeat count is also referred to as the mixture of strategies. The objective of the packing algorithm is to jointly design a set of packing strategies and their repeat counts such that the amount of padding is (approximately) minimized. The presence of padding in the packs can be either implicit or explicit. For instance, for $s_m = 512$ the strategy [2, 508] has an implicit padding of 2 (needed to fill the pack up to $s_m$). Alternatively, the strategy repeat count may over-subscribe a particular sequence length, leading to explicit padding. For instance, constructing a pack of [4, 508] may require a new padding sequence of length 4 to be constructed if there are not enough sequences of that length in the dataset. The packing algorithms we present use both representations.
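As a toy illustration of these terms (our own example, for $s_m = 8$ and a maximum packing depth of 3; the variable names are ours):

max_sequence_length = 8  # the bin capacity s_m
# Packing strategies: sorted lists of lengths adding up to s_m.
strategy_set = [[1, 1, 6], [2, 6], [8]]
# The mixture of strategies: how often each strategy is repeated
# to cover the dataset.
strategy_repeat_count = [10, 5, 100]
# If the dataset held only 8 sequences of length 1, realizing the first
# strategy 10 times would require 12 explicit padding sequences of
# length 1 to be created.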
F.4 Non-negative least squares histogram-packing
The first algorithm proposed in this paper is suitable for settings where it is desirable to achieve a high packing efficiency with a limited packing depth. The algorithm is deterministic and has three major components, described in Sections F.4.1, F.4.2, and F.4.3.

F.4.1 Enumerating packing strategies of fixed packing depth

Since a packing strategy is a sorted list of sequence lengths, permutations of the same lengths (for example, [5, 7, 500] and [7, 5, 500]) represent the same strategy and should only be listed once. This reduces the search space as well as the space of potential solutions by a factor of approximately 6 and thus significantly accelerates the optimization process. If the same strategy were repeated 6 times instead of having just one instance of that strategy with weight $x$, there would be six instances with weight $x/6$ (for example, or any other distribution). This would conflict with the integer rounding of the solutions and with the convergence of optimization algorithms.
F.4.2 Constructing the packing matrix
The number of rows in the packing matrix is equal to the number of different sequence-length categories. For instance, if we are using a granularity of 1 token to distinguish between different sequence lengths, then there are "maximum sequence length" rows. Each column of the matrix corresponds to a valid packing strategy (given the depth of packing). An example packing matrix for fitting up to 3 sequences into sequence length 8 is given in Table 4. Each column of the matrix represents a packing strategy. For instance, the first column represents the strategy [1, 1, 6] of packing two length-1 sequences and one length-6 sequence together to form a pack of length 8. The number of strategies (and columns in the matrix) is discussed in Section G. For a packing depth of 3 and maximum sequence length $s_m$, we obtain around $\frac{s_m^2 + 6 s_m + 12}{12}$ strategies. For depth 4, around $\frac{s_m (s_m + 4)(2 s_m + 1)}{288}$ more get added.
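A brute-force sketch (our own) of constructing such a packing matrix for packing depth up to 3; for $s_m = 8$ it yields the 10 strategies predicted by the formula above:

import numpy as np

def pack_matrix(max_sequence_length=8):
    """Enumerate all sorted strategies (a, b, c) with a <= b <= c and
    a + b + c == max_sequence_length (zeros denote unused slots), and
    build the matrix A whose column j counts, per length, how many
    sequences of that length strategy j contains."""
    strategies = []
    for a in range(max_sequence_length + 1):
        for b in range(a, max_sequence_length + 1):
            c = max_sequence_length - a - b
            if c >= b:
                strategies.append([x for x in (a, b, c) if x > 0])
    A = np.zeros((max_sequence_length, len(strategies)), dtype=int)
    for col, strategy in enumerate(strategies):
        for length in strategy:
            A[length - 1, col] += 1
    return A, strategies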
F.4.3 Solution of the NNLS approximate packing problem
A solution of the packing problem is the mixture of packing strategies x that minimizes the amount of padding in the packed dataset. We solve directly for the mixture (positive real numbers) and recover the padding as the negative portion of the residual (see Section F.4.4).
$$\min_{x \in \mathbb{R}^m} \| A \cdot x - b \|^2 \quad \text{s.t.} \quad x \ge 0 \qquad (16)$$
The solution vector $x$ will represent the mixture of the columns of $A$, in other words the mixture of valid packing strategies, such that $A \cdot x$ is as close as possible (in the least squares sense) to the histogram of sequence lengths $b$. We obtain a solution with a non-negative least squares implementation [42,46]. Interestingly, in the case of sequence length 512, only 634 out of the 22102 available packing strategies of depth up to 3 are used (3%).

F.4.4 Padding as the residuals of the packing problem

We compute the residuals of the least squares solution (after rounding the mixture to integer) as:
$$r = b - A \cdot \operatorname{round}(x) \qquad (17)$$
The negative portion of the residuals represents sequences that we are "short" of. That is, there is a deficit of those sequences and we are over-subscribing to them. The positive portion of the residuals represents sequences which have failed to be packed. Typically, there is a deficit of short sequences and a surplus of long sequences, as demonstrated by the plot in Figure 7. The detailed code for the algorithm is provided in Listing 2.
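A compact sketch of this solve-and-round step (our own, assuming the matrix A and the histogram b from the previous subsections; scipy.optimize.nnls is one of the referenced NNLS implementations):

import numpy as np
from scipy.optimize import nnls

def solve_packing_mixture(A, histogram):
    """Solve min ||A x - b||^2 s.t. x >= 0, round to an integer mixture,
    and split the residual into deficit (padding to create) and surplus
    (sequences left unpacked)."""
    x, _ = nnls(A.astype(float), histogram.astype(float))
    strategy_repeat_count = np.rint(x)
    residual = histogram - A @ strategy_repeat_count
    deficit = np.where(residual < 0, -residual, 0)
    surplus = np.where(residual > 0, residual, 0)
    return strategy_repeat_count, deficit, surplus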
F.4.5 Residual weighting
A natural extension of the non-negative least squares problem introduced in Section F.4.3 is to weight the residuals on different sequence lengths differently.
$$\min_{x \in \mathbb{R}^m} \| (wA) \cdot x - (wb) \|^2 \quad \text{s.t.} \quad x \ge 0 \qquad (18)$$
We should not significantly penalize a deficit in short sequence lengths (smaller than 8 tokens), as adding up to 8 tokens of padding is not much overhead. Similarly, a surplus in long sequences is not worrisome, because the amount of padding needed to achieve a sequence length of 512 is small. Reducing the weight of the residual on the first 8 tokens to 0.09 leads to the residual plot shown on the right in Figure 8. In this case, the residual is almost entirely shifted to the shorter sequences, and the positive residual on the longer sequences has virtually disappeared.

F.5 Discussion of residual weight choice

This section discusses the choice and effect of the weighting parameters in the NNLSHP packing algorithm. To simplify the problem of selecting reasonable defaults for the residual weights, we use just two parameters to completely describe the weights: an "offset" parameter and a "weight" parameter. Originally, all sequence-length residuals are given the same weight of 1. This results in a packing with leftover long sequences, because there are not enough short sequences to pack them with. To reduce the residual on long sequences, we could either increase the residual weight on long sequences or reduce the weight on short sequences. We chose to reduce the weight on short sequences. Specifically, sequence lengths up to the "offset" length have a reduced "weight". The other residual weights stay at 1.
To start, we chose an offset of 8 tokens, which is the smallest power of 2 for which there are examples in the Wikipedia dataset. We decrease the weight on sequences shorter than the "offset" from 1 to 0.9 to 0.09 to see which order of magnitude is the most appropriate. On visual inspection (looking at the residual plots as in Figure 8), we found that 0.9 still left too many long sequences unpacked. So, we reduced the weight a further order of magnitude to 0.09. This seemed sufficient to encourage nearly all long sequences to pack. While visual inspection helps in understanding how many long/short sequences are leftover, we are also interested in the impact the weights have on the overall efficiency of the packing.
Without any weighting, we get 99.746359% efficiency, whereas the weighted approach results in 99.746274% efficiency. Hence, we conclude that the impact of the weights on the packing efficiency is very limited. Additionally, using an "offset" length of 4, resulted in similar numbers, for the full range of weights from 0 to 1. Using a weight of 0 for an "offset" length of 8 resulted in insignificantly higher efficiency of 99.7519%, whereas using an "offset" length of 16 reduces performance to 99.38964%. A weight of 0 implies that the residual on those lengths can be safely ignored, i.e., the packing algorithm can thus add as many short sequences as it chooses without any penalty. It is very interesting that this does not significantly impact the packing efficiency, and can even have a slightly positive impact. However, increasing the "offset" length further significantly decreases the performance with weight 0. Keeping the weight at 0.09 and increasing the length reduces performance slightly, for example with 99.53% at length 256 and 99.728% at length 16.
For our SQuAD analysis, weighting improved the efficiency slightly from 96.94% to 97.38%. Further fine-tuning with a directed grid search delivered a local optimum of 98.767% efficiency with offset length 64 and weight 0.002.
Overall the influence of different residual weights on the packing efficiency (and the acceleration factor) is less than 1%. This might differ from application to application, but it shows that we are able to use the residual weights to achieve secondary targets (like not having leftover long sequences) without significantly compromising the packing efficiency.
G Complexity analysis of the proposed packing approaches
Since approximate packing algorithms have a complexity of at least O(n log(n)) and we would like to be able to tackle datasets with 2K million samples, we will discuss the complexity of our packing algorithms in this section. The complexity depends on the maximum sequence length s m , the number of samples n, and the packing depth d.
To create the histogram, we have to iterate over the data once (O(n)). Our histograms are binned with bin size 1, meaning one bin for each sequence length. Hence, a dictionary can be generated (O($s_m$)) and used for the sorting (O(1) per sample). The respective histogram vector has dimension $s_m$.
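A one-line sketch of this histogram construction (our own, using NumPy instead of a dictionary):

import numpy as np

def build_histogram(sequence_lengths, max_sequence_length=512):
    """O(n) histogram with bin size 1; entry i holds the count of
    sequences of length i + 1."""
    counts = np.bincount(sequence_lengths, minlength=max_sequence_length + 1)
    return counts[1:]  # drop length 0; the histogram has dimension s_m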
G.1 Complexity Analysis of non-negative least-squares histogram-packing
For a packing depth of one, there is only the strategy $[s_m]$. For a packing depth of two, we add the strategies $[1, s_m - 1], \ldots, [s_m/2, s_m/2]$, which results in an additional $s_m/2$ potential strategies. Following the dynamic programming approach, the number of possible additional strategies of depth three can be calculated with

$$\#\text{potential strategies} = \sum_{j=1}^{s_m/3} \sum_{i=j}^{(s_m - j)/2} 1 = \sum_{j=1}^{s_m/3} \left( \frac{s_m - j}{2} - (j - 1) \right) \approx \sum_{j=1}^{s_m/3} \left( \frac{s_m}{2} - \frac{3}{2} j \right) \approx \frac{s_m}{2} \cdot \frac{s_m}{3} - \frac{3}{2} \cdot \frac{(s_m/3)(s_m/3 + 1)}{2} \approx \frac{s_m^2}{12} \qquad (19)$$
Note that for $s_m = 512$ the approximation is exact. This means that our strategy matrix $A$ has the dimensions $s_m \times \left( \frac{s_m^2}{12} + \frac{s_m}{2} + 1 \right)$. Overall, this leaves us with a space complexity of $s_m^3$, since $A$ is larger than $w$, $x$, and $b$. So it contains 11'316'224 numbers, which is still much smaller than $n$. Note that the original data matrix $B$ had $n^2$ entries, which all needed to be optimized together with the $n$ bin assignments $y$. We now have only $\frac{s_m^2}{12} + \frac{s_m}{2}$ free variables in the strategy vector $x$. Also note that $A$ can be precomputed when $s_m$ is known and is independent of the number of samples. Given a problem matrix with dimension $i \times j$, Luo et al. [43] indicate that the asymptotic complexity of most solution approaches is $O(ij^2)$, whereas they propose an $O(ij)$ solution. Since we use the standard SciPy implementation [42], our estimated total time complexity for NNLSHP is $O(n + s_m^5)$. For $s_m = 2048$, the estimate would be 350'540 potential strategies, which is still far less than the number of samples. For packing depth 4, we calculate [48]:
$$\sum_{k=1}^{s_m/4} \sum_{j=k}^{(s_m - k)/3} \sum_{i=j}^{(s_m - j - k)/2} 1 \approx \sum_{k=1}^{s_m/4} \sum_{j=k}^{(s_m - k)/3} \frac{s_m - k + 2 - 3j}{2} \approx \frac{s_m (s_m + 4)(2 s_m + 1)}{288} \qquad (20)$$
So with $s_m = 512$, there would be around 940K strategies. In our implementation, this number of strategies would be too high to create the problem matrix. One alternative simplification would be to not use the exact lengths of sequences but to consider only even numbers for the sequence length and round up. That way, arbitrary sequence lengths could also be handled, and the limiting factor would be the complexity of the attention layer in BERT, which does not scale well with the sequence length.
G.2 Complexity Analysis of shortest-pack-first histogram-packing
The complexity calculation of SPFHP is straightforward. We go over the whole data once for the histogram sorting. Next, we iterate over each of the $s_m$ bins in the histogram. Lastly, we iterate over all strategies that were encountered so far. It can be proven that, at each iteration, the number of strategies can increase by at most one: in each step, we potentially add a sequence to existing strategies, but a new strategy is opened up only in the final step, when we either create a new strategy or split one of the existing strategies into two. Hence, the number of strategies is bounded by $s_m$, and the overall time complexity is bounded by $O(n + s_m^2)$. The space complexity is $O(s_m^2)$, since we need to store up to $s_m$ strategies with at most $s_m$ counts for different sequence lengths.
H Performance Comparison to GREEDY Packing in T5
T5 [24] is normally trained on the C4 dataset. However, to give an idea of the difference in packing efficiency and acceleration compared to our newly introduced algorithm, we can analyse the performance of greedy aggregation of samples on our given Wikipedia dataset.
We take the histogram and cast it back to a list of different sequence lengths, since this is all that matters for analysing packing behaviour. Next, we randomly shuffle the dataset and run the greedy aggregation algorithm multiple times to account for randomness. We iterate sequence by sequence and combine sequences provided the maximum sequence length of 512 is not yet exceeded. If it is exceeded, the packed sequence is considered finished and a new sequence is started.
The greedy packing algorithm itself takes a bit more than 10 seconds, since we are operating on single sequences and not on histogram counts. The efficiency of this approach is 78.24% (standard deviation 0.005), compared to our 99.75% for NNLSHP. The respective acceleration would be around 1.566x, compared to our 2x. With the respective separator tokens, the performance decreases by around 0.13% when one separator token and by around 0.27% when two separator tokens are required between two sequences. Following the brief documentation at tensor2tensor [link], two separator tokens would be expected in the T5 processing.
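A sketch of this greedy (first come, first concatenate) aggregation and its efficiency measurement (our own; the separator-token handling is our interpretation of the description above):

import numpy as np

def greedy_pack_efficiency(sequence_lengths, max_sequence_length=512,
                           separator_tokens=0, seed=0):
    """Concatenate shuffled sequences until the next one would exceed
    the maximum length; return the fraction of real tokens per slot."""
    rng = np.random.default_rng(seed)
    lengths = rng.permutation(np.asarray(sequence_lengths))
    packs, current = 0, 0
    for length in lengths:
        extra = length + (separator_tokens if current > 0 else 0)
        if current + extra > max_sequence_length:
            packs += 1
            current = length
        else:
            current += extra
    packs += int(current > 0)
    return float(lengths.sum()) / (packs * max_sequence_length)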
In addition to the packing preprocessing, our paper proposes, rather than using separator tokens, to modify the masking of the attention matrix during training. The RoBERTa paper shows that avoiding contamination between sequences from different documents can consistently improve downstream F1 scores by 0.35%.
I Impact of NSP loss
When running packed BERT base without the NSP loss but keeping everything else the same, we observed that downstream performance on SQuAD dropped by 1.31% in F1 and 1.15% in EM.
For the packing in approaches like RoBERTa or T5, it is crucial that there is no NSP loss, because the NSP loss would otherwise prevent putting together arbitrary sequences; in contrast, our approach can handle multiple sequences from different documents without cross-contamination. Liu et al. [16] argue that NSP can be omitted because "removing the NSP loss matches or slightly improves downstream task performance". In their experiments, they compare the normal BERT setup with NSP ("SEGMENT-PAIR") to the "DOC-SENTENCES" approach, where there is no NSP and data in one sequence comes only from one document. For the "SEGMENT-PAIR" approach, the paper does not address how many padding tokens are still present. Assuming it is around 40%, their correction of the batch sizes for each step would result in a significant increase in training steps for the "DOC-SENTENCES" approach. It is well known that BERT performance increases with longer pretraining time. Our results indicate that the NSP loss might still be relevant, depending on the dataset generation process. With our approach, we can get the acceleration benefits of T5 and RoBERTa while keeping the predictive performance by avoiding cross-contamination.
J Wikipedia with Longer Sequence Length
The histogram raw data for Wikipedia with different maximum sequence lengths is provided in Listing 6 and visualized in Figure 9. We can see that with increasing maximum sequence length, long sequences become more and more rare and the resulting benefits from packing drastically increase. Keeping in mind that the BERT dataset generation process decreases the size of at most 50% of the sequences, we can infer that a different dataset generator that truncates any short sequence would result in a significant loss of data (> 25% for length 512). Due to the length distribution, it is no longer sufficient to concatenate only 3 sequences to obtain perfect packing for maximum sequence lengths of 1024 or 2048. Instead, around 6 and 12 sequences are required, respectively. This cannot be solved by NNLSHP anymore due to search space complexity, but requires an online heuristic like SPFHP or the slightly better LPFHP, introduced in Section R, which is based on Best-Fit and on splitting counts in the histogram, in contrast to the rather simple First-Fit descending. Figure 10 shows the achieved speed-ups with LPFHP depending on the maximum number of allowed sequences.

K Packing SQuAD 1.1

We tokenized SQuAD [25] for BERT [6] with maximum sequence length 384 and visualized the histogram over the sequence length (Figure 11). The distribution looks similar to the Wikipedia dataset but is slightly less skewed. However, the maximum sequence length only has an occurrence of 1.2% compared to 23.5%. Hence, the theoretical un-padding speed-up is 2.232. In Table 5, we can see that SPFHP does not concatenate more than 3 samples and obtains 97.54% efficiency, in contrast to a maximally used depth of 16 with 99.60% efficiency on Wikipedia, because of the less skewed distribution. Note that we have less than 90 000 samples. Hence, NNLSHP is less efficient because the rounding in the residuals has a much larger impact compared to the more than 16 million sequences in the Wikipedia dataset.
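The theoretical un-padding speed-up quoted here can be computed directly from a length histogram; a small sketch (our own) of the same formula that also appears in Listing 2:

import numpy as np

def theoretical_speedup(histogram, max_sequence_length):
    """Upper bound on the packing speed-up: total slots divided by
    real (non-padding) tokens."""
    lengths = np.arange(1, max_sequence_length + 1)
    real_tokens = float((histogram * lengths).sum())
    total_slots = float(histogram.sum() * max_sequence_length)
    return total_slots / real_tokens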
L Packing GLUE
To explore a variety of datasets and emphasize that skewed distributions are common, we explored all datasets in the GLUE benchmark [31,30] that come with training data. We loaded the datasets using the HuggingFace dataset loading API [47]. For preprocessing, we followed the implementation in the HuggingFace transformers repository [32] and extracted the respective data-processing snippets to obtain tokenized data with a maximum sequence length of 128. The histogram of the sequence lengths for each of the included datasets is displayed in Figure 12, and the packing results are given in Table 6. Each dataset benefits from packing. The lower the mean, the higher the packing factors that can be reached, albeit with a higher packing depth.
M Packing Audio Data (LibriSpeech)
In this section, we show that packing can benefit domains other than NLP, such as ASR. We use the LibriSpeech dataset [23] and preprocess it as described in a reference implementation. The resulting histograms for the subsampled audio sample lengths and the respective text labels are provided in Figure 13. It can be seen that the audio sequence lengths are dominated by long sequences, with 38% padding required to meet the maximum sequence length of 330. Thus, the theoretical optimal speed-up of 1.6x cannot be reached. However, 80% efficiency is possible with any of the proposed packing algorithms, for a 1.3x speed-up. This can already be achieved by combining up to 2 sequences. To achieve almost perfect packing efficiency, a sequence length of around 457 and concatenating up to 8 sequences would be required. Due to the quadratically increasing computational load that usually comes with a longer sequence length, increasing the sequence length is not practical.
If the text data is processed and packed independently of the audio, 99.99% efficiency could be achieved, with a speed-up of 2.24x.
N Packing Paper Abstracts (PubMed)
This section analyses the length of paper abstracts to give an intuition of how different documents can be in length. Figure 14 depicts the length in characters of abstracts extracted from PubMed. If these abstracts were directly used as sequences, a maximum character length of 1000 could result in a 1.9x speed-up from packing. The potential speed-ups for lengths 2000, 3000, and 4000 would be 2x, 3x, and 4x, respectively. Note that document clean-up procedures would usually eliminate documents that are too short or too long for data-sanitizing purposes. Note also that in the processing for BlueBERT [45], paper titles and abstracts are separated into sequences, tokenized, and then combined with the BERT sequence-combination approach for a maximum sequence length of 128 tokens, which results in a different distribution.
O MLPerf™ phase 2 learning curves
This section provides further learning curves related to Section 4.2.

P Full pretraining of BERT base and large learning curves

This section provides further learning curves related to Section 4.3.

Q Note on changing the sequence length for optimal packing

An interesting aspect of packing is that the maximum sequence length for packing can be larger than the maximum sequence length in the underlying dataset that gets packed.
For the QM9 dataset, this means that by setting the maximum sequence length to 36 instead of 27 an optimal 1.6x speed-up can be easily achieved.
Note that the choice of maximum sequence length depends on the underlying machine learning algorithm. Due to the squared computational and memory complexity of self-attention in BERT and other transformers, the maximum sequence length is usually kept as small as possible for these models. So an increase for packing alone is not practical.
For algorithms with linear complexity, for example Graph Neural Networks as implemented in PyG, a larger maximum sequence length can be chosen to ensure that optimal packing is always possible.
R Fine-tuned longest-pack-first histogram-packing
In the main paper, we focused on SPFHP due to its simplicity. In this section, we analyse the effect of applying the "Best-Fit" algorithm [12], where the longest pack that still fits the sequence is chosen instead of the shortest one; the latter would be complemented by other sequences but would probably not result in an optimal packing. The implementation of this approach is much more complex than the SPFHP implementation. The code is provided in Listing 8 and the results in Table 7.

Table 7: Performance results of longest-pack-first histogram-packing for Wikipedia BERT pre-training with maximum sequence length 512.
We can see that longest-pack-first histogram-packing (LPFHP) uses a much higher packing depth when no limit is set (29 instead of 16). Splitting the histogram counts results in slightly higher numbers of used strategies compared to SPFHP, where the number of used strategies is limited by the maximum sequence length. The best efficiency of LPFHP is 99.949% with a packing factor of 2, which is slightly higher than the 99.75% (1.996 packing factor) for NNLSHP and the 99.6% for SPFHP (1.993 packing factor). All algorithms are very close to the upper limit.
Note that for NNLSHP, we only fill up the unpacked samples with padding. Applying Best-Fit to the remainder, similar results can be expected. Although the benefits of the improved algorithm are negligible, we share the concept and code below in case packing is applied to other data with a different distribution that would benefit more from it, or for applications where only perfectly packed sequences without padding are of interest.
S Extended NNLS with padding token weighting
In Section F.4.4, we defined the residual as
$$r = b - A \cdot \operatorname{round}(x) \qquad (21)$$
and discovered that a positive residual corresponds to sequences that we did not pack at all, which should be avoided, while negative residuals correspond to padding, which should be minimized. Due to this discrepancy, we decided to set small weights for very short sequences (which do not occur in the data). However, it was not possible to directly optimize the amount of padding. A negative residual component for length $i$, $r_i$, results in $|r_i| \cdot i$ padding tokens, whereas a positive residual actually results in $r_i \cdot (512 - i)$ padding tokens, since each leftover sequence of length $i$ becomes its own pack. This cannot be addressed by our weighting approach in
$$\min_{x \in \mathbb{R}^m} \| (wA) \cdot x - (wb) \|^2 \quad \text{s.t.} \quad x \ge 0 \qquad (22)$$
Working within the NNLS approach, we can strictly enforce a non-positive residual $r$ (before rounding to integer). To that end, we define a new auxiliary variable $\overline{r} \approx -(b - Ax)$, which is the negative of the residual $r$. This allows us to reformulate the objective $r \le 0$ as the non-negative constraint $\overline{r} \ge 0$:

$$\min_{x \in \mathbb{R}^m,\, \overline{r}} \| (wA) \cdot x - (wb) \|^2 + \| \overline{w} \cdot A \cdot x - \overline{w} \cdot b - \overline{w} \cdot \overline{r} \|^2 \quad \text{s.t.} \quad x \ge 0,\ \overline{r} \ge 0 \qquad (23)$$
This will enforce $\overline{r} = Ax - b \ge 0$ due to the large weight $\overline{w} := 10^6$ and the absence of upper limits on $\overline{r}$. Now, we can set $w_i := i$ to optimize for the number of padding tokens. Due to the use of the squared error, we would, however, optimize the squared sum of padding tokens instead of the preferred plain sum of padding tokens. To accomplish the latter, we would have to replace the L2-norm problem by an L1-norm problem, which would be too complex to solve. Note that due to rounding, the unwanted positive residuals $r$ ($\overline{r} < 0$) might still occur. This could be avoided by rounding $x$ up instead of using normal rounding of $x$. To put the new formulation into a solver, we replace
$$b \;\text{by}\; \begin{bmatrix} b \\ b \end{bmatrix}, \quad x \;\text{by}\; \begin{bmatrix} x \\ \overline{r} \end{bmatrix}, \quad w \;\text{by}\; \begin{bmatrix} w \\ \overline{w} \end{bmatrix}, \quad \text{and}\; A \;\text{by}\; \begin{bmatrix} A & 0_m \\ A & -D_m \end{bmatrix}, \qquad (24)$$
where $0_m$ is an $m \times m$ zero matrix, with $m$ being the maximum sequence length (512), and $D_m$ is a unit (identity) matrix of the same dimensions as $0_m$. Since we are already close to the optimum, especially on the Wikipedia dataset, the results are only slightly better. The processing time, however, increases from 30 to 415 seconds, without considering the increased time for constructing the processing matrix. Since the slightly improved algorithm might nevertheless be relevant for other applications, we share it in Listing 9.
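A sketch (our own) of assembling this extended system so that a standard NNLS solver can be used; the rows are pre-multiplied by their weights, and the second block enforces the auxiliary variable:

import numpy as np
from scipy.optimize import nnls

def extended_nnls(A, b, w, w_bar=1e6):
    """Solve the extended formulation (23)-(24). Setting w to the vector
    of sequence lengths (w_i := i) penalizes padding tokens directly."""
    m = A.shape[0]
    top = w[:, None] * np.hstack([A, np.zeros((m, m))])
    bottom = w_bar * np.hstack([A, -np.eye(m)])
    A_ext = np.vstack([top, bottom])
    b_ext = np.concatenate([w * b, w_bar * b])
    solution, _ = nnls(A_ext, b_ext)
    x, r_bar = solution[:A.shape[1]], solution[A.shape[1]:]
    return x, r_bar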
T Implementation Challenges and Tricks
Whereas the model changes are described in Section 3.2, implementing them in the most efficient way can require a bit more effort. This section points out a few tricks that we used in our code.
T.1 Packing Algorithms
Whereas the packing algorithm implementations might look trivial, they can become quite intricate. For example, when splitting and distributing bins, e.g., combining 2 sequences of length 256 into a sequence of length 512, the number of categories can drastically increase and with it the search space. Hence, it is valuable to test each adjustment made while changing the packing algorithms. If a solution is not provided right away, the algorithm has probably switched to a far less efficient complexity category.
T.2 Positional Encoding
This approach was implemented as described in Section 3.2.1, by providing the index of each token with the data. Note that for any other part of BERT, the exact position does not matter. This allows us to rearrange the data to our advantage: we can start with the up to 72 mask tokens and have an additional mask that tells us which tokens are the mask tokens, a list that provides their true labels, and, via the positional encoding, their positions in the sequence.
The NSP tokens get moved from the beginnings of their sequences to the end.
T.3 Attention
For the attention mask, we realised that creating it on the host can incur a major data-transfer cost due to its size. Instead, one can create the mask on the accelerator. Therefore, we implemented a custom operation using C++ and PopArt: https://github.com/graphcore/examples/blob/master/nlp/bert/popart/custom_ops/attention_mask.cpp.
Note that in most cases, the attention mask is not multiplied but added, for efficiency. Hence, the "softmask_mask" is used in our implementation instead of the multiplication mask from Figure 2.
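For illustration, an additive mask can be built as follows (a generic NumPy sketch, not the PopArt custom op; the large negative constant is our assumption): positions belonging to other sequences in the pack receive a large negative value that is added to the attention logits before the softmax, which has the same effect as a multiplicative zero-one mask.

import numpy as np

def additive_attention_mask(sequence_ids, big_negative=-1000.0):
    """Build an additive mask from the per-token sequence ids of a pack.

    sequence_ids: 1-D array such as [1, 1, 1, 2, 2, 3], marking which
    packed sequence each token belongs to. Returns a [seq, seq] matrix
    that is 0 within a sequence and big_negative across sequences."""
    same_sequence = sequence_ids[:, None] == sequence_ids[None, :]
    return np.where(same_sequence, 0.0, big_negative)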
T.4 Avoiding loss unpacking
Note that the MLM loss is applied at the token level and does not need any loss unpacking. However, for NSP, the NSP tokens would theoretically be distributed within a sequence. During dataset creation, however, we rearranged the tokens and moved all NSP tokens to the end. Due to our packing strategy, we also know that those tokens are limited to a maximum number of 3. Thus, we can apply the NSP head to the 3 potential positions and just provide a mask to filter out the relevant NSP tokens. This way, we need much less memory and compute for the unpacking of the NSP loss.
T.5 Testing
The ultimate approach to testing the correctness of the implementation is to check whether packed and unpacked sequences provide the same values and gradients. Due to large numeric variations, we implemented this test in FP32 for our PyTorch HuggingFace implementation. This way, we could prove that, with the correct adjustments, unpacked sequences processed with vanilla BERT result in exactly the same losses and weight updates as the packed sequences processed with the modified packed BERT version.
T.6 Loss Balancing
This section addresses a challenge, called loss imbalance, that is usually faced with small batch sizes and manifests differently when running packed compared to vanilla BERT. It also translates to other scenarios where losses get averaged over data with large amounts and variance of underlying padding, or with variance in the number of underlying sequences/segments/components in a batch. This is highly relevant since model sizes keep increasing: already now, the micro-batch size when running BERT large on the IPU is 3, and on the GPU for large-scale training a batch size of 3 is used on a single GPU to limit the total batch size to 12960 aggregated over 4320 GPUs. The main question is: how much influence/weight in a gradient update does a single MLM token and a single NSP token get, and how does this change with batch size, packing, or other factors that would be expected to be invariants? Let us look into two extreme cases: batch size 1, and a batch being the full dataset. Note that in the BERT model, we first take the mean over all MLM tokens and over all NSP tokens and then add the two losses up.
For a batch size of 1, there are two extreme cases in the vanilla BERT setting. In case 1, we have 1 MLM token and 1 NSP token, so each token gets a weight of 1 in the final sum. In case 2, we have 76 MLM tokens and 1 NSP token, so each MLM token gets a weight of 1/76 in the overall loss/gradient/weight update, and the NSP token again gets a weight of 1. This means the MLM tokens of short sequences get a weight of 1, decreasing linearly down to 1/76 for a maximum-length sequence. Thus, short sequences get more influence in the weight update, and the ratio of weights compared to NSP changes too, even though it is unclear how this ratio influences the final result.
Let us assume perfect packing efficiency for packed BERT. Hence, we have 76 MLM tokens and a weight of 1/76 for the MLM tokens in every case, independent of the batch size. However, with a maximum packing depth of 3, the number of NSP tokens can range between 1 and 3, and thus the weights can be 1, 1/2, or 1/3. This means that the NSP loss for a single sequence of length 512 gets 3 times more weight than the NSP loss of each sequence when packing 3 sequences, for example of length 170, together. Again, the ratio between NSP and MLM changes too.
Now let us look at the other extreme case of a batch being the full dataset of size L (which behaves similarly to the common case of a large batch size between 12K and 1000K). Again, for vanilla BERT, the NSP weight is 1/L in any case. Assuming 50% padding, which as previously shown can be common, and again a maximum of 76 MLM tokens per sequence, we get a total of 76 · 0.5 · L MLM tokens with the respective reciprocal value as the weight. There is no variation; 76 · 0.5 is the average number of MLM tokens per sample.
Assuming a packing factor of 2, the respective maximum batch size can only be L/2. This fits our scheme of reducing the batch size to avoid further adjustments of hyperparameters. For packed BERT, the number of MLM tokens per pack is doubled compared to the average case in vanilla BERT, and thus the weight is 1/(76 · 1.0 · (L/2)), assuming a packing efficiency of 100%. The number of NSP tokens is 2 · (L/2), and the respective weight is 1/L. Again, there is no variation, and the weights between packed and vanilla BERT are identical. This seems more like an ideal case that is less dependent on how samples are put together. Also, it ensures equivalence between the packed and vanilla setups.
Getting weights calculated correctly in a distributed setup (data-parallel processing as well as pipelining), where each replica has a small batch size down to 1, is challenging. Each replica would need separate gradients for the NSP and MLM losses, then aggregate a weighted sum of those separate gradients, and only afterwards add up the gradients before the optimizer update. This is infeasible because of challenges in framework implementations, a large increase in memory requirements, roughly doubling the computational workload of backpropagation, and more than doubling the communication overhead for weights.
We propose a simplified approach that generalizes the weights we observed for large batches to the weights in tiny batches. Instead of averaging using the realized number of tokens, we propose using the expected number of tokens. Technically, that means the mean aggregation gets replaced by a sum aggregation multiplied by a constant weight. Let $b$ be our batch size, $e$ the token efficiency, $p$ the packing factor, and $m$ the maximum number of MLM tokens in a sample. For vanilla BERT with sequence length 512, we have something like $e = 0.5$, $p = 1$, $m = 76$, and for packed BERT we have $e = 1$, $p = 2$, $m = 76$. Let $l_M^{i,k}$, $i \in I(k)$, $k \in \{1, \ldots, b\}$ be the active MLM losses and $l_N^{j,k}$, $j \in J(k)$, $k \in \{1, \ldots, b\}$ be the active NSP losses in a sequence. Then we balance the MLM loss calculation as:
$$\operatorname{mean}(l_M) = \frac{1}{e \cdot m \cdot b} \sum_{k=1}^{b} \sum_{i \in I(k)} l_M^{i,k},$$

and analogously the NSP loss is balanced with the expected number of NSP tokens, $p \cdot b$:

$$\operatorname{mean}(l_N) = \frac{1}{p \cdot b} \sum_{k=1}^{b} \sum_{j \in J(k)} l_N^{j,k}.$$
Note that when logging the loss, it should be averaged over multiple batches to get a representative result that is comparable to values previously obtained. This approach is straightforward to implement in any framework, even though some fine-tuning might be required when working with low precision.
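A sketch (our own, in the spirit of Listing 5) of this balancing in TensorFlow; the divisors are the expected token counts rather than the realized ones:

import tensorflow as tf

def balanced_losses(mlm_loss_per_token, nsp_loss_per_token,
                    batch_size, token_efficiency, packing_factor,
                    max_mlm_tokens):
    """Replace the mean aggregation by a sum aggregation divided by the
    expected (constant) number of MLM and NSP tokens in the batch."""
    expected_mlm_tokens = token_efficiency * max_mlm_tokens * batch_size
    expected_nsp_tokens = packing_factor * batch_size
    mlm_loss = tf.reduce_sum(mlm_loss_per_token) / expected_mlm_tokens
    nsp_loss = tf.reduce_sum(nsp_loss_per_token) / expected_nsp_tokens
    return mlm_loss + nsp_loss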
In our experiments, loss balancing only reduced the noise in the NSP loss. Other than that, it had no influence on the loss curves.

U Packing source code

Listing 2: Non-negative least squares histogram-packing (final part of the listing).

    # (tail of the packing function; the construction of A and the
    # NNLS solve with rounding precede this excerpt)
    # After rounding, parts of the residual may now be strictly < 0
    residual = histogram - A @ strategy_repeat_count
    # Add padding based on deficit (negative portion of the residual)
    padding = np.where(residual < 0, -residual, 0)
    # Calculate some basic statistics
    sequence_lengths = np.arange(1, max_sequence_length + 1)
    old_number_of_samples = histogram.sum()
    new_number_of_samples = int(strategy_repeat_count.sum())
    speedup_upper_bound = 1.0 / (1 - (histogram * (
        1 - sequence_lengths / max_sequence_length)).sum() / old_number_of_samples)
    num_padding_tokens_packed = (sequence_lengths * padding).sum()
    efficiency = 1 - num_padding_tokens_packed / (
        new_number_of_samples * max_sequence_length)
    print(f"Packing efficiency (fraction of real tokens): {efficiency:3.4f}\n",
          f"Speed-up theoretical limit: {speedup_upper_bound:3.4f}\n",
          f"Achieved speed-up over un-packed dataset: "
          f"{old_number_of_samples/new_number_of_samples:3.5f}")
    return strategy_set, strategy_repeat_count

Listing 3: Shortest-pack-first histogram-packing (excerpt).

from collections import defaultdict
import numpy as np

def add_pack(pack, count, tmp, final, limit, offset):
    """Filter out packs that reached maximum length or number of sequences."""
    if len(pack) == limit or offset == 0:
        final[offset].append((count, pack))
    else:
        tmp[offset].append((count, pack))

Listing 4: Evaluation function of shortest-pack-first histogram-packing.

"""Max depth analysis of shortest-pack-first histogram-packing."""
from collections import defaultdict
import tabulate
import time
import numpy as np

def evaluate_spfhp(histogram, max_sequence_length):
    """Evaluate shortest-pack-first histogram-packing algorithm."""
    stats_data = [["pack. depth", "# strat. used", "# packs", "# tokens",
                   "# padding tok.", "efficiency (%)", "pack.factor", "time"]]
    for max_sequences_per_pack in [1, 2, 3, 4, 8, 16, "max"]:
        start = time.time()
        strategy_set, strategy_repeat_count = pack_using_spfhp(
            histogram, max_sequence_length, max_sequences_per_pack)
        duration = time.time() - start
        # Performance evaluation of packing approach
        n_strategies = int(len(strategy_set))
        packs = int(sum(strategy_repeat_count))
        sequences = sum([count * len(pack) for count, pack in
                         zip(strategy_repeat_count, strategy_set)])
        total_tokens = int(max_sequence_length * packs)
        empty_tokens = int(sum([
            count * (max_sequence_length - sum(pack)) for count, pack in
            zip(strategy_repeat_count, strategy_set)]))
        token_efficiency = 100 - empty_tokens / total_tokens * 100
        if max_sequences_per_pack == "max":
            m_length = max([len(pack) for pack in strategy_set])
            max_sequences_per_pack = "max ({})".format(m_length)
        stats_data.append([
            max_sequences_per_pack, n_strategies, packs, total_tokens,
            empty_tokens, token_efficiency, sequences / packs, duration])
    print(tabulate.tabulate(stats_data, headers="firstrow", floatfmt=".3f"))

Listing 5: Loss calculation.

# The number of sequences in each batch may vary
sequences_in_batch = tf.reduce_sum(tf.reduce_max(masked_lm_weight, -1))
sequences_in_batch = tf.cast(sequences_in_batch, tf.float32)
# Create the 0/1 mask that will be used to un-pack sequences
masked_lm_weight = tf.reshape(masked_lm_weight, [B, 1, -1])
sequence_selection = tf.reshape(
    tf.range(1, max_sequences_per_pack + 1), [1, -1, 1])
sequence_selection = tf.cast(masked_lm_weight == sequence_selection, tf.float32)
# Apply the mask to un-pack the loss per sequence
nll_per_token = tf.reshape(nll_per_token, [B, 1, -1])
nll_per_sequence = sequence_selection * nll_per_token
# Normalize the per-sequence loss by the number of mlm-tokens in the
# sequence (as is standard)
attempted = tf.reduce_sum(sequence_selection, -1, keepdims=True)
# prevent NaNs when dividing by attempted
attempted = attempted + tf.cast(attempted == 0, tf.float32)
nll_per_sequence = nll_per_sequence / attempted
# Average per-batch loss (so contributions from different batches are comparable)
lm_loss = tf.reduce_sum(nll_per_sequence) / sequences_in_batch

Listing 8: Longest-pack-first histogram-packing (excerpt).

from collections import defaultdict
import numpy as np
import time

def add_pack(pack, count, tmp, final, limit, offset, max_sequence_length=512):
    """Filter out packs that reached maximum length or number of components."""
    # sanity checks
    assert max_sequence_length - sum(pack) == offset, "Incorrect offset."
    assert offset >= 0, "Too small offset."
    assert offset < max_sequence_length, "Too large offset."
    if len(pack) == limit or offset == 0:
        final[offset].append((count, pack))
    else:
        tmp[offset].append((count, pack))
Figure 1: Sequence length distributions for different datasets. The three graphics at the top left show Wikipedia BERT pre-training dataset sequence length histograms (token count excluding padding) for different maximum sequence lengths, based on the Wikipedia article dump from October 1st 2020. The theoretical speed-up relates to not using any padding tokens and not having any overhead from processing the different lengths. Top right: GLUE datasets. Bottom, from left to right: SQuAD 1.1, LibriSpeech text labels, LibriSpeech audio token sequences, and QM9 molecules of a graph in a sequence.

Figure 2: Attention mask code [left], respective zero-one mask [middle], and vectorized unpacking of the sequence loss [right]. White rectangles correspond to padding.

Figure 3: Comparison of learning curves for packed and unpacked processing, where all experiments converged to the target accuracy within the same number of training samples (3 million). [left] Same effective batch size (ebs is batch size times packing factor); [middle] different heuristic adjustments of the hyperparameters (batch size 1500 for all runs, such that the ebs for packed runs is 1500 * 2); [right] realized speed-up from packing (in excess of the desired 2x). Further learning curves are provided in Section O.

Figure 4: Comparison of learning curves with and without mask or positional-embedding adjustment in our packed BERT approach. The grey accuracy baseline to reach is 72.1%.

Figure 5: Comparison of the theoretical speed-up as the number of accelerators is increased.

[22] OTT, M., EDUNOV, S., BAEVSKI, A., FAN, A., GROSS, S., NG, N., GRANGIER, D., AND AULI, M. fairseq: A fast, extensible toolkit for sequence modeling. In Proceedings of NAACL-HLT 2019: Demonstrations (2019).

[23] PANAYOTOV, V., CHEN, G., POVEY, D., AND KHUDANPUR, S. Librispeech: An ASR corpus based on public domain audio books. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on (2015), IEEE, pp. 5206-5210.

[24] RAFFEL, C., SHAZEER, N., ROBERTS, A., LEE, K., NARANG, S., MATENA, M., ZHOU, Y., LI, W., AND LIU, P. J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21 (Oct 2019).

Figure 6: Left: speed-up from un-padding on 8 GPUs closely resembles a Gumbel distribution. Right: statistical estimate of the speed-up distribution on a 1 GPU system running un-padding.
Figure 7: Visualization of the residual of the NNLS packing problem. In total, there are n = 16'279'552 sequences in the Wikipedia pre-training dataset. After the non-negative least squares packing (and rounding to the integer solution), 56'799 sequences are left un-packed (about 0.352%). The residuals on sequence lengths 1 to 8 are [−4620, −4553, −4612, −4614, −3723, −3936, −3628, −3970]. These negative residuals imply that we need to add this many sequences of the corresponding sequence lengths to realize the mixture of packing strategies. In total, the first iteration introduces 7.94·10^6 tokens of padding. In contrast, large sequence lengths have a positive residual (a surplus of unused sequences). For sequence lengths 504 to 512 the values are [3628, 3936, 3724, 4613, 4612, 4553, 4619, 0]. Note that sequence length 512 has a residual of 0, since such sequences do not need packing. Intermediate sequence lengths typically have non-zero (but much smaller) residuals.
Figure 8: Visualization of the weighted residual of the NNLS packing problem.

F.5 Discussion of residual weight choice
For a packing depth of one, there is only the strategy $[s_m]$. For a packing depth of two, we add the strategies $[1, s_m - 1], \ldots, [\lceil s_m/2 \rceil, \lfloor s_m/2 \rfloor]$, which results in an additional $\lfloor s_m/2 \rfloor$ potential strategies. Following the dynamic programming approach, the number of possible additional strategies of depth three can be calculated with the same recursion over the remaining gap.
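As an illustration, the following minimal sketch mirrors the recursive get_packing_strategies routine from the source code appendix, run for a toy maximum length of 8 instead of the paper's 512:

from functools import lru_cache

@lru_cache(maxsize=None)
def strategies(start, min_inc, target, depth):
    # enumerate sorted length lists that fill `target - start` exactly
    gap = target - start
    if depth == 1:
        return [[gap]] if gap >= min_inc else []
    out = []
    for new in range(min_inc, gap + 1):
        if gap - new == 0:
            out.append([new])
        else:
            out.extend([new] + o for o in strategies(start + new, new, target, depth - 1))
    return out

print(len(strategies(0, 1, 8, 1)))  # 1  -> just [8]
print(len(strategies(0, 1, 8, 2)))  # 5  -> adds [1,7], [2,6], [3,5], [4,4]
print(len(strategies(0, 1, 8, 3)))  # 10 -> adds the five three-part splits of 8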
Figure 9: Sequence length distributions for different maximum sequence lengths in the Wikipedia BERT pre-training dataset and the corresponding theoretical speed-up.
Figure 10: Speed-ups achieved by LPFHP for different maximum sequence lengths and maximum numbers of packed sequences.
Figure 11: SQuAD 1.1 BERT pre-training dataset sequence length histogram for a maximum sequence length of 384.
Figure 12: GLUE dataset sequence length histograms for a maximum sequence length of 128.
Figure 13: LibriSpeech sequence length histograms of preprocessed audio data [top] as well as target text data [bottom].
Figure 14: Abstract length distribution in PubMed.
Figure 15: Comparison of learning curves for packed and unpacked processing with reduced batch size for the packed approach.
Figure 16: Comparison of learning curves for packed and unpacked processing with heuristics applied.
Figure 17: Comparison of learning curves for packed and unpacked processing in the optimized setup.

P Full pretraining of BERT base and large learning curves

This section provides further learning curves related to Section 4.

Figure 18: Comparison of learning curves for BERT base phase 1 (sequence length 128) with packed and unpacked processing.
Figure 19: Comparison of learning curves for BERT base phase 2 (sequence length 384) with packed and unpacked processing.
Figure 20: Comparison of learning curves for BERT large phase 1 (sequence length 128) with packed and unpacked processing.
Figure 21: Comparison of learning curves for BERT large phase 2 (sequence length 384) with packed and unpacked processing.
Table 1: Key performance results of proposed packing algorithms (SPFHP and NNLSHP) on IPU.

pack. depth | packing algorithm | EFF (%) | p    | OH (%) | realized speed-up
1           | NONE              | 50.0    | 1.00 | 0.000  | 1.000
1           | SORT              | 99.9    | 2.00 | 100    | 1.000
≈10         | GREEDY            | ≈78     | ≈1.6 | ≈4.48  | ≈1.5
2           | SPFHP             | 80.5    | 1.61 | 4.283  | 1.544
3           | SPFHP             | 89.4    | 1.79 | 4.287  | 1.716
3           | NNLSHP            | 99.7    | 2.00 | 4.287  | 1.913
4           | SPFHP             | 93.9    | 1.88 | 4.294  | 1.803
8           | SPFHP             | 98.9    | 1.98 | 4.481  | 1.895
max         | SPFHP             | 99.6    | 1.99 | 4.477  | 1.905
Table 2: Measured speed-ups in BERT pretraining with packing.

Model size | Sequence length | Packing factor | Realized speed-up
base       | 128             | 1.17           | 1.15
base       | 384             | 1.70           | 1.68
large      | 128             | 1.17           | 1.15
large      | 384             | 1.70           | 1.69
Table 3: SQuAD 1.1 scores after BERT pretraining with packing.

Model size | Configuration | F1    | Exact match
base       | [6]           | 88.5  | 80.8
base       | Packed        | 88.32 | 81.03
large      | [6]           | 90.9  | 84.1
large      | Packed        | 90.65 | 84.12
Table of Contents

1 Introduction
2 Sequence length distributions
3 Methods
  3.1 Packing algorithms
  3.2 packedBERT: model changes
  3.3 Adjust hyperparameters
4 Experiments
  4.1 Bin packing algorithm comparison
  4.2 MLPerf™ phase 2 pretraining setup: learning curves and hyperparameter adjustment
  4.3 Full pretraining and SQuAD finetuning
  4.4 Scaling analysis: Impact of accelerators count
5 Conclusion
A Broader impact
B Reproducibility Statement
C Related work
D Theorem on LAMB hyperparameter correction heuristic
E Un-padding scaling estimate
F Technical background on packing
  F.1 Canonical packing problem
  F.2 Approximate bin packing problem
  F.3 Definitions
  F.4 Non-negative least squares histogram-packing
  F.5 Discussion of residual weight choice
G Complexity analysis of the proposed packing approaches
  G.1 Complexity Analysis of non-negative least-squares histogram-packing
  G.2 Complexity Analysis of shortest-pack-first histogram-packing
H Performance Comparison to GREEDY Packing in T5
I Impact of NSP loss
J Wikipedia with Longer Sequence Length
K Packing SQuAD 1.1
L Packing GLUE
M Packing Audio Data (LibriSpeech)
N Packing Paper Abstracts (PubMed)
O MLPerf™ phase 2 learning curves
P Full pretraining of BERT base and large learning curves
Q Note on changing the sequence length for optimal packing
R Fine-tuned longest-pack-first histogram-packing
S Extended NNLS with padding token weighting
T Implementation Challenges and Tricks
  T.1 Packing Algorithms
  T.2 Positional Encoding
  T.3 Attention
  T.4 Avoiding loss unpacking
  T.5 Testing
  T.6 Loss Balancing
U Packing source code
Listing all unique ways of packing up to a maximum packing depth can be achieved through dynamic programming. We only consider packing at most 3 sequences per pack. This is the smallest packing depth that can eliminate the need for most padding on the Wikipedia dataset. Increasing the depth to 4 increases the size of the packing problem drastically and yields no throughput benefit. With only two sequences, packing would not be as efficient, since the distribution of sequence lengths is not symmetric. We use dynamic programming to enumerate all feasible ways/strategies in which up to M sequences of length 1-512 can be packed into a bin of length 512. For example, a packing strategy may be [512] or [6, 506] or [95, 184, 233]. To avoid listing the same strategy multiple times, we enforce the sequence lengths within a pack to occur in sorted order; for example, [95, 184, 233] is equivalent to [184, 95, 233]. The remaining steps of the approach are described in Sections F.4.2 and F.4.3.

F.4.1 Enumerating packing strategies of fixed packing depth
Table 4: Example packing matrix for sequence length 8. Columns represent different kinds of packs. Rows represent the number of sequences in these packs with a certain length. The last column represents a pack with only a single sequence of length six.
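A plausible reconstruction of such a matrix (the table body itself is lost in this extraction): rows are sequence lengths 1 to 8 and columns are the ten strategies that sum exactly to 8 with at most three sequences. Note that the caption's pack containing only a single length-6 sequence would additionally require allowing padded packs, which this sketch does not do.

import numpy as np

strategy_set = [[8], [1, 7], [2, 6], [3, 5], [4, 4],
                [1, 1, 6], [1, 2, 5], [1, 3, 4], [2, 2, 4], [2, 3, 3]]
A = np.zeros((8, len(strategy_set)), dtype=np.int32)
for j, strategy in enumerate(strategy_set):
    for seq_len in strategy:
        A[seq_len - 1, j] += 1   # row = length, column = strategy
print(A)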
Table 5: Performance results of proposed packing algorithms for SQuAD 1.1 BERT pre-training.

packing depth | packing algorithm | # strategies used | # packs | # tokens | # padding tokens | efficiency (%) | packing factor
1             | none              | 348               | 88641   | 34038144 | 18788665         | 44.801         | 1.000
2             | SPFHP             | 348               | 45335   | 17408640 | 2159161          | 87.597         | 1.955
3             | NNLSHP            | 398               | 40808   | 15670272 | 420793           | 97.310         | 2.172
3/max         | SPFHP             | 344               | 40711   | 15633024 | 383545           | 97.547         | 2.177
Table 6: Performance results of proposed packing algorithms for the GLUE dataset. Only the baseline and the SPFHP packing results without limiting the packing depth are displayed.

data name | packing depth | # strategies used | # packs | # tokens | # padding tokens | efficiency (%) | packing factor
cola      | 1             | 34                | 8551    | 1094528  | 997669           | 8.849          | 1.000
cola      | 13/max        | 29                | 913     | 116864   | 20005            | 82.882         | 9.366
sst2      | 1             | 64                | 67349   | 8620672  | 7723633          | 10.406         | 1.000
sst2      | 15/max        | 64                | 7691    | 984448   | 87409            | 91.121         | 8.757
mrpc      | 1             | 77                | 3668    | 469504   | 274214           | 41.595         | 1.000
mrpc      | 4/max         | 74                | 1606    | 205568   | 10278            | 95.000         | 2.284
qqp       | 1             | 123               | 363846  | 46572288 | 35448844         | 23.884         | 1.000
qqp       | 5/max         | 123               | 97204   | 12442112 | 1318668          | 89.402         | 3.743
stsb      | 1             | 85                | 5749    | 735872   | 575993           | 21.726         | 1.000
stsb      | 6/max         | 83                | 1367    | 174976   | 15097            | 91.372         | 4.206
mnli      | 1             | 124               | 392702  | 50265856 | 34636487         | 31.093         | 1.000
mnli      | 8/max         | 124               | 123980  | 15869440 | 240071           | 98.487         | 3.167
rte       | 1             | 112               | 2490    | 318720   | 152980           | 52.002         | 1.000
rte       | 4/max         | 108               | 1330    | 170240   | 4500             | 97.357         | 1.872
wnli      | 1             | 72                | 635     | 81280    | 57741            | 28.960         | 1.000
wnli      | 6/max         | 63                | 192     | 24576    | 1037             | 95.780         | 3.307
Listing 2: Non-negative least squares histogram-packing

import time
import numpy as np
from scipy import optimize, stats
from functools import lru_cache

def get_packing_matrix(strategy_set, max_sequence_length):
    num_strategies = len(strategy_set)
    A = np.zeros((max_sequence_length, num_strategies), dtype=np.int32)
    for i, strategy in enumerate(strategy_set):
        for seq_len in strategy:
            A[seq_len - 1, i] += 1
    return A

@lru_cache(maxsize=None)
def get_packing_strategies(start_length, minimum_increment, target_length, depth):
    gap = target_length - start_length
    strategies = []
    # Complete the packing with exactly 1 number
    if depth == 1:
        if gap >= minimum_increment:
            strategies.append([gap])
    # Complete the sample in "depth" steps, recursively
    else:
        for new in range(minimum_increment, gap + 1):
            new_gap = target_length - start_length - new
            if new_gap == 0:
                strategies.append([new])
            else:
                options = get_packing_strategies(start_length + new, new, target_length, depth - 1)
                for option in options:
                    if len(option) > 0:
                        strategies.append([new] + option)
    return strategies

def pack_using_nnlshp(histogram, max_sequence_length, max_sequences_per_pack):
    # List all unique ways of packing to the desired maximum sequence length
    strategy_set = get_packing_strategies(0, 1, max_sequence_length, max_sequences_per_pack)
    print(f"Packing will involve {len(strategy_set)} unique packing strategies.")
    # Get the packing matrix corresponding to this list of packing strategies
    A = get_packing_matrix(strategy_set, max_sequence_length)
    # Weights that penalize the residual on short sequences less.
    penalization_cutoff = 8
    w0 = np.ones([max_sequence_length])
    w0[:penalization_cutoff] = 0.09
    # Solve the packing problem
    print(f"Sequences to pack: ", histogram.sum())
    start = time.time()
    strategy_repeat_count, rnorm = optimize.nnls(np.expand_dims(w0, -1) * A, w0 * histogram)
    print(f"Solving non-negative least squares took {time.time() - start:3.2f} seconds.")
    # Round the floating point solution to nearest integer
    strategy_repeat_count = np.rint(strategy_repeat_count).astype(np.int64)
    # Compute the residuals, shape: [max_sequence_length]
    residual = histogram - A @ strategy_repeat_count
    # Handle the left-over sequences, i.e. the positive part of the residual
    unpacked_seqlen = np.arange(1, max_sequence_length + 1)[residual > 0]
    for l in unpacked_seqlen:
        strategy = sorted([l, max_sequence_length - l])  # the depth 1 strategy
        strategy_index = strategy_set.index(strategy)
        strategy_repeat_count[strategy_index] += residual[l - 1]
    # Re-compute the residual with the updated strategy_repeat_count;
    # the positive part of the residual is now handled
    residual = histogram - A @ strategy_repeat_count
    # The remainder of this listing was truncated in the source; the summary
    # statistics below are reconstructed from the surviving fragment
    padding = np.where(residual < 0, -residual, 0)
    sequence_lengths = np.arange(1, max_sequence_length + 1)
    old_number_of_samples = histogram.sum()
    new_number_of_samples = int(strategy_repeat_count.sum())
    speedup_upper_bound = 1.0 / (1 - (histogram * (
        1 - sequence_lengths / max_sequence_length)).sum() / old_number_of_samples)
    num_padding_tokens_packed = (sequence_lengths * padding).sum()
    efficiency = 1 - num_padding_tokens_packed / (new_number_of_samples * max_sequence_length)
    print(f"Packing efficiency (fraction of real tokens): {efficiency:3.4f}\n",
          f"Speed-up theoretical limit: {speedup_upper_bound:3.4f}\n",
          f"Achieved speed-up over un-packed dataset: {old_number_of_samples/new_number_of_samples:3.5f}")
    return strategy_set, strategy_repeat_count
Listing: Shortest-pack-first histogram-packing

from collections import defaultdict
import numpy as np

# uses the add_pack helper defined earlier in this appendix

def pack_using_spfhp(histogram, max_sequence_length, max_sequences_per_pack):
    """Shortest-pack-first histogram-packing algorithm."""
    reversed_histogram = np.flip(histogram)
    # Initialize main strategy data dictionary.
    # The key indicates how many tokens are left for full length.
    # The value is a list of tuples, consisting of counts and respective packs.
    # A pack is a (sorted) list of sequence length values that get concatenated.
    tmp_strategies_per_length = defaultdict(list)
    strategies_per_length = defaultdict(list)
    # Index i indicates here, how much space is left, due to reversed histogram
    for i in range(max_sequence_length):
        n_sequences_to_bin = reversed_histogram[i]
        length_to_bin = max_sequence_length - i
        offset = i + 1  # largest possible offset
        while n_sequences_to_bin > 0:
            if (length_to_bin + offset) in tmp_strategies_per_length:
                # extract shortest pack that will get modified
                n_sequences_to_pack, pack = tmp_strategies_per_length[
                    length_to_bin + offset].pop()
                new_pack = pack + [length_to_bin]
                count = min(n_sequences_to_pack, n_sequences_to_bin)
                if n_sequences_to_pack > n_sequences_to_bin:
                    # old pack gets reduced
                    n_sequences_to_pack -= n_sequences_to_bin
                    tmp_strategies_per_length[length_to_bin + offset].append(
                        (n_sequences_to_pack, pack))
                    n_sequences_to_bin = 0
                else:
                    n_sequences_to_bin -= n_sequences_to_pack
                add_pack(new_pack, count,
                         tmp_strategies_per_length, strategies_per_length,
                         max_sequences_per_pack, offset)
                # clean up to speed up main key search
                if not tmp_strategies_per_length[length_to_bin + offset]:
                    tmp_strategies_per_length.pop(length_to_bin + offset)
            else:
                offset -= 1
            # Does not fit anywhere. Create new pack.
            if offset < 0:
                add_pack([length_to_bin], n_sequences_to_bin,
                         tmp_strategies_per_length, strategies_per_length,
                         max_sequences_per_pack, i)
                n_sequences_to_bin = 0
    # merge all strategies
    for key in tmp_strategies_per_length:
        strategies_per_length[key].extend(tmp_strategies_per_length[key])
    # flatten strategies dictionary
    strategy_set = []
    strategy_repeat_count = []
    for key in strategies_per_length:
        for count, pack in strategies_per_length[key]:
            pack.reverse()
            strategy_set.append(pack)
            strategy_repeat_count.append(count)
    return strategy_set, np.array(strategy_repeat_count)
Listing: Longest-pack-first histogram-packing

import time
from collections import defaultdict
import numpy as np

# uses the add_pack helper defined earlier (here with the extra
# max_sequence_length argument, which activates the sanity checks)

def pack_using_lpfhp(histogram, max_sequence_length, max_sequences_per_pack, distribute=True):
    """Longest-pack-first histogram-packing."""
    start = time.time()
    reversed_histogram = np.flip(histogram)
    # Initialize main strategy data dictionary.
    # The key indicates how many tokens are left for full length.
    # The value is a list of tuples, consisting of counts and respective packs.
    # A pack is a (sorted) list of sequence length values that get concatenated.
    tmp_strategies_per_length = defaultdict(list)
    strategies_per_length = defaultdict(list)
    if max_sequences_per_pack == "max":
        max_sequences_per_pack = max_sequence_length
    # Index i indicates here, how much space is left, due to reversed histogram
    for i in range(max_sequence_length):
        n_sequences_to_bin = reversed_histogram[i]
        length_to_bin = max_sequence_length - i
        offset = 0  # smallest possible offset for perfect fit
        while n_sequences_to_bin > 0:
            if (length_to_bin + offset) in tmp_strategies_per_length:
                # extract worst pack that will get modified
                n_sequences_to_pack, pack = tmp_strategies_per_length[
                    length_to_bin + offset].pop()
                # calculate how often the current sequence maximally fits in
                repeat = min(1 + offset // length_to_bin,
                             max_sequences_per_pack - len(pack))
                # correct dependent on count
                while n_sequences_to_bin // repeat == 0:
                    repeat -= 1
                if not distribute:
                    repeat = 1
                new_pack = pack + [length_to_bin] * repeat
                count = min(n_sequences_to_pack, n_sequences_to_bin // repeat)
                if n_sequences_to_pack > count:
                    # old pack gets reduced
                    n_sequences_to_pack -= count
                    tmp_strategies_per_length[length_to_bin + offset].append(
                        (n_sequences_to_pack, pack))
                    n_sequences_to_bin -= count * repeat
                else:
                    n_sequences_to_bin -= n_sequences_to_pack * repeat
                add_pack(new_pack, count,
                         tmp_strategies_per_length, strategies_per_length,
                         max_sequences_per_pack,
                         offset - (repeat - 1) * length_to_bin,
                         max_sequence_length)
                # clean up to speed up main key search
                if not tmp_strategies_per_length[length_to_bin + offset]:
                    tmp_strategies_per_length.pop(length_to_bin + offset)
                # reset offset in case best fit changed
                offset = 0
            else:
                offset += 1
            # Does not fit anywhere. Create new pack.
            if offset >= max_sequence_length - length_to_bin + 1:
                # similar repetition but no dependence on pack
                repeat = min(max_sequence_length // length_to_bin,
                             max_sequences_per_pack)
                while n_sequences_to_bin // repeat == 0:
                    repeat -= 1
                if not distribute:
                    repeat = 1
                add_pack([length_to_bin] * repeat, n_sequences_to_bin // repeat,
                         tmp_strategies_per_length, strategies_per_length,
                         max_sequences_per_pack,
                         max_sequence_length - length_to_bin * repeat,
                         max_sequence_length)
                n_sequences_to_bin -= n_sequences_to_bin // repeat * repeat
    # merge all strategies
    for key in tmp_strategies_per_length:
        strategies_per_length[key].extend(tmp_strategies_per_length[key])
    # flatten strategies dictionary
    strategy_set = []
    strategy_repeat_count = []
    for key in strategies_per_length:
        for count, pack in strategies_per_length[key]:
            pack.reverse()
            strategy_set.append(pack)
            strategy_repeat_count.append(count)
    # Summarize efficiency of solution
    duration = time.time() - start
    sequence_lengths = np.arange(1, max_sequence_length + 1)
    strategy_repeat_count = np.array(strategy_repeat_count)
    n_strategies = len(strategy_set)
    old_number_of_samples = histogram.sum()
    new_number_of_samples = strategy_repeat_count.sum()
    sequences = sum([count * len(pack) for count, pack in
                     zip(strategy_repeat_count, strategy_set)])
    total_tokens = max_sequence_length * new_number_of_samples
    empty_tokens = sum([count * (max_sequence_length - sum(pack))
                        for count, pack in zip(strategy_repeat_count, strategy_set)])
    efficiency = 100 - empty_tokens / total_tokens * 100
    speedup_upper_bound = 1.0 / (1 - (histogram * (
        1 - sequence_lengths / max_sequence_length)).sum() / old_number_of_samples)
    print(f"Packing efficiency (fraction of real tokens): {efficiency:3.4f}\n",
          f"Speed-up theoretical limit: {speedup_upper_bound:3.4f}\n",
          f"Achieved speed-up over un-packed dataset: {old_number_of_samples/new_number_of_samples:3.5f}",
          f"Runtime: Packed {old_number_of_samples} sequences in {duration:3.3f} seconds.")
    return strategy_set, strategy_repeat_count
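A minimal usage sketch (our own; the histogram counts are hypothetical) exercising both packers on sequences of lengths 1 to 8:

import numpy as np

toy_histogram = np.array([0, 5, 3, 0, 2, 1, 0, 4])
spfhp_set, spfhp_counts = pack_using_spfhp(toy_histogram, 8, 3)
lpfhp_set, lpfhp_counts = pack_using_lpfhp(toy_histogram, 8, 3)
print(spfhp_set, spfhp_counts)
print(lpfhp_set, lpfhp_counts)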
We avoid the ambiguous terms "bin" and "sample/sequence" and use "pack" instead to refer to the multiple sequences concatenated during packing.

For data distributions that are more skewed than Wikipedia, this might look different.

https://github.com/huggingface/transformers/blob/master/examples/text-classification/run_glue.py

https://github.com/mlcommons/training/tree/master/rnn_speech_recognition/pytorch

https://huggingface.co/datasets/pubmed

https://github.com/mlcommons/training_results_v1.1/blob/main/NVIDIA/benchmarks/bert/implementations/pytorch/config_DGXA100_540x8x3x1_new.sh#L2
Listing 7: Histogram creation for GLUE training datasets

# Copyright 2020 The HuggingFace Inc. team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""GLUE data loading and histogram creation.

Some code snippets were taken from
https://github.com/huggingface/transformers/blob/master/examples/pytorch/text-classification/run_glue.py
Most is original code.
"""
from transformers import AutoTokenizer
import datasets
import numpy as np

# constants
max_sequence_length = 128
task_to_keys = {
    "cola": ("sentence", None),
    "mnli": ("premise", "hypothesis"),
    "mrpc": ("sentence1", "sentence2"),
    "qnli": ("question", "sentence"),
    "qqp": ("question1", "question2"),
    "rte": ("sentence1", "sentence2"),
    "sst2": ("sentence", None),
    "stsb": ("sentence1", "sentence2"),
    "wnli": ("sentence1", "sentence2"),
}
# the listing is truncated in the source after 'rte'; 'wnli' is the likely
# final entry, matching the datasets evaluated in Table 6
glue_keys = ['cola', 'sst2', 'mrpc', 'qqp', 'stsb', 'mnli', 'rte', 'wnli']
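Continuing from the listing above, a minimal sketch (our own; the tokenizer checkpoint "bert-base-uncased" is an assumption) that builds the sequence-length histogram for a single GLUE task:

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
train = datasets.load_dataset("glue", "cola", split="train")
key1, key2 = task_to_keys["cola"]
args = (train[key1],) if key2 is None else (train[key1], train[key2])
encoded = tokenizer(*args, truncation=True, max_length=max_sequence_length)
# count how many examples have each tokenized length 1..max_sequence_length
lengths = [len(ids) for ids in encoded["input_ids"]]
histogram = np.bincount(lengths, minlength=max_sequence_length + 1)[1:]
print(histogram.sum(), histogram.argmax() + 1)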
ANONYMOUS. Supplemental Material for "Efficient Sequence Packing without Cross-contamination: Accelerating Large Language Models without Impacting Performance", 2022.

BOTTOU, L., CURTIS, F. E., AND NOCEDAL, J. Optimization Methods for Large-Scale Machine Learning. SIAM Review 60, 2 (jan 2018), 223-311.

BRO, R., AND DE JONG, S. A fast non-negativity-constrained least squares algorithm. Journal of Chemometrics 11, 5 (sep 1997), 393-401.

BROWN, T. B., MANN, B., RYDER, N., SUBBIAH, M., KAPLAN, J., DHARIWAL, P., NEELAKANTAN, A., SHYAM, P., SASTRY, G., ASKELL, A., AGARWAL, S., HERBERT-VOSS, A., KRUEGER, G., HENIGHAN, T., CHILD, R., RAMESH, A., ZIEGLER, D. M., WU, J., WINTER, C., HESSE, C., CHEN, M., SIGLER, E., LITWIN, M., GRAY, S., CHESS, B., CLARK, J., BERNER, C., MCCANDLISH, S., RADFORD, A., SUTSKEVER, I., AND AMODEI, D. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems 33 pre-proceedings (NeurIPS 2020) (may 2020).

BYTEDANCE INC. Effective Transformer. https://github.com/bytedance/effective_transformer, 2021.

DEVLIN, J., CHANG, M. W., LEE, K., AND TOUTANOVA, K. BERT: Pre-training of deep bidirectional transformers for language understanding. NAACL HLT 2019 - 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies - Proceedings of the Conference 1 (oct 2019), 4171-4186.

DEVLIN, J., CHANG, M. W., LEE, K., AND TOUTANOVA, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. https://github.com/google-research/bert, 2019.

DEVLIN, J., CHANG, M. W., LEE, K., AND TOUTANOVA, K. Pre-training data creation script for BERT. https://github.com/google-research/bert/blob/master/create_pretraining_data.py#L243, 2019.

FEDUS, W., ZOPH, B., AND SHAZEER, N. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. arXiv (jan 2021).

JIA, Z., TILLMAN, B., MAGGIONI, M., AND SCARPAZZA, D. P. Dissecting the Graphcore IPU architecture via microbenchmarking. ArXiv abs/1912.03413 (2019).

JOHNSON, D. S. Near-optimal bin packing algorithms. PhD thesis, Massachusetts Institute of Technology, 1973.

JOHNSON, D. S., AND GAREY, M. R. A 71/60 theorem for bin packing. Journal of Complexity 1, 1 (oct 1985), 65-106.

KORTE, B., AND VYGEN, J. Combinatorial Optimization, vol. 21 of Algorithms and Combinatorics. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.

LEE, C. C., AND LEE, D. T. A Simple On-Line Bin-Packing Algorithm. Journal of the ACM (JACM) 32, 3 (jul 1985), 562-572.

LIU, Y., OTT, M., GOYAL, N., DU, J., JOSHI, M., CHEN, D., LEVY, O., LEWIS, M., ZETTLEMOYER, L., AND STOYANOV, V. RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv (jul 2019).

MATTSON, P., REDDI, V. J., CHENG, C., COLEMAN, C., DIAMOS, G., KANTER, D., MICIKEVICIUS, P., PATTERSON, D., SCHMUELLING, G., TANG, H., WEI, G., AND WU, C. MLPerf: An Industry Standard Benchmark Suite for Machine Learning Performance. IEEE Micro 40, 2 (2020), 8-16.

MENG, Q., CHEN, W., WANG, Y., MA, Z. M., AND LIU, T. Y. Convergence analysis of distributed stochastic gradient descent with shuffling. Neurocomputing 337 (apr 2019), 46-57.

RAJPURKAR, P., ZHANG, J., LOPYREV, K., AND LIANG, P. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (Austin, Texas, Nov. 2016), Association for Computational Linguistics, pp. 2383-2392.

RAMAKRISHNAN, R., DRAL, P. O., RUPP, M., AND VON LILIENFELD, O. A. Quantum chemistry structures and properties of 134 kilo molecules. Scientific Data 1 (2014).

RUDDIGKEIT, L., VAN DEURSEN, R., BLUM, L. C., AND REYMOND, J.-L. Enumeration of 166 billion organic small molecules in the chemical universe database GDB-17. Journal of Chemical Information and Modeling 52, 11 (2012), 2864-2875. PMID: 23088335.

SHEN, J., NGUYEN, P., WU, Y., CHEN, Z., ET AL. Lingvo: a modular and scalable framework for sequence-to-sequence modeling, 2019.

VASWANI, A., SHAZEER, N., PARMAR, N., USZKOREIT, J., JONES, L., GOMEZ, A. N., KAISER, U., AND POLOSUKHIN, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Red Hook, NY, USA, 2017), NIPS'17, Curran Associates Inc., pp. 6000-6010.

WANG, A., SINGH, A., MICHAEL, J., HILL, F., LEVY, O., AND BOWMAN, S. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (Brussels, Belgium, Nov. 2018), Association for Computational Linguistics, pp. 353-355.

WARSTADT, A., SINGH, A., AND BOWMAN, S. R. Neural network acceptability judgments. arXiv preprint arXiv:1805.12471 (2018).

WOLF, T., DEBUT, L., SANH, V., CHAUMOND, J., DELANGUE, C., MOI, A., CISTAC, P., RAULT, T., LOUF, R., FUNTOWICZ, M., DAVISON, J., SHLEIFER, S., VON PLATEN, P., MA, C., JERNITE, Y., PLU, J., XU, C., SCAO, T. L., GUGGER, S., DRAME, M., LHOEST, Q., AND RUSH, A. M. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations (Online, Oct. 2020), Association for Computational Linguistics, pp. 38-45.

WU, B., XU, C., DAI, X., WAN, A., ZHANG, P., YAN, Z., TOMIZUKA, M., GONZALEZ, J., KEUTZER, K., AND VAJDA, P. Visual transformers: Token-based image representation and processing for computer vision, 2020.

XLA TEAM. XLA: Optimizing Compiler for Machine Learning. https://www.tensorflow.org/xla, 2021.

YOU, Y., LI, J., REDDI, S., HSEU, J., KUMAR, S., BHOJANAPALLI, S., SONG, X., DEMMEL, J., KEUTZER, K., AND HSIEH, C.-J. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. arXiv (apr 2019).
YUE, M., AND ZHANG, L. A simple proof of the inequality MFFD(L) ≤ 71/60 OPT(L) + 1 for the MFFD bin-packing algorithm. Acta Mathematicae Applicatae Sinica 11, 3 (jul 1995), 318-330.

Listing 6: Wikipedia and SQuAD 1.1 histograms

"""Wikipedia and SQuAD 1.1 histograms.

For sequence length 128 to 512, we use the Wikipedia article dump from
October 1st 2020. For sequence length 1024 and 2048, we use the Wikipedia
article dump from February 8th 2021. Duplication factors slightly differ.
"""
import numpy as np
wikipedia_histogram = np.array([
0, 0, 0, 0, 1821, 1226, 1969, 1315, 1794, 1953,
3082, 3446, 4166, 5062, 9554, 16475, 19173, 17589, 17957, 19060,
21555, 23524, 26954, 30661, 33470, 36614, 40134, 43256, 46094, 49350,
52153, 55428, 58109, 60624, 63263, 64527, 65421, 66983, 68123, 68830,
70230, 70486, 72467, 72954, 73955, 74311, 74836, 74489, 74990, 75377,
74954, 75096, 74784, 74698, 74337, 74638, 74370, 73537, 73597, 73153,
72358, 71580, 71082, 70085, 69733, 69445, 67818, 67177, 66641, 65709,
64698, 63841, 63218, 62799, 61458, 60848, 60148, 59858, 58809, 58023,
56920, 55999, 55245, 55051, 53979, 53689, 52819, 52162, 51752, 51172,
50469, 49907, 49201, 49060, 47948, 47724, 46990, 46544, 46011, 45269,
44792, 44332, 43878, 43984, 42968, 42365, 42391, 42219, 41668, 41072,
40616, 40587, 39999, 40169, 39340, 38906, 38438, 38142, 37757, 37818,
37535, 37217, 36757, 36589, 36151, 35953, 35531, 35496, 35089, 35053,
34567, 34789, 34009, 33952, 33753, 33656, 33227, 32954, 32686, 32880,
32709, 31886, 32126, 31657, 31466, 31142, 31106, 30650, 30316, 30494,
30328, 30157, 29611, 29754, 29445, 28921, 29271, 29078, 28934, 28764,
28445, 28319, 28141, 28282, 27779, 27522, 27333, 27470, 27289, 27102,
27018, 27066, 26925, 26384, 26188, 26385, 26392, 26082, 26062, 25660,
25682, 25547, 25425, 25072, 25079, 25346, 24659, 24702, 24862, 24479,
24288, 24127, 24268, 24097, 23798, 23878, 23893, 23817, 23398, 23382,
23280, 22993, 23018, 23242, 22987, 22894, 22470, 22612, 22452, 21996,
21843, 22094, 21916, 21756, 21955, 21444, 21436, 21484, 21528, 21597,
21301, 21197, 21281, 21066, 20933, 21023, 20888, 20575, 20574, 20511,
20419, 20312, 20174, 20023, 20087, 19955, 19946, 19846, 19562, 19710,
19556, 19477, 19487, 19387, 19225, 19069, 19360, 18655, 19034, 18763,
18800, 19012, 18893, 18714, 18645, 18577, 18317, 18458, 18374, 18152,
17822, 18102, 17735, 17940, 17805, 17711, 17690, 17703, 17669, 17410,
17583, 17331, 17313, 16892, 16967, 16870, 16926, 17233, 16845, 16861,
16576, 16685, 16455, 16687, 16747, 16524, 16473, 16349, 16273, 16255,
16228, 16219, 16021, 16111, 15867, 15751, 16081, 15703, 15751, 15854,
15665, 15469, 15431, 15428, 15464, 15517, 15335, 15461, 15237, 15292,
15305, 15351, 15078, 14810, 15119, 14780, 14664, 14869, 14722, 14890,
14672, 14439, 14685, 14706, 14840, 14373, 14286, 14596, 14615, 14168,
14299, 13987, 14167, 14107, 14096, 14202, 13985, 14118, 14094, 14127,
13896, 13864, 13597, 13572,
13717, 13669, 13782, 13617, 13284, 13333, 13425, 13457, 13256, 13404, 13318, 13425, 13317, 13179, 13193, 13257, 13160, 12813, 13149, 13010, 12867, 12958, 12818, 12801, 12749, 12810, 12575, 12673, 12514, 12735, 12523, 12677, 12298, 12469, 12341, 12445, 12477, 12326, 12110, 12087, 12305, 12156, 12032, 12190, 12150, 11980, 12022, 11825, 11969, 11831, 11997, 11924, 11739, 11685, 11702, 11783, 11783, 11659, 11647, 11610, 11526, 11577, 11538, 11536, 11497, 11480, 11374, 11234, 11433, 11466, 11475, 11147, 11376, 11217, 11002, 11245, 11124, 11000, 11129, 10923, 10966, 11071, 11029, 11036, 10972, 11012, 10800, 10936, 10904, 10750, 10669, 10766, 10780, 10675, 10905, 10511, 10598, 10583, 10658, 10471, 10667, 10601, 10430, 10440, 10510, 10148, 10468, 10346, 10257, 10286, 10235, 10351, 10182, 10182, 10095, 10192, 9866, 10070, 10148, 9956, 10132, 10043, 9741, 10003, 10056, 9920, 10021, 9838, 9854, 9740, 9782, 9799, 9798, 9788, 9840, 9747, 9797, 9893, 9593, 9535, 9658, 9554, 9593, 9530, 9523, 9488, 9548, 9418, 9418, 9508, 9638, 9521, 9277, 9289, 9255, 9322, 9281, 9351, 9259, 9255, 9225, 9098, 9268, 9227, 9224, 9106, 9239, 3815044], dtype=np.int64)
BELOV, G., AND SCHEITHAUER, G. A branch-and-cut-and-price algorithm for one-dimensional stock cutting and two-dimensional two-stage cutting. European Journal of Operational Research 171, 1 (may 2006), 85-106.

GAREY, M. R., AND JOHNSON, D. S. Computers and Intractability; A Guide to the Theory of NP-Completeness. W. H. Freeman & Co., USA, 1990.

GUILLÉN, G., DIAZ-CAMINO, C., LOYOLA-TORRES, C., APARICIO-FABRE, R., HERNÁNDEZ-LÓPEZ, A., DÍAZ-SÁNCHEZ, M., AND SANCHEZ, F. Detailed analysis of putative genes encoding small proteins in legume genomes. Frontiers in Plant Science 4 (2013), 208.

HANSEN, H. B., DAMGAARD, P. B., MARGARYAN, A., STENDERUP, J., LYNNERUP, N., WILLERSLEV, E., AND ALLENTOFT, M. E. Comparing ancient DNA preservation in petrous bone and tooth cementum. PLOS ONE 12, 1 (01 2017), 1-18.

KOTZ, S., AND NADARAJAH, S. Extreme Value Distributions. World Scientific Publishing Company, 2000.

LAWSON, C. L., AND HANSON, R. J. Solving Least Squares Problems. Society for Industrial and Applied Mathematics, jan 1995.

LUO, Y., AND DURAISWAMI, R. Efficient parallel non-negative least squares on multi-core architectures. SIAM Journal on Scientific Computing 33 (2011), 2848-2863.

NVIDIA. Performance catalogue for BERT on Pytorch. https://ngc.nvidia.com/catalog/resources/nvidia:bert_for_pytorch/performance, 2021.

PENG, Y., YAN, S., AND LU, Z. Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets. In Proceedings of the 2019 Workshop on Biomedical Natural Language Processing (BioNLP 2019) (2019), pp. 58-65.

VIRTANEN, P., GOMMERS, R., OLIPHANT, T. E., HABERLAND, M., REDDY, T., COURNAPEAU, D., BUROVSKI, E., PETERSON, P., WECKESSER, W., BRIGHT, J., VAN DER WALT, S. J., BRETT, M., WILSON, J., MILLMAN, K. J., MAYOROV, N., NELSON, A. R. J., JONES, E., KERN, R., LARSON, E., CAREY, C. J., POLAT, İ., FENG, Y., MOORE, E. W., VANDERPLAS, J., LAXALDE, D., PERKTOLD, J., CIMRMAN, R., HENRIKSEN, I., QUINTERO, E. A., HARRIS, C. R., ARCHIBALD, A. M., RIBEIRO, A. H., PEDREGOSA, F., VAN MULBREGT, P., AND SCIPY 1.0 CONTRIBUTORS. SciPy 1.0: Fundamental Algorithms for Scientific Computing in Python. Nature Methods 17 (2020), 261-272.

WOLF, T., LHOEST, Q., VON PLATEN, P., JERNITE, Y., DRAME, M., PLU, J., CHAUMOND, J., DELANGUE, C., MA, C., THAKUR, A., PATIL, S., DAVISON, J., SCAO, T. L., SANH, V., XU, C., PATRY, N., MCMILLAN-MAJOR, A., BRANDEIS, S., GUGGER, S., LAGUNAS, F., DEBUT, L., FUNTOWICZ, M., MOI, A., RUSH, S., SCHMIDD, P., CISTAC, P., MUŠTAR, V., BOUDIER, J., AND TORDJMANN, A. Datasets. GitHub. Note: https://github.com/huggingface/datasets 1 (2020).

WOLFRAM RESEARCH INC. Mathematica, Version 12.2. Champaign, IL, 2020.
Paraphrase Detection on Noisy Subtitles in Six Languages

Eetu Sjöblom (eetu.sjoblom@helsinki.fi), Mathias Creutz (mathias.creutz@helsinki.fi), and Mikko Aulamo (mikko.aulamo@helsinki.fi)
Department of Digital Humanities, Faculty of Arts, University of Helsinki, Unioninkatu 40, FI-00014, Finland

In Proceedings of the 2018 EMNLP Workshop W-NUT: The 4th Workshop on Noisy User-generated Text, Brussels, Belgium, November 1, 2018.

Abstract

We perform automatic paraphrase detection on subtitle data from the Opusparcus corpus comprising six European languages: German, English, Finnish, French, Russian, and Swedish. We train two types of supervised sentence embedding models: a word-averaging (WA) model and a gated recurrent averaging network (GRAN) model. We find out that GRAN outperforms WA and is more robust to noisy training data. Better results are obtained with more and noisier data than less and cleaner data. Additionally, we experiment on other datasets, without reaching the same level of performance, because of domain mismatch between training and test data.
Introduction
This paper studies automatic paraphrase detection on subtitle data for six European languages. Paraphrases are a set of phrases or full sentences in the same language that mean approximately the same thing. Automatically finding out when two phrases mean the same thing is interesting from both a theoretical and practical perspective. Theoretically, within the field of distributional, compositional semantics, there is currently a significant amount of interest in models and representations that capture the meaning of not just single words, but sequences of words. There are also practical implementations, such as providing multiple alternative correct translations when evaluating the accuracy of machine translation systems.
To our knowledge, the present work is the first published study of automatic paraphrase detection based on data from Opusparcus, a recently published paraphrase corpus (Creutz, 2018). Opusparcus consists of sentential paraphrases, that is, pairs of full sentences that convey approximately the same meaning. Opusparcus provides data for six European languages: German, English, Finnish, French, Russian, and Swedish. The data sets have been extracted from OpenSubtitles2016 (Lison and Tiedemann, 2016), which is a collection of translated movie and TV subtitles. In addition to Opusparcus, experiments are performed on other well-known paraphrase resources:
(1) PPDB, the Paraphrase Database (Ganitkevitch et al., 2013; Ganitkevitch and Callison-Burch, 2014; Pavlick et al., 2015), (2) MSRPC, the Microsoft Research Paraphrase Corpus (Dolan and Brockett, 2005), (3) SICK (Marelli et al., 2014), and (4) STS14 (Agirre et al., 2014).
We are interested in movie and TV subtitles because of their conversational nature. This makes subtitle data ideal for exploring dialogue phenomena and properties of everyday, colloquial language (Paetzold and Specia, 2016; van der Wees et al., 2016; Lison et al., 2018). We would also like to stress the importance of working on languages other than English. Unfortunately, many language resources contain English data only, such as MSRPC and SICK. In other datasets, the quality of the English data surpasses that of the other languages to a considerable extent, as in the multilingual version of PPDB (Ganitkevitch and Callison-Burch, 2014).
Although our subtitle data is very interesting data, it is also noisy data, in several respects. Since the subtitles are user-contributed data, there are misspellings both due to human mistake and due to errors in optical character recognition (OCR). OCR errors emerge when textual subtitle files are produced by "ripping" (scanning) the subtitle text from DVDs using various tools. Furthermore, movies are sometimes not tagged with the correct language, they are encoded in various character encodings, and they come in various formats (Tiedemann, 2007, 2008, 2016). OpenSubtitles2016 is extracted from www.opensubtitles.org; it is in itself a subset of the larger OPUS collection ("... the open parallel corpus"): opus.lingfil.uu.se, which provides a large number of sentence-aligned parallel corpora in 65 languages.

A different type of error emerges because of misalignments and issues with sentence segmentation. Opusparcus has been constructed by finding pairs of sentences in one language that have a common translation in at least one other language. For example, English "Have a seat." is potentially a paraphrase of "Sit down." because both can be translated to French "Asseyez-vous." (Creutz, 2018). To figure out that "Have a seat." is a translation of "Asseyez-vous.", English and French subtitles for the same movie can be used. English and French text that occur at the same time in the movie are assumed to be translations of each other. However, there are many complications involved: Subtitles are not necessarily shown as entire sentences, but as snippets of text that fit on the screen. There are numerous partial overlaps when comparing the contents of subtitle screens across different languages, and the reconstruction of proper sentences may be difficult. There may also be timing differences, because of different subtitle speeds and different time offsets for starting the subtitles (Tiedemann, 2007, 2008).

Furthermore, Lison et al. (2018) argue that "[subtitles] should better be viewed as boiled down transcriptions of the same conversations across several languages. Subtitles will inevitably differ in how they 'compress' the conversations, notably due to structural divergences between languages, cultural differences and disparities in subtitling traditions/conventions. As a consequence, sentence alignments extracted from subtitles often have a higher degree of insertions and deletions compared to alignments derived from other sources."
We tackle the paraphrase detection task using a sentence embedding approach. We experiment with sentence encoding models that take as input a single sentence and produce a vector representing the semantics of the sentence. While models that rely on sentence pairs as input are able to use additional information, such as attention between the sentences, the sentence embedding approach has its advantages: Embeddings can be calculated also when no sentence pair is available, and large numbers of embeddings can be precalculated, which allows for fast comparisons in huge datasets.
Sentence representation learning has been a topic of growing interest recently. Much of this work has been done in the context of general-purpose sentence embeddings, using unsupervised approaches inspired by work on word embeddings (Hill et al., 2016; Kiros et al., 2015) as well as approaches relying on supervised training objectives (Conneau et al., 2017a; Subramanian et al., 2018). While the paraphrase detection task is potentially useful for learning general-purpose embeddings, we are mainly interested in paraphrastic sentence embeddings for paraphrase detection and semantic similarity tasks.
Closest to the present work is that of Wieting and Gimpel (2017), who study sentence representation learning using multiple encoding architectures and two different sources of training data. It was found that certain models benefit significantly from using full sentences (SimpWiki) instead of short phrases (PPDB) as training data. However, the SimpWiki data set is relatively small, and this leaves open the question how much the approaches could benefit from very large corpora of sentential paraphrases. It is also unclear how well the approaches generalize to languages other than English.
The current paper takes a step forward in that experiments are performed on five other languages in addition to English. We also study the effects of noise in the training data sets.
Data
Opusparcus (Creutz, 2018) contains so-called training, development and test sets for each of the six languages it covers. The training sets, which consist of millions of sentence pairs, have been created automatically and are orders of magnitude larger than the development and test sets, which have been annotated manually and consist of a few thousands of sentence pairs. The development and test sets have different purposes, but otherwise they have identical properties: the development sets can be used for optimization and extensive experimentation, whereas the test sets should only be used in final evaluations.
The development and test sets are "clean" (in principle), since they have been checked by human annotators. The annotators were shown pairs of sentences, and they needed to decide whether the two sentences were paraphrases (that is, meant the same thing), on a four-grade scale: dark green (good), light green (mostly good), yellow (mostly bad), or red (bad). Two different annotators checked the same sentence pairs and if the annotators were in full agreement or if they chose different but adjacent categories, the sentence pair was included in the data set. Otherwise the sentence pair was discarded.
There was an additional choice for the annotators to explicitly discard bad data. Data was to be discarded if there were spelling mistakes, bad grammar, bad sentence segmentation, or the language of the sentences was wrong. The highest "trash rate" of around 11 % occurred for the French data, apparently because of numerous grammatical mistakes in French spelling, which is known to be tricky. The lowest "trash rate" of below 3 % occurred for Finnish, a language with highly regular orthography. Interestingly, English was second best after Finnish, with less than 4 % discarded sentence pairs. Although English orthography is not straightforward, there are few diacritics that can go wrong (such as accents on vowels), and English benefits from the largest amounts of data and the best preprocessing tools. Table 1 displays a breakdown of the error types in the English and Finnish annotated data.

The Opusparcus training sets need to be much larger than the development and test sets in order to be useful. However, size comes at the expense of quality, and the training sets have not been checked manually. The training sets are assumed to contain noise to the same extent as the development and test sets. On one hand, when it comes to spelling and OCR errors, this may not be too bad, as a paraphrase detection model that is robust to noise is a good thing. On the other hand, when we train a supervised paraphrase detection model, we would like to know which of the sentence pairs in the training data are actual paraphrases and which ones are not. Since the training data has not been manually annotated, we cannot be sure. Instead we need to rely on the automatic ranking presented by Creutz (2018) that is supposed to place the sentence pairs that are most likely to be true paraphrases first in the training set and the sentence pairs that are least likely to be paraphrases last.
In the current paper, we investigate whether it is more beneficial to use less and cleaner training data or more and noisier training data. We also compare different models in terms of their robustness to noise.
In addition to the Opusparcus data, we use other data sources. In Section 4.3 we experiment with a model trained on PPDB, a large collection of noisy, automatically extracted and ranked paraphrase candidates. PPDB has been successfully used in paraphrase models before (Wieting et al., 2015, 2016; Wieting and Gimpel, 2017), so we are interested in comparing the performance of models trained on Opusparcus and those trained on PPDB.
We also evaluate our models on MSRPC, a well-known paraphrase corpus. While Opusparcus contains mostly short sentences of conversational nature, and PPDB contains mostly short phrases and sentence fragments, the MSRPC data comes from the news domain. MSRPC was created by automatically extracting potential paraphrase candidates, which were then checked by human annotators.
Lastly, two semantic textual similarity data sets, SICK and STS14, are used for evaluation in a transfer learning setting. SICK contains sentence pairs from image captions and video descriptions annotated for relatedness with scores in the [0, 5] range. It consists of about 10,000 English sentences which are descriptive in nature. STS14 comprises five different subsets, ranging over multiple genres, also with human-annotated scores within [0, 5].
Embedding models
We use supervised training to produce sentence embedding models, which can be used to determine how similar sentences are semantically and thus if they are likely to be paraphrases.
Models
In our models, there is a sequence of words (or subword units) to be embedded: $s = (w_1, w_2, \ldots, w_n)$. The embedding of a sequence $s$ is $g(s)$, where $g$ is the embedding function.

The word embedding matrix is $W \in \mathbb{R}^{d \times |V|}$, where $d$ is the dimensionality of the embeddings and $|V|$ is the size of the vocabulary. $W_{w_i}$ is used to denote the embedding for the token $w_i$.
We use a simple word averaging (WA) model as a baseline. In this model the phrase is embedded by averaging the embeddings of its tokens:

$$g(s) = \frac{1}{n} \sum_{i=1}^{n} W_{w_i}$$
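As a minimal sketch (our own illustration, with the embedding matrix stored row-wise as [|V|, d] rather than the $d \times |V|$ convention above):

import numpy as np

def wa_embed(W, token_ids):
    # average the embedding rows selected by the token indices w_1..w_n
    return W[token_ids].mean(axis=0)

W = np.random.default_rng(0).normal(size=(1000, 300))
print(wa_embed(W, [5, 42, 7]).shape)  # (300,)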
Despite its simplicity, the WA model has been shown to achieve good results in a wide range of semantic textual similarity tasks. (Wieting et al., 2016) Our second model is a variant of the gated recurrent averaging network (GRAN) introduced by Wieting and Gimpel (2017). GRAN extends the WA model with a recurrent neural network, which is used to compute gates for each word embedding before averaging. We use a gated recurrent unit (GRU) network (Cho et al., 2014). The hidden states (h 1 , ..., h n ) are computed using the following equations:
$$r_t = \sigma(W_r W_{w_t} + U_r h_{t-1})$$
$$z_t = \sigma(W_z W_{w_t} + U_z h_{t-1})$$
$$\tilde{h}_t = f(W_h W_{w_t} + U_h(r_t \circ h_{t-1}) + b_h)$$
$$h_t = (1 - z_t) \circ h_{t-1} + z_t \circ \tilde{h}_t$$
Here $W_r$, $W_z$, $W_h$, $U_r$, $U_z$, and $U_h$ are the weight matrices, $b_h$ is a bias vector, $\sigma$ is the sigmoid function, and $\circ$ denotes the element-wise product of two vectors.
At each time step $t$ we compute a gate for the word embedding and element-wise multiply the gate with the word embedding to acquire the new word vector $a_t$:
$$g_t = \sigma(W_x W_{w_t} + W_h h_t + b)$$
$$a_t = W_{w_t} \circ g_t$$
Here $W_x$ and $W_h$ are weight matrices. The final sentence embedding is computed by averaging the word vectors:
$$g(s) = \frac{1}{n} \sum_{i=1}^{n} a_i$$
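A compact PyTorch sketch of this gating-and-averaging idea follows; `nn.GRU` stands in for the GRU equations above, and the layer names and sizes are our own assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class GRAN(nn.Module):
    """Gated recurrent averaging network sketch: a GRU computes per-token gates
    that modulate the word embeddings before averaging."""
    def __init__(self, vocab_size, d):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.gru = nn.GRU(d, d, batch_first=True)
        self.Wx = nn.Linear(d, d, bias=False)  # plays the role of W_x
        self.Wh = nn.Linear(d, d, bias=True)   # plays the role of W_h (its bias is b)

    def forward(self, token_ids):
        x = self.emb(token_ids)                      # (batch, n, d) word embeddings
        h, _ = self.gru(x)                           # (batch, n, d) hidden states h_t
        g = torch.sigmoid(self.Wx(x) + self.Wh(h))   # gates g_t
        a = x * g                                    # gated word vectors a_t
        return a.mean(dim=1)                         # sentence embedding g(s)

model = GRAN(vocab_size=50_000, d=300)
print(model(torch.randint(0, 50_000, (2, 7))).shape)  # torch.Size([2, 300])
```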
Training
Our training data consists of pairs of sequences $(s_1, s_2)$ and associated labels $y \in \{0, 1\}$ indicating whether the sequences are paraphrases or not. Because the Opusparcus data contains ranked paraphrase candidates and not labeled pairs, we take the following approach to sampling the data:
The desired number of paraphrase pairs (positive examples) are taken from the beginning of the data sets. That is, the highest ranking pairs, which are the most likely to be proper paraphrases according to Creutz (2018), are labeled as paraphrases, although not all of them are true paraphrases. The non-paraphrase pairs (negative examples) are created by randomly pairing sentences from the training data. It is possible that a positive example is created this way by accident, but we assume the likelihood of this to be low enough for it not to have a noticeable effect on performance. We sample an equal number of positive and negative pairs in all experiments. In the rest of this paper, when mentioning training set sizes, we indicate the number of (assumed) positive pairs sampled from the data. There is always an equal amount of (assumed) negative pairs.

During training we optimize the following margin-based loss function:

$$L(\theta) = y \cdot \max(0, m - d(g(s_1), g(s_2)))^2 + (1 - y) \cdot d(g(s_1), g(s_2))$$

Here $m$ is the margin parameter, $d(g(s_1), g(s_2))$ is the cosine distance between the embedded sequences, and $g$ is the embedding function. The loss function penalizes negative pairs with a cosine distance smaller than the margin (first term) and encourages positive pairs to be close to each other (second term).
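This loss translates directly into code. The PyTorch sketch below implements the formula exactly as written; the margin value 0.4 is an illustrative assumption, since the paper does not state $m$ here:

```python
import torch
import torch.nn.functional as F

def margin_loss(e1, e2, y, m=0.4):
    """L(theta) = y * max(0, m - d)^2 + (1 - y) * d, with d the cosine distance."""
    d = 1.0 - F.cosine_similarity(e1, e2, dim=1)
    loss = y * torch.clamp(m - d, min=0.0) ** 2 + (1 - y) * d
    return loss.mean()

e1, e2 = torch.randn(8, 300), torch.randn(8, 300)   # batch of embedded sequence pairs
y = torch.randint(0, 2, (8,)).float()               # labels
print(margin_loss(e1, e2, y))
```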
We use the Adam optimizer (Kingma and Ba, 2015) with a learning rate of 0.001 and a batch size of 128 samples in all experiments. Variational dropout (Gal and Ghahramani, 2016) is used for regularization in the GRAN model. The hyperparameters were tuned in preliminary experiments for development set accuracy and, with the exception of the keep probability in dropout, kept constant in all experiments.
The embedding matrix $W$ is initialized to a uniform distribution over $[-0.01, 0.01]$. In our experiments we found that initializing with pre-trained embeddings did not improve the paraphrase detection results. The layer weights in the GRU network are initialized using Xavier initialization (Glorot and Bengio, 2010), and we use the leaky ReLU activation function.
Experiments
Our initial experiment addresses the effects of unsupervised morphological segmentation on the results of the paraphrase detection task.
Next, we tackle our main question on the trade-off between the amount of noise in the training data and the data size. In particular, we investigate whether an optimal amount of noise can be found, and whether the different models have different demands in this respect.
Finally, we evaluate the English-language models on out-of-domain semantic similarity and paraphrase detection tasks.
All evaluations on the Opusparcus data are conducted in the following manner: Each sentence in the sentence pair is embedded using the sentence encoding model. The resulting vectors are concatenated and passed on to a multi-layer perceptron classifier with a single hidden layer of 200 units. The classifier is trained on the development set, and the final results are reported on the unseen test set in terms of classification accuracy.
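For illustration, this evaluation protocol might look as follows, with random arrays standing in for real sentence embeddings and scikit-learn's `MLPClassifier` as an assumed stand-in for the classifier:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Development data: concatenated embeddings of each sentence pair, plus labels.
dev_e1, dev_e2 = np.random.randn(500, 300), np.random.randn(500, 300)
dev_y = np.random.randint(0, 2, 500)
test_e1, test_e2 = np.random.randn(100, 300), np.random.randn(100, 300)
test_y = np.random.randint(0, 2, 100)

clf = MLPClassifier(hidden_layer_sizes=(200,), max_iter=500)  # one hidden layer of 200 units
clf.fit(np.hstack([dev_e1, dev_e2]), dev_y)                   # train on the dev set
print("test accuracy:", clf.score(np.hstack([test_e1, test_e2]), test_y))
```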
Segmentation
We work on six different European languages, some of which are morphologically rich (that is, the number of possible word forms in the language is high). In the case of languages like Finnish and Russian, the vocabularies without any kind of morphological preprocessing can grow very large even with small amounts of data.
In our approach we train Morfessor Baseline (Creutz and Lagus, 2002; Virpioja et al., 2013), an unsupervised morphological segmentation algorithm, on the whole Opusparcus training data available. Segmentation approaches that result in fixed-size vocabularies, such as byte-pair encoding (BPE) (Sennrich et al., 2016), have been gaining popularity in some natural language processing tasks. We decided to use Morfessor instead, which also appeared to outperform BPE in preliminary experiments. However, we will not focus on segmentation quality, but use segmentation simply as a preprocessing step to improve downstream performance.
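For illustration, training and applying Morfessor Baseline through its Python package might look like the sketch below; the file name and the example word are placeholders:

```python
import morfessor  # pip install morfessor

io = morfessor.MorfessorIO()
train_data = list(io.read_corpus_file("opusparcus_train.txt"))  # placeholder path

model = morfessor.BaselineModel()
model.load_data(train_data)
model.train_batch()

segments, _cost = model.viterbi_segment("uncharacteristically")
print(segments)  # the actual segmentation depends on the training data
```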
The results are shown in the WA-M and WA columns of Table 2. The differences in performance between the WA models with segmentation (called just WA) and without segmentation (called WA-M) clearly indicate that this is a necessary preprocessing step when working on languages with complex morphology. The effect of segmentation for the GRAN model (not shown) is similar, with the exception that English also improves by a few points instead of worsening. Based on these results we will use Morfessor as a preprocessing step in all of the remaining experiments.
Data selection
We next investigate the effects of data set size and the amount of noise in the data on model performance. We are interested in finding an appropriate amount of training data to be used in training the paraphrase detection models, as well as evaluating the robustness of different models against noise in the data. For each language, data sets containing approximately 80%, 70%, or 60% clean paraphrase pairs are created. These percentages are the proportions of assumed positive training examples; the negative examples are created using the approach outlined in Section 3.2.
Estimates of the quality of the training sets exist for all languages in Opusparcus. The quality estimates were used to approximate the numbers of phrase pairs corresponding to the noise levels. Because the data sets for different languages are not equal in size, the number of phrase pairs at a certain noise level differs from language to language. The different data set sizes for all noise levels and languages are shown in Table 3.

Table 3 also shows the results for the GRAN model. The results indicate that the GRAN model is rather robust to noise in the data. For five out of six languages, the best results are achieved using either the 70% or 60% data sets. That is, even when up to 40% of the positive examples in the training data are incorrectly labeled, the model is able to maintain or improve its performance.
The results for the WA model are very different. The last row of Table 3 shows the accuracies of the WA model at different levels of noise for English. The model's performance decreases significantly as the number of noisy pairs increases, and the results are similar for the other languages as well. We hypothesize these differences to be due to differences in model complexity. The GRAN model incorporates a sequence model and contains more parameters than the simpler WA model.
Further analysis of differences between models
Some qualitative differences between the WA and GRAN models are illustrated in Tables 4 and 5 as well as Figure 1. Table 4 shows which ten sentences in the English development set are closest to one target sentence "okay, you don't get it, man." according to the two models. The comparison is performed by computing the cosine similarity between the sentence embedding vectors. A similar example is shown for German in Table 5: "Kann gut sein." (in English: "That may be."). The results suggest that the WA (word averaging) model produces a "bag of synonyms": sentences are considered similar if they contain the same words or similar words. This, however, makes the WA model perform weakly when a sentence should not be interpreted literally word by word. German "Kann gut sein." is unlikely to literally mean "Can be good."; yet sentences with that meaning are at the top of the WA ranking. By contrast, the GRAN model comes up with very different top candidates, sentences expressing modality, such as: "Possibly", "Yes, he might", "You're probably right", "As naturally as possible", and "I think so".
Figure 1 provides some additional information on the English sentence "okay, you don't get it, man.". Distributions of the cosine similarities of a much larger number of sentences have been plotted (10 million sentences from English OpenSubtitles). In the plots, similar sentences are on the right and dissimilar sentences on the left. In the case of the GRAN model we see a huge mass of dissimilar sentences smoothing out in a tail of similar sentences. In the case of the WA model, there is clearly a second, smaller bump to the right. It turns out that the "bump" mainly contains negated sentences, that is, sentences that contain synonyms of "don't". A second look at Table 4 validates this observation: the common trait of the sentences ranked at the top by WA is that they contain "don't" or "not". Thus, according to WA, the main criterion for a sentence to be similar to "okay, you don't get it, man." is that the sentence needs to contain negation. Again, the GRAN model stresses other, more relevant aspects, in this case, whether the sentence refers to not knowing or not understanding.
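The nearest-neighbour search behind such plots can be sketched with Faiss (which the Figure 1 caption mentions); cosine similarity is obtained as an inner product over L2-normalized vectors. The embeddings below are random placeholders:

```python
import numpy as np
import faiss  # pip install faiss-cpu

d = 300
xb = np.random.randn(10_000, d).astype("float32")  # corpus sentence embeddings
xq = np.random.randn(1, d).astype("float32")       # target sentence embedding
faiss.normalize_L2(xb)
faiss.normalize_L2(xq)

index = faiss.IndexFlatIP(d)      # exact inner-product search = cosine on unit vectors
index.add(xb)
sims, ids = index.search(xq, 10)  # ten most similar sentences
print(ids[0], sims[0])
```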
PPDB as training data
We also train the GRAN model on PPDB data. Wieting and Gimpel (2017) found that models trained on PPDB achieve good results on a wide range of semantic textual similarity tasks, thus, good performance could be expected on the Opusparcus test sets.
For English we use the PPDB 2.0 release; for languages other than English we use the 1.0 release, as 2.0 is not available for those languages. The phrasal paraphrase packs are used for all languages. We pick the number of paraphrase pairs in such a way that the training data contains as close as possible to the same number of tokens as the Opusparcus training data with 1 million positive examples. This ensures that the amount of training data is as similar as possible in both settings. The training setup is otherwise identical to that outlined above.
The results are shown in Table 6. There is a significant drop in performance when moving from in-domain training data (Opusparcus) to out-of-domain training data (PPDB). One possible explanation for this is that the majority of the phrase pairs in the PPDB dataset contain sentence fragments rather than full sentences.

Figure 1: Distributions of similarity scores between the target sentence "okay, you don't get it, man." and 10 million English sentences from OpenSubtitles. Cosine similarity between sentence embedding vectors is used. A sentence that is very close to the target sentence has a cosine similarity close to 1, whereas a very dissimilar sentence has a value close to -1. (Some of the similarity values are below -1 because of rounding errors in Faiss: https://github.com/facebookresearch/faiss/issues/297.) Section 4.2.1 discusses differences in the distributions between the GRAN and WA models.
Transfer learning
We also evaluate our English models on other data sets. Because we are primarily interested in paraphrastic sentence embeddings, we choose to evaluate our models on the MSRPC paraphrase corpus, as well as two semantic textual similarity tasks, SICK-R and STS14. The data represent a range of genres, and hence offer a view of the potential of Opusparcus for out-of-domain use and transfer learning. Because of the similarities between paraphrase detection and the semantic textual similarity tasks, we believe the two tasks to be mutually supportive. We present results for the WA model as well as the best GRAN model from Section 4.2. The evaluations are conducted using the SentEval toolkit (Conneau and Kiela, 2018). To obtain comparable results, we use the recommended default configuration for the SentEval parameters. The results are shown in Table 7.
We first note that our models fall short of the state-of-the-art results by a rather large margin. We hypothesize the discrepancy between the performance on MSRPC of our models and the BiLSTM-Max model of Conneau et al. (2017b) to be due to differences in the genre of training data. The conversational language of subtitles is vastly different from the news domain of MSRPC. Although the NLI data used by Conneau et al. (2017b) is derived from an image-captioning task and thus does not represent the news domain, it is at least closer to MSRPC in terms of the vocabulary and sentence structure. Most interesting is the difference between our WA model and the Paragram-phrase model of Wieting et al. (2016). These are essentially the same model, but trained on two different data sets. While the performance on SICK-R is comparable, our model significantly underperforms on STS14. Overall the results indicate that our models tend to overfit the domain of the Opusparcus data and consequently do not perform as well on out-of-domain data.

Table 4: The ten most similar sentences to "okay, you don't get it, man." in the Opusparcus English development set, based on sentence embeddings produced by the GRAN and WA models, respectively. Cosine similarities are shown along with the sentences. (The annotated "correct" paraphrase is "you don't understand.")
Discussion and Conclusion
Our results show that even a large amount of noise in training data is not always detrimental to model performance. This is a promising result, as automatically collected, large but noisy data sets are often easier to come by than clean, manually collected or annotated data sets. Our results can also guide model selection when noise in training data is a consideration.

Table 5: The ten most similar sentences to "Kann gut sein." in the Opusparcus German development set, based on sentence embeddings produced by the GRAN and WA models, respectively. The annotated "correct" paraphrase is here "Wahrscheinlich schon." ("Probably yes").
In future work we would like to explore how to most effectively leverage possibly noisy paraphrase data in learning general-purpose sentence embeddings for a wide range of transfer tasks. Investigating training procedures and encoding architectures that allow for robust models with the capability for generalization is a topic for future research.

Table 7: Transfer learning results on MSRPC, SICK-R and STS14. GRAN and WA denote our models. We also show results for a selection of models from the transfer learning literature. We use the evaluation measures that are customarily used in connection with these data sets. For MSRPC, the accuracy (left) and F1-score (right) are reported. For SICK-R we report Pearson's r, and for STS14 Pearson's r (left) and Spearman's rho (right). For all these measures a higher value indicates a better result.
Table 1: The numbers and proportions of different error types in the data discarded by the annotators. Note that some of the sentence pairs that have been discarded are actually correct and have been mistakenly removed by the annotators.
Table 2: Classification accuracies on the Opusparcus test sets for models trained on 1 million positive sentence pairs. AP (all paraphrases) is the majority baseline, which is the accuracy obtained if all sentence pairs in the test data are labeled as paraphrases. Consistent improvement is obtained by the WA model without segmentation (WA-M: "WA without Morfessor") and further by the WA model with segmentation. Whether the GRAN model outperforms WA is hard to tell from these figures, but this is further analyzed in Section 4.2.

     AP    WA-M  WA    GRAN
de   74.3  77.0  82.3  83.2
en   72.8  87.4  86.4  89.2
fi   61.0  74.7  80.3  80.1
fr   68.6  74.0  76.7  76.8
ru   65.4  61.4  70.9  69.7
sv   54.8  78.1  84.1  83.2
Table 3: Results on Opusparcus for GRAN (all languages) and WA (English only). The first six rows show the accuracies of the GRAN model at different estimated levels of correctly labeled positive training pairs: 80%, 70%, and 60%. In each entry in the table, the first number is the classification accuracy and the number in brackets is the number of assumed positive training pairs in millions. For comparison, the 1M column to the left repeats the values from Table 2, in which the size of the training set was the same for each language, regardless of noise levels; the estimated proportion of truly positive pairs in these setups is shown within brackets. The last row of the Table shows the performance of the WA model for English.
Table 4 (content recovered from a flattened two-column layout; the per-sentence cosine similarities, in the range 0.81 to 0.98, could not be reliably realigned with individual sentences):
GRAN: "no, you don't understand." · "you can't know that." · "you do not really know." · "no, i don't think you understand" · "you know, nobody has to know." · "you don't got it." · "no one will ever know." · "and no one will know." · "we don't know yet."
WA: "do not beat yourself up about that." · "please don't." · "well ... not everything." · "no, you don't understand." · "it's not up to you." · "okay, that's not necessary." · "you don't got it." · "don't go over." · "not all of it." · "you don't have to."
Table 5 (content recovered from a flattened two-column layout; the per-sentence cosine similarities, in the range 0.79 to 0.93, could not be reliably realigned with individual sentences). Target sentence: "Kann gut sein."
GRAN: "Möglicherweise." · "Ja, könnte er." · "Hast wohl Recht." · "So natürlich wie möglich." · "Ihr habt natürlich recht." · "Sie haben recht, natürlich." · "Ich denke, doch." · "Ja, ich denke schon." · "Wahrscheinlich schon." · "Ich bin mir sicher."
WA: "Das ist doch gut." · "Na, das ist gut." · "Dir geht es gut." · "Ihnen geht es gut." · "Sie ist in Ordnung." · "Ich kann es fühlen." · "Es ist alles gut." · "Mir geht's gut." · "Ist in Ordnung." · "Sie is okay."
Table 6: Results on Opusparcus test sets for models trained on PPDB. (The numeric rows of this table were lost in extraction.)

Table 7 (see the caption given above):

                 MSRPC       SICK-R   STS14
GRAN             69.5/80.6   .717     .40/.44
WA               67.1/79.1   .710     .54/.53
BiLSTM-Max       76.2/83.1   .884     .70/.67
Paragram-phrase  -           .716     .71/-
FastSent         72.2/80.3   -        .63/.64
Opusparcus is available for download at: http://urn.fi/urn:nbn:fi:lb-201804191
The figures used to approximate the data set sizes can be found in the presentation slides (slides 12-13) at https://helda.helsinki.fi//bitstream/handle/10138/237338/creutz2018lrec_slides.pdf
Further examples of similar sentences can be found in the supplemental material.
Semeval-2014 task 10: Multilingual semantic textual similarity. Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, Janyce Wiebe, Proceedings of the 8th international workshop on semantic evaluation. the 8th international workshop on semantic evaluationEneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. Semeval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th international workshop on semantic evaluation (SemEval 2014), pages 81-91.
On the properties of neural machine translation: Encoder-decoder approaches. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, Yoshua Bengio, Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Association for Computational Linguistics. Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. In Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation, pages 103-111. Association for Computational Linguistics.
Senteval: An evaluation toolkit for universal sentence representations. Alexis Conneau, Douwe Kiela, arXiv:1803.05449arXiv preprintAlexis Conneau and Douwe Kiela. 2018. Senteval: An evaluation toolkit for universal sentence representa- tions. arXiv preprint arXiv:1803.05449.
Supervised learning of universal sentence representations from natural language inference data. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, Antoine Bordes, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsAlexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017a. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680. Associ- ation for Computational Linguistics.
Supervised learning of universal sentence representations from natural language inference data. Alexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, Antoine Bordes, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsAlexis Conneau, Douwe Kiela, Holger Schwenk, Loïc Barrault, and Antoine Bordes. 2017b. Supervised learning of universal sentence representations from natural language inference data. In Proceedings of the 2017 Conference on Empirical Methods in Nat- ural Language Processing, pages 670-680. Associ- ation for Computational Linguistics.
Open Subtitles Paraphrase Corpus for Six Languages. Mathias Creutz, Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018). the 11th International Conference on Language Resources and Evaluation (LREC 2018)Miyazaki, JapanEuropean Language Resources Association (ELRAMathias Creutz. 2018. Open Subtitles Paraphrase Cor- pus for Six Languages. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. Eu- ropean Language Resources Association (ELRA).
Unsupervised discovery of morphemes. Mathias Creutz, Krista Lagus, Proceedings of the ACL workshop on Morphological and Phonological Learning (SIGPHON). the ACL workshop on Morphological and Phonological Learning (SIGPHON)Philadelphia, PA, USAMathias Creutz and Krista Lagus. 2002. Unsupervised discovery of morphemes. In Proceedings of the ACL workshop on Morphological and Phonological Learning (SIGPHON), pages 21-30, Philadelphia, PA, USA.
Automatically constructing a corpus of sentential paraphrases. Bill Dolan, Chris Brockett, Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Asia Federation of Natural Language Processing. the Third International Workshop on Paraphrasing (IWP2005). Asia Federation of Natural Language ProcessingBill Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005). Asia Federation of Natu- ral Language Processing.
Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. Bill Dolan, Chris Quirk, Chris Brockett, Proceedings of the 20th International Conference on Computational Linguistics, COLING '04. the 20th International Conference on Computational Linguistics, COLING '04Geneva, SwitzerlandAssociation for Computational LinguisticsBill Dolan, Chris Quirk, and Chris Brockett. 2004. Un- supervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Pro- ceedings of the 20th International Conference on Computational Linguistics, COLING '04, Geneva, Switzerland. Association for Computational Lin- guistics.
A theoretically grounded application of dropout in recurrent neural networks. Yarin Gal, Zoubin Ghahramani, Advances in Neural Information Processing Systems. D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. GarnettCurran Associates, Inc29Yarin Gal and Zoubin Ghahramani. 2016. A theo- retically grounded application of dropout in recur- rent neural networks. In D. D. Lee, M. Sugiyama, U. V. Luxburg, I. Guyon, and R. Garnett, editors, Advances in Neural Information Processing Systems 29, pages 1019-1027. Curran Associates, Inc.
European Language Resources Association. Juri Ganitkevitch, Chris Callison-Burch, The 9th edition of the Language Resources and Evaluation Conference. Reykjavik, IcelandThe multilingual paraphrase databaseJuri Ganitkevitch and Chris Callison-Burch. 2014. The multilingual paraphrase database. In The 9th edition of the Language Resources and Evaluation Confer- ence, Reykjavik, Iceland. European Language Re- sources Association.
PPDB: The paraphrase database. Juri Ganitkevitch, Benjamin Van Durme, Chris Callison-Burch, Proceedings of NAACL-HLT. NAACL-HLTAtlanta, GeorgiaAssociation for Computational LinguisticsJuri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pages 758-764, Atlanta, Georgia. Association for Compu- tational Linguistics.
Understanding the difficulty of training deep feedforward neural networks. Xavier Glorot, Yoshua Bengio, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. the Thirteenth International Conference on Artificial Intelligence and StatisticsXavier Glorot and Yoshua Bengio. 2010. Understand- ing the difficulty of training deep feedforward neu- ral networks. In Proceedings of the Thirteenth In- ternational Conference on Artificial Intelligence and Statistics, pages 249-256.
Learning distributed representations of sentences from unlabelled data. Felix Hill, Kyunghyun Cho, Anna Korhonen, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsFelix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1367-1377. Associ- ation for Computational Linguistics.
Adam: A method for stochastic optimization. Diederik P. Kingma, Jimmy Lei Ba, International Conference on Learning Representations. Diederik P. Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. In International Conference on Learning Representations.
Skip-thought vectors. Ryan Kiros, Yukun Zhu, R Ruslan, Richard Salakhutdinov, Raquel Zemel, Antonio Urtasun, Sanja Torralba, Fidler, Advances in Neural Information Processing Systems. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. GarnettCurran Associates, Inc28Ryan Kiros, Yukun Zhu, Ruslan R Salakhutdinov, Richard Zemel, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Skip-thought vectors. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Infor- mation Processing Systems 28, pages 3294-3302. Curran Associates, Inc.
OpenSub-titles2016: Extracting large parallel corpora from movie and TV subtitles. Pierre Lison, Jörg Tiedemann, Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). the 10th International Conference on Language Resources and Evaluation (LREC 2016)Portorož, SloveniaPierre Lison and Jörg Tiedemann. 2016. OpenSub- titles2016: Extracting large parallel corpora from movie and TV subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia.
OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Corpora. Pierre Lison, Jörg Tiedemann, Milen Kouylekov, Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018). the 11th International Conference on Language Resources and Evaluation (LREC 2018)Miyazaki, JapanEuropean Language Resources Association (ELRAPierre Lison, Jörg Tiedemann, and Milen Kouylekov. 2018. OpenSubtitles2018: Statistical Rescoring of Sentence Alignments in Large, Noisy Parallel Cor- pora. In Proceedings of the 11th International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan. European Language Resources Association (ELRA).
A sick cure for the evaluation of compositional distributional semantic models. Marco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014). the 9th International Conference on Language Resources and Evaluation (LREC 2014)Reykjavik, IcelandMarco Marelli, Stefano Menini, Marco Baroni, Luisa Bentivogli, Raffaella Bernardi, Roberto Zamparelli, et al. 2014. A sick cure for the evaluation of com- positional distributional semantic models. In Pro- ceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014), Reykjavik, Iceland.
Collecting and exploring everyday language for predicting psycholinguistic properties of words. Gustavo Henrique Paetzold, Lucia Specia, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanGustavo Henrique Paetzold and Lucia Specia. 2016. Collecting and exploring everyday language for pre- dicting psycholinguistic properties of words. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 669-1679, Osaka, Japan.
PPDB 2.0: Better paraphrase ranking, finegrained entailment relations, word embeddings, and style classification. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, Chris Callison-Burch, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingShort Papers; Beijing, ChinaAssociation for Computational LinguisticsEllie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better paraphrase ranking, fine- grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd An- nual Meeting of the Association for Computational Linguistics and the 7th International Joint Confer- ence on Natural Language Processing (Short Pa- pers), pages 425-430, Beijing, China. Association for Computational Linguistics.
Monolingual machine translation for paraphrase generation. Chris Quirk, Chris Brockett, William B Dolan, Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP2004). the Conference on Empirical Methods in Natural Language Processing (EMNLP2004)Barcelona, SpainChris Quirk, Chris Brockett, and William B. Dolan. 2004. Monolingual machine translation for para- phrase generation. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing (EMNLP2004), pages 142-149, Barcelona, Spain.
Neural machine translation of rare words with subword units. Rico Sennrich, Barry Haddow, Alexandra Birch, Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics1Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725. Association for Computational Linguistics.
Learning general purpose distributed sentence representations via large scale multi-task learning. Sandeep Subramanian, Adam Trischler, Yoshua Bengio, Christopher J Pal, International Conference on Learning Representations. Sandeep Subramanian, Adam Trischler, Yoshua Ben- gio, and Christopher J. Pal. 2018. Learning gen- eral purpose distributed sentence representations via large scale multi-task learning. In International Conference on Learning Representations.
Building a multilingual parallel subtitle corpus. Jörg Tiedemann, Proceedings of the 17th Conference on Computational Linguistics in the Netherlands (CLIN 17). the 17th Conference on Computational Linguistics in the Netherlands (CLIN 17)Leuven, BelgiumJörg Tiedemann. 2007. Building a multilingual paral- lel subtitle corpus. In Proceedings of the 17th Con- ference on Computational Linguistics in the Nether- lands (CLIN 17), Leuven, Belgium.
Synchronizing translated movie subtitles. Jörg Tiedemann, Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008). the 6th International Conference on Language Resources and Evaluation (LREC 2008)Marrakech, MoroccoJörg Tiedemann. 2008. Synchronizing translated movie subtitles. In Proceedings of the 6th Interna- tional Conference on Language Resources and Eval- uation (LREC 2008), Marrakech, Morocco.
Finding alternative translations in a large corpus of movie subtitles. Jörg Tiedemann, Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016). the 10th International Conference on Language Resources and Evaluation (LREC 2016)Portorož, SloveniaJörg Tiedemann. 2016. Finding alternative translations in a large corpus of movie subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016), Portorož, Slovenia.
Morfessor 2.0: Python implementation and extensions for Morfessor Baseline. Sami Virpioja, Peter Smit, Stig-Arne Grönroos, Mikko Kurimo, 25/2013Aalto University publication series SCIENCE + TECHNOLOGY, Aalto University. HelsinkiTechnical ReportSami Virpioja, Peter Smit, Stig-Arne Grönroos, and Mikko Kurimo. 2013. Morfessor 2.0: Python im- plementation and extensions for Morfessor Baseline. Technical Report 25/2013, Aalto University publica- tion series SCIENCE + TECHNOLOGY, Aalto Uni- versity, Helsinki.
Measuring the effect of conversational aspects on machine translation quality. Arianna Marlies Van Der Wees, Christof Bisazza, Monz, Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers. COLING 2016, the 26th International Conference on Computational Linguistics: Technical PapersOsaka, JapanMarlies van der Wees, Arianna Bisazza, and Christof Monz. 2016. Measuring the effect of conversational aspects on machine translation quality. In Proceed- ings of COLING 2016, the 26th International Con- ference on Computational Linguistics: Technical Papers, pages 2571-2581, Osaka, Japan.
From paraphrase database to compositional paraphrase model and back. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, Transactions of the Association for Computational Linguistics. 3John Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2015. From paraphrase database to com- positional paraphrase model and back. Transactions of the Association for Computational Linguistics, 3:345-358.
Towards universal paraphrastic sentence embeddings. John Wieting, Mohit Bansal, Kevin Gimpel, Karen Livescu, Proceedings of the International Conference on Learning Representations. the International Conference on Learning RepresentationsJohn Wieting, Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2016. Towards universal paraphrastic sen- tence embeddings. In Proceedings of the Interna- tional Conference on Learning Representations.
Revisiting recurrent networks for paraphrastic sentence embeddings. John Wieting, Kevin Gimpel, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsJohn Wieting and Kevin Gimpel. 2017. Revisiting re- current networks for paraphrastic sentence embed- dings. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 2078-2088. Associa- tion for Computational Linguistics.
| [
"https://github.com/facebookresearch/faiss/issues/297.)"
] |
[
"Judging Chemical Reaction Practicality From Positive Sample only Learning",
"Judging Chemical Reaction Practicality From Positive Sample only Learning"
] | [
"Shu Jiang \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Zhuosheng Zhang \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Hai Zhao zhaohai@cs.sjtu.edu.cn \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Jiangtong Li \nKey Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nCollege of Zhiyuan\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Yang Yang \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Bao-Liang Lu \nDepartment of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n\nKey Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina\n",
"Ning Xia \nChemical.AI\n200240ShanghaiChina\n"
] | [
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"College of Zhiyuan\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Department of Computer Science and Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering\nShanghai Jiao Tong University\n200240ShanghaiChina",
"Chemical.AI\n200240ShanghaiChina"
] | [] | Chemical reaction practicality is the core task among all symbol intelligence based chemical information processing; for example, it provides an indispensable clue for further automatic synthesis route inference. Considering that chemical reactions have been represented in a language form, we propose a new solution to generally judge the practicality of organic reactions without considering complex quantum physical modeling or chemistry knowledge. While tackling the practicality judgment as a machine learning task from positive and negative (chemical reaction) samples, all existing studies have to carefully handle the serious insufficiency issue on the negative samples. We propose an auto-construction method to effectively solve this long-standing difficulty. Experimental results show our model can effectively predict the practicality of chemical reactions, achieving a high accuracy of 99.76% on real large-scale chemical lab reaction practicality judgment. | null | [
"https://arxiv.org/pdf/1904.09824v1.pdf"
] | 128,271,760 | 1904.09824 | 1b3a8f6d01c3f388e533e8641696b448675b6612 |
Judging Chemical Reaction Practicality From Positive Sample only Learning
Shu Jiang
Department of Computer Science and Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Zhuosheng Zhang
Department of Computer Science and Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Hai Zhao zhaohai@cs.sjtu.edu.cn
Department of Computer Science and Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Jiangtong Li
Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
College of Zhiyuan
Shanghai Jiao Tong University
200240ShanghaiChina
Yang Yang
Department of Computer Science and Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Bao-Liang Lu
Department of Computer Science and Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Key Laboratory of Shanghai Education Commission for Intelligent Interaction, and Cognitive Engineering
Shanghai Jiao Tong University
200240ShanghaiChina
Ning Xia
Chemical.AI
200240ShanghaiChina
Judging Chemical Reaction Practicality From Positive Sample only Learning
* These authors contributed equally to this work. † Corresponding author.
Chemical reaction practicality is the core task among all symbol intelligence based chemical information processing; for example, it provides an indispensable clue for further automatic synthesis route inference. Considering that chemical reactions have been represented in a language form, we propose a new solution to generally judge the practicality of organic reactions without considering complex quantum physical modeling or chemistry knowledge. While tackling the practicality judgment as a machine learning task from positive and negative (chemical reaction) samples, all existing studies have to carefully handle the serious insufficiency issue on the negative samples. We propose an auto-construction method to effectively solve this long-standing difficulty. Experimental results show our model can effectively predict the practicality of chemical reactions, achieving a high accuracy of 99.76% on real large-scale chemical lab reaction practicality judgment.
INTRODUCTION
Organic reactions, including addition reactions (1), elimination reactions (2), substitution reactions (3)(4)(5), pericyclic reactions (6), rearrangement reactions (7,8) and redox reactions (9), have been studied for hundreds of years. Owing to the development of organic methodology (10), hundreds of millions of reactions have been practised and more and more compounds have been produced. Nevertheless, the mechanism of organic reactions has not been completely understood, and the practicality of a new organic reaction still mainly relies on human judgment based on expertise and eventual exploratory synthesis verification.
Modeling organic reactions through physical-level methods, such as quantum mechanical modeling, is a traditional way to recognize them, whereas it may lead to an over-complicated model with poor informative representation (11): even a simple reaction containing only several atoms is essentially difficult to model, due to the need to consider the combinatorial component arrangement using quantum chemistry methods. Predicting a complicated chemical reaction under a certain condition is even more challenging (12), because it requires considering every transition state and the combination between molecules and their given environment.
Instead, the latest symbol models for chemical reactions have been proposed, in which chemical elements and molecules are regarded as symbols and reactions are considered as text carrying chemical information. Consequently, most text processing methods, including machine learning and especially deep learning, can be applied to text in the chemical language. Support Vector Machines (SVM) have been proved useful to predict the result of crystallization of templated vanadium selenites (13). However, this requires complicated manual feature selection on the basis of necessary chemical knowledge. Information retrieval is also an effective way to predict the products of organic reactions (14,15), which presents a limited candidate set for ranking.
Continuous representation of molecules (16) provides a convenient method for automatically generating chemical structures. More recently, some researchers (17) cast the reaction prediction task as a translation problem by introducing a template-free sequence-to-sequence model, trained end-to-end and fully data-driven, and achieved an accuracy of 80.1% without relying on auxiliary knowledge such as reaction rules. Recently, Abigail Doyle and colleagues (18) proposed a random forest algorithm which can accurately predict the yield of Buchwald-Hartwig cross-coupling reactions given many detailed features of the materials in the reactions, though their computational model can only process one kind of reaction and requires extensive information about the reactions.
Existing work using machine learning for chemical information processing either relies on strong chemical knowledge sources or focuses on specific types of reactions. Distinctively from previous studies, we provide a cutting-edge, symbol-only model on chemical text of organic reactions from a general background. A completely data-driven method is proposed for open types of chemical organic reactions, releasing the inconvenient prerequisite of chemical prior knowledge. Without complex parameter setting or manual chemical knowledge based feature selection, our approach can automatically discover the salient features and reaction patterns for effective reaction practicality judgment.
In recent years, natural language processing has widely adopted embedding representations for text units, a sort of low-dimensional continuous representation learned from neural networks. Following the latest advances in deep language processing, embeddings are also used to represent chemical text segments for chemical reaction learning. Using a data-driven mode, our model directly learns from a large scale of available reaction data. We use reaction formulations collected from publications, covering about 1.7 million reactions. Practicality judgment can be straightforwardly formulated as a discriminative machine learning task over two types of reactions, positive and negative. However, the latter, negative reactions, are seldom reported in the chemical literature and are thus usually hard to collect. When quite a lot of positive reactions are collocated with few negative ones, machine learning models have to struggle with a seriously imbalanced training dataset. In this work, we propose an effective chemical rule based method for negative reaction generation to cope with this long-standing challenge. Eventually, given the reactants and products, our model can accurately judge the reaction practicality.
Our model pipeline is as follows. We first preprocess the SMILES sequence of each reactant and product atom-wise, and adopt an unsupervised segmentation algorithm to tokenize the resulting text into segments, in a natural language processing way. Then the text or symbol difference between reactants and products, which stands for the reaction steps, is extracted and tagged by an edit distance detection operation on both sides of the reaction text. For an effective representation, all the resulting chemical text segments are presented in embedding form, so that either the reactants or the products can be put into a vector representation as well. At last, the reactants and the products, which are all in vectors, are fed to a neural network for practicality learning and judgment.
METHOD
For chemical reaction prediction, the key point is to effectively capture the internal relationships between a reactant and the corresponding product representation, along with the Reaction Symbol Distance (RSD). Note that we assume unreactive reactions will always remain unreactive under all possible, known reaction conditions; thus we remove all reaction conditions in our judgment. The task is formulated as language processing over the corresponding chemical text. As shown in Figure 1, the text segments of reactants and products are represented as vectors in a low-dimensional embedding representation. Then, a deep neural network is trained to learn the chemical principles of reactions by transforming the feature representations of reactants and products. After training, given a reaction text input, the model will judge its practicality.
Unsupervised Tokenization
SMILES (Simplified Molecular Input Line Entry System) is a line notation for entering and representing molecules and reactions using short ASCII strings, which was initiated by David Weininger at the USEPA Mid-Continent Ecology Division Laboratory in Duluth in the 1980s (19). The primary reason SMILES is more useful than an extended connection table is that it is a linguistic construct, rather than a computer data structure. SMILES is a true language, albeit with a simple vocabulary (atom and bond symbols) and only a few grammar rules. Examples of SMILES notation are given in Table 1.
At the very beginning, we remove the hydrogen atoms and the atom mappings from the reaction string, and canonicalize the molecules. We treat a chemical reaction described by SMILES as a kind of text in natural language. Considering that chemical elements (atoms) and the various SMILES bond symbols are characters in the chemical language, a sequence of SMILES which stands for a chemical compound can be regarded as the corresponding sentence. Therefore we need to mine the sequence to find a basic meaningful linguistic unit, the word. As SMILES-encoded text does not provide a word segmentation with solid chemical meaning to facilitate the chemical text processing, we turn to unsupervised tokenization solutions from existing natural language processing (20). We therefore adopt a goodness measure based method to tokenize each reactant and product text in SMILES into a sequence of words. Let $W = \{(w_i, g(w_i))\}_{i=1,\ldots,n}$ be a list of character n-grams (namely, word candidates), each associated with a goodness score for how likely it is to be a true word from a linguistic/chemical perspective, where $w_i$ is a word candidate and $g(w_i)$ is its goodness function. The adopted segmentation algorithm is a greedy maximal-matching one with respect to a goodness score.
$$\{w^*, t^*\} = \arg\max_{w_1 \ldots w_i \ldots w_n = T} \sum_{i=1}^{n} g(w_i) \quad (1)$$
It works on $T$ to output the best current word $w^*$ repeatedly, with $T = t^*$ for the next round, with each $\{w, g(w)\} \in W$.
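A minimal sketch of this greedy maximal-matching loop, assuming the goodness scores have already been computed; the toy scores below are made up for illustration:

```python
def segment(text, goodness, max_len=8):
    """Repeatedly emit the highest-goodness word candidate prefixing the
    remaining text (Eq. 1), falling back to a single character."""
    words = []
    while text:
        best, best_g = text[0], float("-inf")
        for k in range(1, min(max_len, len(text)) + 1):
            cand = text[:k]
            if goodness.get(cand, float("-inf")) > best_g:
                best, best_g = cand, goodness[cand]
        words.append(best)
        text = text[len(best):]  # T = t* for the next round
    return words

toy_goodness = {"CC(": 2.1, "C": 0.1, "=O": 1.7, ")": 0.2, "O": 0.3}  # made-up DLG scores
print(segment("CC(=O)O", toy_goodness))  # ['CC(', '=O', ')', 'O']
```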
In our work, we use Description Length Gain (DLG) as the goodness measurement for a candidate character n-gram from the chemical text. In principle, the higher the goodness score of a candidate, the more likely it is to be a true word. DLG was proposed by Kit and Wilks (21) for compression-based unsupervised segmentation. DLG considers all occurrences of $x_{i..j}$ in a corpus $X = x_1 x_2 \ldots x_n$, and its goodness score is defined as
$$g_{DLG}(x_{i..j}) = L(X) - L(X[r \to x_{i..j}] \oplus x_{i..j}), \quad (2)$$
where $X[r \to x_{i..j}]$ represents the resultant corpus obtained by replacing all items of $x_{i..j}$ with a new symbol $r$ throughout $X$, and $\oplus$ denotes the concatenation operator. $L(\cdot)$ is the empirical description length of a corpus in bits, which can be estimated by the Shannon-Fano code or Huffman code as below, following classic information theory (22),
$$L(X) \doteq -|X| \sum_{x \in V} \hat{p}(x) \log_2 \hat{p}(x), \quad (3)$$
where $|\cdot|$ denotes the string length, $V$ is the character vocabulary of $X$, and $\hat{p}(x)$ is $x$'s frequency in $X$.
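For illustration, Eqs. (2)-(3) can be estimated directly from character frequencies, as in the sketch below; the helper names and the placeholder symbol used for $r$ are our own:

```python
import math
from collections import Counter

def dl(text):
    """Empirical description length L(X) in bits (Eq. 3)."""
    n, counts = len(text), Counter(text)
    return -n * sum((c / n) * math.log2(c / n) for c in counts.values())

def dlg(text, ngram):
    """Description length gain of `ngram` (Eq. 2): replace all of its
    occurrences by a fresh symbol r, concatenate the ngram, and compare."""
    r = "\x00"  # assumed not to occur in the corpus
    return dl(text) - dl(text.replace(ngram, r) + ngram)

corpus = "CC(=O)OC1=CC=CC=C1C(=O)O" * 20  # toy corpus of repeated aspirin SMILES
print(dlg(corpus, "C(=O)O"))              # higher scores mark better word candidates
```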
Reaction Symbol Distance (RSD) Generation
To formally represent the text difference from reactants to products in a reaction formula, we introduce the formal concept of Reaction Symbol Distance (RSD), which indicates how the source chemical text can be transformed into the target one through a series of symbol inserting and deleting operations. The operation series can be decoded by calculating the edit distance (23).
Edit distance is used to quantify how dissimilar two strings are to one another by counting the minimum number of operations required to transform one string into the other.
For a source sequence $S = s_1 s_2 \ldots s_n$ and a target sequence $T = t_1 t_2 \ldots t_m$, the RSD sequence $R = r_1 r_2 \ldots r_n$ is encoded by the following tags:
• AD indicates a string should be added right before the corresponding location.
• AR indicates the corresponding symbol should be replaced by the string given with the tag.

• RR indicates the corresponding symbol should be deleted.

• A blank tag means that there is no operation at the corresponding location.
All compound sequences $S$ and $T$ are split into elements, and the resulting RSD from $S$ to $T$ is illustrated in Figure 2. A code sketch of this procedure is given below.
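The sketch performs a standard Levenshtein backtrace; the tag strings and the underscore standing in for the blank no-op tag are our own illustrative encoding, and insertions are attached to the position they precede:

```python
def rsd(source, target):
    """Align source to target with unit edit costs and read off RSD-style tags."""
    n, m = len(source), len(target)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if source[i - 1] == target[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    tags, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + (source[i - 1] != target[j - 1]):
            tags.append("_" if source[i - 1] == target[j - 1] else "AR:" + target[j - 1])
            i, j = i - 1, j - 1
        elif i > 0 and d[i][j] == d[i - 1][j] + 1:
            tags.append("RR")                    # delete the source symbol
            i -= 1
        else:
            tags.append("AD:" + target[j - 1])   # insert before this position
            j -= 1
    return tags[::-1]

print(rsd("CCO", "CC=O"))  # ['_', '_', 'AD:=', '_']
```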
The data processing steps together with examples are summarized in Table 2. The same preprocessing steps were applied to all datasets.
Embedding
In our adopted neural model, an embedding layer is used to map each element or segmented word of a sequence into a vector of dimension $d$. Our model takes three types of inputs: reactant, RSD and product. After embedding, a reactant sequence with $n$ words is represented as $R \in \mathbb{R}^{d \times n}$. Similarly, we obtain the embeddings of the reactant sequence $R$, the RSD sequence $S$ and the product sequence $P$. Then, the input sequences are aggregated into two compact representations through projection and concatenation:
$$M_1 = \{R_1 \oplus S_1, \ldots, R_h \oplus S_h\}, \qquad M_2 = \{P_1 \oplus S_1, \ldots, P_h \oplus S_h\} \quad (4)$$
Siamese Network
In order to learn the optimal representations of chemical reactants $M_1$ and products $M_2$ with RSD, we propose to use a pair-based network structure called a Siamese network, which has been proven to be an effective framework for image matching (24) and sequence similarity comparison tasks (25,26). Since negative reaction instances are extremely insufficient and most reported yields concentrate in a narrow range, a common neural network suffers from the imbalanced learning difficulty. The structure of a Siamese network consists of two identical branches that share weights and parameters. Each branch poses a deep neural network for feature learning. In this work, we adopt a Long Short-Term Memory (LSTM) network (27) as the branch component due to its advantages for sequence modeling. Figure 3 shows an LSTM based branch architecture. The LSTM unit is defined as follows.
$$i_t = \sigma(W_w^i x_t + W_h^i h_{t-1} + b_i), \quad (5)$$
$$f_t = \sigma(W_w^f x_t + W_h^f h_{t-1} + b_f), \quad (6)$$
$$u_t = \sigma(W_w^u x_t + W_h^u h_{t-1} + b_u), \quad (7)$$
$$c_t = f_t \odot c_{t-1} + i_t \odot \tanh(W_w^c x_t + W_h^c h_{t-1} + b_c), \quad (8)$$
$$h_t = \tanh(c_t) \odot u_t, \quad (9)$$
where $\sigma$ stands for the sigmoid function and $\odot$ represents element-wise multiplication; $\oplus$ denotes vector concatenation. $i_t$, $f_t$, $u_t$, $c_t$, $h_t$ are the input gates, forget gates, memory cells, output gates and the current states, respectively. Given a sequence input, the network computes the hidden state sequence $h_t$ by applying the formulation for each time step.
After embedding, the vectorized inputs M_1 and M_2 are separately fed to a forward LSTM and a backward LSTM (BiLSTM) to obtain the internal features of the two directions. The output for each input is the concatenation of the vectors from both directions: h_t = \overrightarrow{h_t} \oplus \overleftarrow{h_t}. Hence, we have the processed representations of the reactant and product with RSD, \tilde{M}_1 = BiLSTM(M_1) and \tilde{M}_2 = BiLSTM(M_2). Our model then concatenates the representations \tilde{M}_1 and \tilde{M}_2 and feeds them to a Multi-Layer Perceptron (MLP) layer to form a final representation. The output of the model is activated by a sigmoid function to ensure the prediction lies in [0, 1]:

y = \frac{1}{1 + e^{-x}} \qquad (10)

where x is the output of the MLP and y is the prediction.
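Putting the pieces together, a minimal PyTorch sketch of the Siamese forward pass might look as follows; the last-step pooling and the MLP sizes are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SiameseBiLSTM(nn.Module):
    """Two weight-sharing BiLSTM branches, concatenation, MLP, and the
    sigmoid of Eq. (10)."""
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        # A single BiLSTM applied to both inputs implements weight sharing.
        self.branch = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.mlp = nn.Sequential(nn.Linear(4 * hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def encode(self, M):
        out, _ = self.branch(M)   # (batch, seq, 2*hidden), both directions
        return out[:, -1, :]      # assumed pooling: last time step

    def forward(self, M1, M2):
        z = torch.cat([self.encode(M1), self.encode(M2)], dim=-1)
        return torch.sigmoid(self.mlp(z)).squeeze(-1)  # Eq. (10)
```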
Training objectives
For practicality judgment, we use binary cross entropy as the loss function.
L = -\frac{1}{N} \sum_{t=1}^{N} \left[ y_t \log \hat{y}_t + (1 - y_t) \log(1 - \hat{y}_t) \right] \qquad (11)
where \hat{y}_t denotes the prediction, y_t is the target, and t denotes the data index.
Setup
For practicality judgment, we pair the two positive sets with the two negative sets to form four combinations. The distributions of positive and negative reactions in the train/dev/test sets are shown in Table 3.
In our experiments, the ratio of the training set to the test set is 9:1, and 10% of the training set is held out as a development (dev) set.⁴
Evaluation metrics
Our practicality judgment evaluation is based on the following metrics: Accuracy, Precision, Recall, and F1-score. The four types of predictions are shown in Table 4.
Accordingly, we can calculate Accuracy, Precision, Recall, and F1-score as follows.
\text{Accuracy} = \frac{TP + TN}{TP + TN + FN + FP}, \qquad (12)
\text{Precision} = \frac{TP}{TP + FP}, \qquad (13)
\text{Recall} = \frac{TP}{TP + FN}, \qquad (14)
\text{F1-score} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}} \qquad (15)
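These metrics translate directly into code; a small helper such as the following (names illustrative) suffices:

```python
def metrics(tp: int, tn: int, fp: int, fn: int):
    """Eqs. (12)-(15) from the confusion-matrix counts of Table 4."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1
```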
EXPERIMENT

Practicality Judgment
Given input sequences, the model outputs reaction success probabilities. To evaluate the result, a threshold is required to distinguish positive from negative predictions. According to our preliminary experiments, the threshold is set to 0.5.⁵ The experimental results are shown in Table 8. We observe that positive reactions can be recognized reliably (nearly 99%).
Although the proportion of positive to negative cases is over 30:1, our model still achieves a high negative F1-score of more than 72%. Moreover, the rule-based negative dataset yields a higher negative F1-score. From the statistics of the datasets, we know the rule-based negative dataset is much bigger than Chemical.AI-Real-1, which not only alleviates the imbalance between positive and negative examples but also increases the diversity of negative examples.
Generalization Ability
In order to demonstrate the generalization ability of our learning model, we report the judgment results on the Chemical.AI-Real-2 dataset with different training settings in Table 9.
As Chemical.AI-Real-2 comes from real laboratory records, our model's predictions are effectively evaluated against real chemical experiments. Note that these negative reactions were expected to work by experienced chemists, which means they are literally correct under chemistry rules and that even chemists have difficulty predicting the practicality of these reactions. Therefore, when our model gives correct practicality predictions for these actually failed reactions, it performs better than human experts on these cases.
As we know, it is difficult to obtain sufficient failed reactions because they are rarely reported in the literature. Meanwhile, negative examples are indispensable for discriminative machine learning. Here we show that rule-generated negative reactions used for training set construction can yield remarkable prediction accuracy on real chemical reaction records, considering that the rule-generated negative samples perform best among all training settings in Table 9. This opens a new direction in the research of chemical reaction prediction.
To gain insight into how the threshold affects model performance on the Chemical.AI-Real-2 dataset, we record the prediction results while ranging the threshold from 0.1 to 1 with a step of 0.1. The visualization results are shown in Figure 5, which shows the best performance when the threshold is 0.5. The results in Figure 4 show that even when only a small amount of data from the same source as the test set is added to the training set, the judgment results improve greatly.
DLG Segmentation
The adopted unsupervised tokenization over the SMILES text is based on a set of words with significant DLG scores in terms of the goodness measure methods. Beyond their usefulness in our computational process, we also observe the chemical meaning of these words. Table 5 lists a small sample of the words with high DLG scores. For example, it is no surprise to chemists that the structure of a metal complex is key to many organic reactions, and in the word list we find that the number and the metallic element are always put in the same fragment, which means the position information of the ligands attached to the metallic element supports a better and more accurate embedding representation in our model. Meanwhile, most ordinary functional groups also fall within a single fragment, such as C=C, C#C, C=O and C#N, which means the model treats them as a group when processing a reaction, as organic chemists do in their research. We also find that the ring structure in a molecule is often divided across different fragments. Though seemingly irrational, the model actually recognizes different functional parts of a ring for more targeted processing, which is indeed helpful for extracting the reaction pattern in later processing.
Ablation Experiment
We investigate the effect of different features in our model by removing them one by one. As shown in Table 6, all the features contribute to the performance of our final system. If we remove either RSD or DLG Tokenization, the performance drops. This result indicates that both features play important and complementary roles in the feature representation.
Recurrent Neural Network Types
We also compare the Siamese network with standard recurrent neural networks: LSTM, BiLSTM, GRU, and BiGRU. The comparison of results is given in Table 7. The Siamese network clearly outperforms all the others, especially on the negative cases, which shows that the Siamese network can effectively handle the data imbalance issue.
Significance in Chemistry
In practice, we have shown that our model can be trained with rule-generated data as negative examples, while at test time it obtains significant judgment accuracies on real reaction records. This means our model is capable of extracting the features of both positive and negative examples and filtering out the bias introduced by the rules, which demonstrates remarkable modeling ability and will help advance the development of chemical engineering.
During testing, we find that the model can recognize some reactions which seem impractical but actually react, and some reactions which seem reactive but actually do not.
SMILES representations of structure can in turn be used as words in the vocabulary of other languages designed for the storage of chemical information and chemical intelligence. Some examples are shown in Table 1.

CCN(CC)CC | triethylamine
CC(=O)O | acetic acid
C1CCCCC1 | cyclohexane
c1ccccc1 | benzene
F/C=C/F | E-difluoroethene
F/C=C\F | Z-difluoroethene
N[C@@H](C)C(=O)O | L-alanine
N[C@H](C)C(=O)O | D-alanine
Table 1: Examples of SMILES
DATA

The reaction data for our model evaluation has 5 sources: (1) a public chemical reaction dataset, USPTO; (2) a large-scale reaction dataset extracted from reports of Chemical Journals with High Impact Factors (CJHIF); (3) rule-generated negative chemical reactions from the Chemical.AI laboratory¹; (4) real failed reactions from Chemical.AI partner laboratories; and (5) real reactions from Chemical.AI laboratories. Statistics:

• Positive reactions from USPTO (USPTO): This public chemical reaction dataset was extracted from US patent grants and applications dating from 1976 to September 2016 (28) by Daniel M. Lowe (29). The portion from granted patents contains 1,808,938 reactions described using SMILES.² Such reaction strings are composed of three groups of molecules: the reactants, the reagents, and the products, which are separated by a '>' sign. After data cleaning with RDKit (30), an open-source cheminformatics and machine learning tool, 269,132 items remained.

• Positive reactions from CJHIF (CJHIF): 3,219,165 reactions mined from high impact factor journals³ with reagent, solvent, and catalyst information, in addition to yield. After data cleaning and selection, we used the remaining 1,763,731 items.

• Rule-generated negative reactions from Chemical.AI (Chemical.AI-Rule): For every product in the positive reaction sets, we adopt a set of chemical rules to generate all possible reactions which may output the respective products. Then we filter the resulting reactions against a very large known positive reaction set from Chemical.AI (which contains 20 million known reactions collected from chemical literature and patents); all the remaining unreported reactions are taken as negative reactions. Due to memory limitations, we keep 100K rule-generated negative reactions in our dataset. Our idea for the auto-construction of negative chemical reaction samples is quite intuitive: we simply regard reactions that appear in no known literature as quite possibly negative. Provided the reference positive reaction set is large enough, such filtering yields quite reliable negative reactions.

• Real negative reactions from Chemical.AI (Chemical.AI-Real-1): 12,225 real failed reactions from the chemical experiment records of Chemical.AI partner laboratories. After data deduplication and canonicalization, 8,797 reactions remained.

• Real reactions from Chemical.AI (Chemical.AI-Real-2): 24,514 real reactions from the chemical experiment records of Chemical.AI partner laboratories, of which 16,137 are positive reactions and 8,377 are negative reactions, where the productivity of the negative reactions is 0%. This dataset is equally split into two parts: a training set and a test set.
Figure 6 illustrates the ROC (Receiver Operating Characteristic) curve, which relates to the diagnostic ability of a binary classifier system; the Area Under the Curve (AUC) of the ROC is 80.90%, which means this model can perform quite well when the threshold is set rightly.

Incremental Experiment

Different datasets may have different statistical distribution characteristics over reaction types. To fully examine the capacity of our model, we conduct a series of incremental experiments by mixing a small part of a different dataset into the original one and using the rest as the test set. We divided the Chemical.AI-Real-2 dataset into two parts, an incremental set and a test set, in the ratio of 1:1. Then we add different sized parts of the incremental set, with ratios [0.1, 0.2, ..., 0.9, 1.0], to the training set (USPTO + Chemical.AI-Real-1) and conduct the experiments, respectively.
Figures 7 and 8 show such highly confused examples.
CONCLUSION

We present a deep learning model to model real-world chemical reactions and unearth the factors governing reaction outcomes only from symbolic representations of chemical information. In contrast to conventional methods, which require massive manual features or are only evaluated on small datasets for specific reaction types, our approach is much simpler, end-to-end, and effective. In particular, we use a rule-based method to generate unlimited negative samples, and the results evaluated on real reaction records show satisfactory judgment performance. From a distinctive perspective, this work reveals the great potential of deep learning methods to help chemists judge the practicality of chemical reactions and develop more efficient experimental strategies that reduce the cost of invalid experiments. The resulting model can be used for more than practicality judgment; it also has the potential to support effective synthesis route design, which has been an ongoing task in our current study and in chemical practice.

Figure 1: Model for practicality judgment.
Figure 2: Generation of Reaction Symbol Distance (RSD). Example: COC(=O)CCl.Oc1ccccc1N(=O)=O>>COC(=O)COc1ccccc1N(=O)=O with RSD tags _ _ _ _ _ _ _ _ Cl . RR _ _ _ _ _ _ _ _ _ _ _ _ _ _
Figure 3: Siamese Network with LSTM based branch architecture.
Figure 4: The result of our incremental experiment.
Figure 5: Threshold effect of our model on the Chemical.AI-Real-2 dataset.
Figure 6: ROC Curve of our model on the Chemical.AI-Real-2 dataset.
Figure 7: Positive-like cases in the test set; "X" means the reaction cannot actually react. Examples: =O)C=C.SC1=CC=CC=C1>>CCOC(=O)CCSC1=CC=CC=C1 ; C=C1C2CCC(C2)C1=O.CCNCC>>CCN(CC)CC1C2CCC(C2)C1=O ; CO[C@H]1CN(C\C=C/C(=O)[C@@H]1OC)C(=O)CCC(=O)OC>>CO[C@H]1CN(CCCC(=O)[C@@H]1OC)C(=O)CCC(=O)OC
Figure 8: Negative-like cases in the test set; "O" means the reaction can actually react.
For practicality judgment, since there are no negative samples from the USPTO dataset, we pair the positive reactions from CJHIF and USPTO with the real negative and rule-generated negative samples from Chemical.AI. We take the data collected from laboratories (Chemical.AI-Real-2) as a test set to examine the generalization ability of our model. Considering calculation efficiency, we specify a maximum length of 100 words for each SMILES sequence and apply truncation or zero-padding when needed. The embedding weights are randomly initialized from a uniform distribution on the interval [-0.05, 0.05].
Step | Example (reactants > reagents > products)
1) Original string | [C:1]([C:3]1[CH:8]=[CH:7][CH:6]=[CH:5][C:4]=1[OH:9])#[N:2].[CH2:10]([CH:12]1[O:14][CH2:13]1)Cl>N1CCCCC1>[O:14]1[CH2:13][CH:12]1[CH2:10][O:9][C:4]1[CH:5]=[CH:6][CH:7]=[CH:8][C:3]=1[C:1]#[N:2]
2) Atom-mapping removal and canonicalization | ClCC1CO1.N#Cc1ccccc1O>N1CCCCC1>N#Cc1ccccc1OC
Table 2: Data processing steps. The tokens are separated by a space and individual molecules by a point token.

Data | Case | train | dev | test
CJHIF + Chemical.AI-Real-1 | Positive | 1,406,259 | 156,251 | 173,624
CJHIF + Chemical.AI-Real-1 | Negative | 7,178 | 798 | 874
USPTO + Chemical.AI-Real-1 | Positive | 217,992 | 24,221 | 26,919
USPTO + Chemical.AI-Real-1 | Negative | 7,176 | 797 | 877
CJHIF + Chemical.AI-Rule | Positive | 1,428,673 | 158,741 | 89,948
CJHIF + Chemical.AI-Rule | Negative | 158,689 | 17,632 | 10,052
USPTO + Chemical.AI-Rule | Positive | 217,799 | 24,200 | 90,221
USPTO + Chemical.AI-Rule | Negative | 24,421 | 2,713 | 9,779
Chemical.AI-Real-2 | Positive | 8,069 | - | 8,082
Chemical.AI-Real-2 | Negative | 4,202 | - | 4,175
Table 3: Distributions of positive and negative reactions from the train/dev/test sets in four combinations.

 | Predicted Positive | Predicted Negative
True Positive | TP | FN
True Negative | FP | TN
Table 4: Possible prediction results.

Word | DLG Score | Word | DLG Score
[Rh]789%10 | 78.18 | [Ru++]5678 | 74.61
Mo+6]89%10 | 68.77 | 3[Zn++]579 | 66.74
Mg]Br)cc1. | 52.68 | ccc3)[Ru++ | 49.01
C#C[Mg]Br | 47.60 | C=OBr[Mg] | 40.02
\C=C/I | 20.09 | (C#N) | 7.68
Table 5: Examples of SMILES words from DLG segmentation.

Features | Case | USPTO + Chemical.AI-Real-1 (Precision / Recall / F1 / Acc) | USPTO + Chemical.AI-Rule (Precision / Recall / F1 / Acc)
Full | P | 98.83 / 99.21 / 99.02 / 97.92 | 97.89 / 96.72 / 97.30 / 96.26
Full | N | 72.45 / 63.83 / 67.88 / - | 91.47 / 93.94 / 92.68 / -
w/o RSD | P | 98.81 / 98.80 / 98.81 / 97.78 | 97.21 / 96.54 / 96.87 / 95.62
w/o RSD | N | 63.27 / 63.63 / 63.45 / - | 90.58 / 92.31 / 91.44 / -
w/o Tokenization | P | 98.58 / 99.01 / 98.79 / 97.53 | 96.69 / 92.33 / 94.46 / 93.34
w/o Tokenization | N | 64.87 / 56.21 / 60.23 / - | 84.56 / 91.53 / 86.31 / -
w/o RSD & Tokenization | P | 98.68 / 98.61 / 98.65 / 97.32 | 95.75 / 93.58 / 94.65 / 93.31
w/o RSD & Tokenization | N | 58.28 / 59.41 / 58.84 / - | 83.77 / 88.87 / 86.24 / -
Table 6: Ablation study for practicality judgment (F1 score on positive case / negative case) (%).

Model | Case | Precision | Recall | F1-score | Accuracy
Siamese | Positive | 98.83 | 99.21 | 99.02 | 97.92
Siamese | Negative | 72.45 | 63.83 | 67.88 | -
LSTM | Positive | 98.78 | 98.92 | 98.85 | 97.77
LSTM | Negative | 65.24 | 60.49 | 63.83 | -
BiLSTM | Positive | 98.87 | 99.04 | 98.96 | 97.97
BiLSTM | Negative | 68.87 | 65.34 | 67.06 | -
GRU | Positive | 99.13 | 98.53 | 98.83 | 97.74
GRU | Negative | 61.92 | 73.43 | 67.19 | -
BiGRU | Positive | 98.83 | 99.18 | 99.01 | 97.93
BiGRU | Negative | 71.78 | 64.08 | 67.71 | -
Table 7: Comparison of F1-score for practicality judgment on USPTO + Chemical.AI-Real-1 (%).

Data | Case | Precision | Recall | F1-score | Accuracy
CJHIF + Chemical.AI-Real-1 | P | 99.82 | 99.95 | 99.88 | 99.76
CJHIF + Chemical.AI-Real-1 | N | 86.09 | 63.73 | 72.24 | -
USPTO + Chemical.AI-Real-1 | P | 98.83 | 99.21 | 99.02 | 97.92
USPTO + Chemical.AI-Real-1 | N | 72.45 | 63.83 | 67.88 | -
CJHIF + Chemical.AI-Rule | P | 96.19 | 99.91 | 98.02 | 97.96
CJHIF + Chemical.AI-Rule | N | 95.15 | 75.98 | 84.49 | -
USPTO + Chemical.AI-Rule | P | 97.89 | 96.72 | 97.30 | 98.97
USPTO + Chemical.AI-Rule | N | 91.47 | 93.94 | 92.68 | -
Table 8: Results for practicality judgment (%).

Training Data | Case | Precision | Recall | F1-score | Accuracy
CJHIF + Chemical.AI-Real-1 | P | 66.00 | 85.81 | 74.61 | 61.49
CJHIF + Chemical.AI-Real-1 | N | 34.42 | 14.42 | 20.32 | -
USPTO + Chemical.AI-Real-1 | P | 66.19 | 26.31 | 37.65 | 42.98
USPTO + Chemical.AI-Real-1 | N | 37.65 | 34.15 | 73.99 | -
CJHIF + Chemical.AI-Rule | P | 72.03 | 79.05 | 75.38 | 64.75
CJHIF + Chemical.AI-Rule | N | 50.01 | 40.57 | 44.80 | -
USPTO + Chemical.AI-Rule | P | 70.03 | 66.15 | 68.03 | 60.31
USPTO + Chemical.AI-Rule | N | 42.64 | 41.58 | 42.10 | -
Table 9: Results for practicality judgment on the Chemical.AI-Real-2 dataset (%).
Footnotes:
1. http://www.chemical.ai
2. https://figshare.com/articles/Chemical_reactions_from_US_patents_1976-Sep2016_/5104873
3. The journal list is attached in the Supplemental Material.
4. The dev set is used to supervise the training process in case of over-fitting or under-fitting in the deep learning scenario.
5. This is also the common setting for binary classification tasks, and our quantitative study shown in Figure 5 also verifies the optimal setting.
REFERENCES

Note: Our prediction models have been online; the link is http://bcmi.sjtu.edu.cn/~dl4chem

1. G. Casiraghi, L. Battistini, C. Curti, G. Rassu, F. Zanardi, Chemical Reviews 42, 3076 (2011).
2. Z. Abu El-Rub, E. Bramer, G. Brem, Industrial Engineering Chemistry Research 43, 6911 (2004).
3. S. Caron, N. M. Do, J. E. Sieser, D. C. Whritenour, P. D. Hill, Organic Process Research and Development 2, 324 (2009).
4. H. Yao, et al., ACS Applied Materials and Interfaces 6, 3575 (2016).
5. G. C. Fu, ACS Central Science 3, 692 (2017).
6. O. Wiest, D. C. Montiel, K. N. Houk, The Journal of Physical Chemistry 101, 8378 (1997).
7. Z. L. Song, C. A. Fan, Y. Q. Tu, Chemical Reviews 111, 7523 (2011).
8. J. A. Rincón, et al., Organic Process Research and Development 15, 1428 (2011).
9. Z. Flisak, W. Sun, ACS Catalysis 5, 4713 (2015).
10. S. E. Denmark, The Journal of Organic Chemistry 74, 2915 (2009).
11. A. Streitwieser, The Journal of Organic Chemistry 74, 4433 (2009).
12. R. E. Plata, D. A. Singleton, Journal of the American Chemical Society 137, 3811 (2015).
13. P. Raccuglia, et al., Nature 533, 73 (2016).
14. C. W. Coley, R. Barzilay, T. S. Jaakkola, W. H. Green, K. F. Jensen, ACS Central Science 3, 434 (2017).
15. J. N. Wei, D. Duvenaud, A. Aspuru-Guzik, ACS Central Science 2, 725 (2016).
16. R. Gómez-Bombarelli, et al., CoRR abs/1610.02415 (2016).
17. P. Schwaller, T. Gaudin, D. Lanyi, C. Bekas, T. Laino, arXiv preprint arXiv:1711.04810 (2017).
18. D. T. Ahneman, J. G. Estrada, S. Lin, S. D. Dreher, A. G. Doyle, Science (2018).
19. D. Weininger, Journal of Chemical Information and Computer Sciences 28, 31 (1988).
20. H. Zhao, C. Kit, Proceedings of the Third International Joint Conference on Natural Language Processing: Volume-I (2008).
21. C. Kit, Y. Wilks, Proceedings of 1998 International Conference on Chinese Information Processing (1998), pp. 223-229.
22. C. E. Shannon, A Mathematical Theory of Communication, vol. 27 (Blackwell Publishing Ltd, 1948).
23. S. B. Needleman, C. D. Wunsch, Journal of Molecular Biology 48, 443 (1970).
24. I. Melekhov, J. Kannala, E. Rahtu, International Conference on Pattern Recognition (2016), pp. 378-383.
25. J. Mueller, A. Thyagarajan, Thirtieth AAAI Conference on Artificial Intelligence (2016), pp. 2786-2792.
26. S. Chopra, R. Hadsell, Y. LeCun, Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on (2005), pp. 539-546, vol. 1.
27. S. Hochreiter, J. Schmidhuber, Long short-term memory (Springer Berlin Heidelberg, 1997).
28. D. Lowe, Chemical reactions from US patents (1976-Sep2016) (2017).
29. D. M. Lowe, Extraction of chemical structures and reactions from the literature, Ph.D. thesis, University of Cambridge (2012).
30. G. Landrum, et al., rdkit/rdkit: 2017_09_1 (Q3 2017) release (2017).
Gender and Racial Stereotype Detection in Legal Opinion Word Embeddings

Sean Matthews (Thomson Reuters Labs, Eagan, Minnesota, USA; sean.matthews@thomsonreuters.com)
John Hudzina (Thomson Reuters Labs, Eagan, Minnesota, USA; john.hudzina@thomsonreuters.com)
Dawn Sepehr (Thomson Reuters Labs, Toronto, Canada; dawn.sepehr@thomsonreuters.com)

arXiv:2203.13369; DOI: 10.1609/aaai.v36i11.21461

Abstract

Studies have shown that some Natural Language Processing (NLP) systems encode and replicate harmful biases with potential adverse ethical effects in our society. In this article, we propose an approach for identifying gender and racial stereotypes in word embeddings trained on judicial opinions from U.S. case law. Embeddings containing stereotype information may cause harm when used by downstream systems for classification, information extraction, question answering, or other machine learning systems used to build legal research tools. We first explain how previously proposed methods for identifying these biases are not well suited for use with word embeddings trained on legal opinion text. We then propose a domain adapted method for identifying gender and racial biases in the legal domain. Our analyses using these methods suggest that racial and gender biases are encoded into word embeddings trained on legal opinions. These biases are not mitigated by exclusion of historical data, and appear across multiple large topical areas of the law. Implications for downstream systems that use legal opinion word embeddings and suggestions for potential mitigation strategies based on our observations are also discussed.
Introduction
Recent developments in the field of Artificial Intelligence (AI) have transformed the way data is prepared and turned into information for interpretation in different domains spanning from social media to legal documents. These advancements have predominantly paved the way for creating more accurate predictive models, however, multiple research studies have shown that these systems are not without fault and have inadvertently perpetuated some harmful biases and stereotypes present in society by encoding and replicating patterns of bias present in the data upon which they are trained. Examples of such faulty systems include racial bias detected in hate speech predictive models for social media posts (Mozafari, Farahbakhsh, and Crespi 2020), unequal distribution of health care resources across racial groups due to incorrectly identifying patients who need significant healthcare (Obermeyer et al. 2019), displaying fewer Science, Technology, Engineering, and Mathematics (STEM) job advertisements to women compared to men (Lambrecht and Tucker 2019), and racial disparities demonstrated in recidivism risk prediction algorithms (Dieterich, Mendoza, and Brennan 2016).
While these tools are becoming more and more integrated in our societies and extend great benefits when deployed properly, they also pose high risks of imposing unfair life changing decision making upon minorities and more vulnerable communities. This may be particularly important when developing technologies used within the legal system due to the significant impacts the legal system in general has on individuals, businesses, government entities, and many other aspects of society. Deploying predictive technologies based on biased models into contexts where they are used by individuals interacting with the legal system at various levels could potentially result in a broad array of harmful effects in society including decreased quality of legal representation, increased costs associated with litigation, or even increased likelihood or duration of incarceration for individuals belonging to groups affected by these biases. Hence, it is imperative that we take steps towards identifying, mitigating, and ultimately eliminating these undesired effects in legal technologies relying on predictive systems.
We would like to emphasize that historical bias is not the only form of bias that can be found in AI systems: representation, measurement, aggregation, evaluation, and deployment biases have also been identified at different stages of developing an AI system (Suresh and Guttag 2020). In this article, however, we mainly focus on revealing historical and representational bias found in word embeddings trained on judicial opinions from U.S. case law and the distinct challenges that arise when developing predictive models in the legal domain.
Bias in Word Embeddings
Word embedding approaches such as word2vec (Mikolov et al. 2013a,b), GloVe (Pennington, Socher, and Manning 2014), etc., represent words in an n-dimensional space by encoding contextual co-occurrence statistics for words occurring in large text corpora. Since these associations are obtained from compiling large historical corpora, different types of biases that already exist in these texts will inevitably plague the word representations if appropriate considerations are not anticipated. Multiple previous studies have investigated and shown the presence of these biases in the form of either benign or neutral effects such as associating flowers with pleasant words vs. associating weapons with unpleasant words, or detrimental effects by encoding discrimination based on protected categories such as race, gender, social status, etc. (Bolukbasi et al. 2016; Caliskan, Bryson, and Narayanan 2017).
Different methodologies have been proposed to identify, visualize, and mitigate these effects. One prominent approach draws inspiration from a method originally developed in the field of social psychology to measure implicit bias in humans. The Implicit Association Test (IAT) measures the differential response times of human participants while categorizing sets of target words (e.g., flower and insect names) and attribute terms (e.g., pleasant or unpleasant) when they are paired in stereotypical (e.g. flower-pleasant) or counter-stereotypical (e.g. flower-unpleasant) configurations (Greenwald, McGhee, and Schwartz 1998). Both the stimuli used in the IAT and the general strategy of detecting bias through differential association strength have been adapted to develop bias detection strategies to measure bias encoded in word embeddings, such as the Word Embedding Association Test (WEAT; Caliskan, Bryson, and Narayanan 2017). The WEAT measures this difference in association strength between two groups by calculating the similarity of the embeddings in a set of target words used as a proxy for group membership (e.g., common female given names) with the embeddings in two sets of attribute words (e.g., pleasant and unpleasant terms) and computing the difference between these similarities, then comparing the difference in these association strengths to the same difference score calculated for a second target group (e.g., common male given names). Using methods based on this test, Rice et al. found evidence for racial biases being encoded in word embeddings trained on legal texts such as appellate court opinions from US state and federal courts (Rice, Rhodes, and Nteta 2019).
Legal Word Embedding Issues
As mentioned in the previous section, the social impact of encoded bias in word embeddings is becoming more significant in the legal domain which has direct implications on many aspects of our society as legal technologies using these types of representations gain greater adoption. In this section, we review some of the challenges that arise when working with legal text corpora and in the subsequent sections we present our solution for addressing these issues.
Names in Legal Text: Although the WEAT racial and gender stereotype tests relied on given names (Caliskan, Bryson, and Narayanan 2017), legal opinions construct more formal sentences than the Wikipedia and news articles used to train the publicly available GloVe embeddings (Pennington, Socher, and Manning 2014). For example, Figure 1 demonstrates the co-referencing of a natural person in legal opinions. Note that Gerald Bostock's given name is only referenced once. In most cases, the natural person's full name is typically referenced first, followed by the surname and/or pronouns thereafter. If a legal system applied the given name tests only, then bias encoded in surnames and gendered pronouns would go undetected.

Figure 1 (excerpt from Bostock v. Clayton County): Gerald Bostock worked for Clayton County, Georgia, as a child welfare advocate. Under his leadership, the county won national awards for its work. After a decade with the county, Mr. Bostock began participating in a gay recreational softball league. Not long after that, influential members of the community allegedly made disparaging comments about Mr. Bostock's sexual orientation and participation in the league. Soon, he was fired for conduct "unbecoming" a county employee.

Another concern specific to the legal domain is that legal opinions may introduce gender-occupational stereotypes because they typically state a judge's full name and judicial title. Historically, women account for only 12.3% of federal Title III judicial appointments (Federal Judicial Center 2012). Given that Caliskan, Bryson, and Narayanan (2017) found significant gender-occupation bias in non-legal text, and given the historical imbalance of female judges, embeddings built upon legal opinions potentially perpetuate this specific stereotype.
Positive & Negative Sentiment: Whereas WEAT evaluated sentiment in a generalized modern web corpus, the legal opinions contain historical domain specific terminology. The WEAT study evaluated several tests measuring positive and negative sentiment for various target groups. These tests are from the IAT with very small vocabularies as required due to fatigue effects in human participants (Caliskan, Bryson, and Narayanan 2017). The sentiment-based test must be adjusted for legal opinions because the general vocabularies used to describe positive and negative sentiment do not align with how positive and negative sentiment is expressed in judicial opinions (Rice and Zorn 2021).
Legal Outcomes: While the IAT tests mainly focus on negative or positive attributes, legal outcome extraction poses a greater risk of harm to protected classes than sentiment analysis. Courts document legal outcomes in docket entries, orders, judgments, and/or opinions. Litigation analytics extract legal outcomes from these free-text sources because many jurisdictions do not record outcomes in a structured form at a party level (Vacek et al. 2019). If the word embeddings influence outcome extraction based on a party's gender or race, then embedding-based analytics may amplify racial and gender bias by causing parties to settle for something other than their case's merits.
Contributions
As discussed previously, word embeddings are used in many practical NLP systems which operate on legal language. In this article, we propose an approach for identifying racial and gender biases encoded in word embeddings that are created using the text of legal opinions. This approach addresses multiple issues specific to legal language that have not been addressed in the previous work. These challenges deal with idiomatic phrases as well as specific considerations for adapting the WEAT tests to legal language for detection of bias. We also investigate how these biases have changed over time as well as their strengths in different topical areas of the law.
Proposed Approach
In this section we describe the main approach proposed for identifying bias in word embeddings created based on legal opinions. We first describe the legal corpus under study and the required data preparation. Next, we briefly discuss the Word Embedding Association Test (WEAT). Finally, we state how we addressed the challenges identified in the previous section with domain adapted tests.
Opinion Preparation & Embedding Construction
For our experiments, we examined embeddings created from a large corpus of U.S. legal opinions. The corpus includes over 12 million opinions from 1,949 current and historic jurisdictions dating back to 1650, 10x more opinions than a previous legal opinion bias study (Rice, Rhodes, and Nteta 2019). The main corpus includes U.S. federal, state, and territorial courts with the notable exception of tribal courts; the tribal court opinions are handled as a supplemental corpus from the source system. To generate the embeddings for the full corpus, the topical sub-corpora, and the historical sub-corpora, we follow the process in Figure 2.
Idiomatic Phrase Extraction: Prior to generating the embeddings, we extracted idiomatic phrases. Non-contextual word embeddings assume phrase meanings are composed from representations of individual words. However, this composability assumption does not always hold true for legal jargon and idiomatic phrases. For example, the Latin phrase pro hac vice means "for this time only" and does not have the same semantic meaning as the individual words pro, hac, & vice.
Although contextual embeddings handle this issue by representing the relative relationships between words, non-contextual embeddings only represent idiomatic phrases as single tokens (Mikolov et al. 2013b). To avoid overly large n-gram dictionaries, the phrase extractor only combines tokens that commonly appear together. Our phrase extractor used a Normalized Point-wise Mutual Information (NPMI) score to select the n-grams to add to the dictionary (Bouma 2009). NPMI scores range from -1 (never co-occurs) to 1 (always co-occurs), with 0 meaning the tokens are completely independent. We ran two passes of the phrase extractor, selecting phrases with a minimum NPMI score of 0.5.
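For illustration, a minimal bigram version of NPMI-based phrase selection might look as follows; this is a single-pass sketch, whereas the extractor described above runs two passes and can handle longer n-grams.

```python
import math
from collections import Counter

def npmi_bigrams(tokens, threshold=0.5):
    """Score adjacent token pairs by NPMI (Bouma 2009) and keep those at or
    above `threshold`. NPMI lies in [-1, 1]; 0 means independence."""
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    phrases = {}
    for (a, b), c_ab in bigram.items():
        p_ab = c_ab / n
        if p_ab == 1.0:          # degenerate corpus; NPMI undefined
            continue
        pmi = math.log(p_ab / ((unigram[a] / n) * (unigram[b] / n)))
        npmi = pmi / -math.log(p_ab)
        if npmi >= threshold:
            phrases[(a, b)] = npmi
    return phrases
```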
Embedding Training: Once the phrase extraction was completed, we trained embeddings on the complete corpus, as well as on sub-corpora defined by temporal cutoff dates and by topic (see section "Experimental Results"). Each embedding followed the same training procedure using a skip-gram word2vec model. For all embeddings, the hyper-parameters included a 300 dimension vector size, a minimum term frequency of 30, a 10^{-4} sampling threshold, a learning rate of 0.05, a window size of 10, and 10 negative samples.
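These hyper-parameters map roughly onto gensim's Word2Vec API as sketched below; `corpus_sentences` is a stand-in for the tokenized, phrase-merged opinion corpus, not the actual training data.

```python
from gensim.models import Word2Vec

# Stand-in corpus; in practice this is an iterable of token lists covering
# the full opinion corpus after phrase extraction.
corpus_sentences = [["motion", "for", "summary_judgment", "granted"]] * 50

model = Word2Vec(
    sentences=corpus_sentences,
    sg=1,             # skip-gram
    vector_size=300,  # 300-dimensional vectors
    min_count=30,     # minimum term frequency
    sample=1e-4,      # sub-sampling threshold
    alpha=0.05,       # learning rate
    window=10,        # context window size
    negative=10,      # negative samples
)
```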
Word Embedding Association Test
Our experiments apply the original word lists of Caliskan, Bryson, and Narayanan (2017) to the legal opinion embeddings, as well as tests based on domain-specific and expanded word lists. For each test, which includes both the target X and Y word lists and the attribute A and B word lists, we calculate the effect size (Cohen's d). We also calculate the standard error by sub-sampling the word lists with a simple bootstrapping procedure.
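A compact sketch of the WEAT effect size computation, following the standard formulation from Caliskan, Bryson, and Narayanan (2017); here `E` is assumed to be a dict mapping each word to its embedding vector.

```python
import numpy as np

def cos(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def assoc(w, A, B, E):
    """s(w, A, B): mean cosine similarity with A minus mean with B."""
    return (np.mean([cos(E[w], E[a]) for a in A])
            - np.mean([cos(E[w], E[b]) for b in B]))

def weat_effect_size(X, Y, A, B, E):
    """Cohen's d over the per-target association scores of X and Y."""
    sx = [assoc(x, A, B, E) for x in X]
    sy = [assoc(y, A, B, E) for y in Y]
    pooled = np.std(sx + sy, ddof=1)
    return (np.mean(sx) - np.mean(sy)) / pooled
```

The standard error mentioned above can be estimated by repeatedly applying `weat_effect_size` to random sub-samples of the word lists.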
Domain Adaptation
Once the embeddings were trained, we extended the Caliskan, Bryson, and Narayanan tests with new domain specific tests. This section details the methodologies used to generate legal specific target and attributes terms for the new tests. These updates include new attribute lists:
• Positive vs. Negative Legal
• Legal (Motion) Outcome
• Expanded Career vs. Family
The domain updates also include new target lists:
• Surnames by Race
• Male vs. Female Terms
• Judge Given Names
Positive vs. Negative Legal: In order to generate a legal-specific sentiment vocabulary, we implemented a minimally supervised approach developed by Rice and Zorn (2021). This work provided legal-specific lists of positive, V_p, and negative, V_n, seed terms (Rice and Zorn 2019) and a method for expanding the term sets. We then generated the expanded list based on the legal opinion corpus embeddings. The expanded positive valence terms were found by a cosine similarity search on the vector V_p − V_n; conversely, the negative valence terms were found by a cosine similarity search on the vector V_n − V_p. The expanded term lists were then manually reviewed to exclude any terms with obvious race or gender associations (e.g., "gentlemanly").¹
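A minimal sketch of this expansion step, assuming mean seed vectors and a cosine similarity ranking over the vocabulary (the function name and top-k cutoff are illustrative):

```python
import numpy as np

def expand_terms(pos_seeds, neg_seeds, E, k=100):
    """Return the k vocabulary terms most similar to the positive-minus-
    negative seed direction; swap the arguments for the negative expansion.
    Results still require manual review, as described above."""
    direction = (np.mean([E[w] for w in pos_seeds], axis=0)
                 - np.mean([E[w] for w in neg_seeds], axis=0))
    direction /= np.linalg.norm(direction)
    scores = {w: (v @ direction) / np.linalg.norm(v) for w, v in E.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]
```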
Legal (Motion) Outcome: Trial motions are discussed and reviewed within legal opinions. This text typically includes the party's surname, motion type, and disposition (Vacek et al. 2019). The manually created "Grant vs. Deny" attribute lists capture the positive and negative outcomes for a given motion.
Expanded Career vs. Family List: The expanded career list employed the same minimally supervised approach as the positive and negative legal attribute list expansion. Instead of finding terms along a positive/negative dimension of interest, we extracted terms along a career versus family axis in the embedding space. The seed lists contained the career and family terms from Caliskan, Bryson, and Narayanan (2017).
Surnames by Race: As noted in the introduction, court documents reference parties by their surnames throughout the document. For the racial stereotype experiments, we used the surnames list from the 2010 U.S. decennial census as a proxy for race similar to medical outcome studies (Kallus, Mao, and Zhou 2020). The census provides an estimated percentage of each race by surname. We sampled names from the list with over a 90% probability for a given race.
Although the U.S. Census provided a list of surnames, the referenced name is not guaranteed to refer to a natural person. Instead, the name may reference a legal person's (i.e., corporation) name, a place name, or a thing. To reduce the potential name overlap with common words, we employed three methods to either reduce and or eliminate multi-sense words from the surname list:
• Title cased the surnames to target proper nouns.
• Idiomatic phrase extraction to exclude non-person names like the State of Washington (Mikolov et al. 2013b).
• Centroid-based filtering to remove multi-sense words.
The centroid-based filter removes candidate surnames based on the following procedure developed for WEAT (Caliskan, Bryson, and Narayanan 2017). We computed a centroid vector based on the embedding vectors for all surnames in the U.S. 2010 Census and then computed the cosine similarity for each surname relative to the centroid. Finally, we removed 20% of the least similar names.

Once the filter was applied, we created the name lists for each test. While our target sample size was 200 surnames with at least 300 opinions per racial group, those criteria were not achievable for all races. Specifically, Native American and Alaskan names were proportionally underrepresented in the main corpus because fewer tribal court jurisdictions publish to and/or are collected by the source system compared to State and Federal jurisdictions. Table 1 shows the sample sizes for each group. We adjusted the sample size for each test pair of surnames based on the smallest sample size in the pair.

Judge Given Name List: In addition to the gendered first name list created by Caliskan, Bryson, and Narayanan (2017), we generated a gendered first name list based on judicial biographical data exported from the Free Law Project's CourtListener (Free Law Project 2021). The biographical information included both race and gender for both State and Federal judges. We calculated the percentages of female and male genders for each first name. For each gendered list, we select names that occur at least 90% of the time for that gender.
As with the surnames, some first names might overlap with place names, corporations, or other concepts. For example, Virginia might represent a judge's name or a state. As with the surnames, we employed the following procedures:
• Title cased the first name to target proper nouns.
• Idiomatic phrase extraction to exclude non-person names, like the Commonwealth of Virginia.
• Centroid-based filtering to remove multi-sense words.
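Both name lists use the same centroid-based filter; a minimal sketch, assuming `E` maps names to embedding vectors (the function name is illustrative):

```python
import numpy as np

def centroid_filter(names, E, drop_fraction=0.2):
    """Drop the least-typical fraction of candidate names: compute the
    centroid of all name vectors, rank names by cosine similarity to it,
    and keep the most similar 80%."""
    vecs = np.stack([E[n] for n in names])
    centroid = vecs.mean(axis=0)
    sims = (vecs @ centroid /
            (np.linalg.norm(vecs, axis=1) * np.linalg.norm(centroid)))
    order = np.argsort(sims)[::-1]                      # most similar first
    keep = order[: int(len(names) * (1 - drop_fraction))]
    return [names[i] for i in keep]
```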
Experimental Results
Legal Opinion Corpus
Before discussing the results for the legally adapted tests, we evaluate the opinion-based embedding using the Caliskan, Bryson, and Narayanan (2017) tests. Table 2 shows the baseline results.² While the legal opinion embeddings show a smaller effect size for the flower/insect control test than Caliskan's, the opinion flower/insect control still exhibits a large effect size. In addition, the instrument/weapons control displays equivalent effect sizes between Caliskan's Common Crawl embeddings and the legal opinion embeddings.
Similar to the baseline tests, the racial and gender stereotype tests show a strong effect size. Note that Table 2 uses the exact same target and attributes as Caliskan, Bryson, and Narayanan (2017). These tests use first names as the targets and non-legal terms as the attributes. Yet, we still see a moderate to strong effect for sentiment. The gender specific test replicated the occupational bias seen in past studies. Gender Effects: While the effect sizes were comparable between the Common Crawl corpus and the legal corpus, the legal specific gender tests show some differences. Table 3 includes the original Caliskan and the new Legal attributes. The "Grant vs. Deny" tests all show a medium female negative bias for legal outcome. In comparison, the "Pleasant vs. Unpleasant" shows a positive female bias. In essence, positive sentiment does not necessarily relate to a positive outcome.
Racial Effects: As with the gender tests, we create both legal specific target and attribute word lists. Figure 3 shows the results for the surname-based racial bias experiments. The surname tests demonstrate a large difference in effect size between the "Pleasant vs. Unpleasant" sentiment and the Legal attributes. While the Hispanic surname tests only show a small negative effect for general sentiment, both legal specific tests showed a large negative effect. The Asian Pacific Islander results provided an even greater disparity in results than the Hispanic surname test. Although the "Pleasant vs. Unpleasant" test showed a large positive bias for Asian Pacific Islanders, the "Positive vs. Negative Legal" test showed a large negative bias for Asian Pacific Islanders. In essence, positive stereotypes do not necessarily translate to group fairness in a legal context.
Temporal Effects
Given that the opinion corpus used to train the word embeddings contains opinions dating back to 1650, one possibility is that the observed gender and racial biases are driven by the inclusion of opinions from time periods where these biases were even more explicit and prevalent than in modern society. Our interest in the temporal component of these biases is primarily focused on representational harms that could be caused by NLP systems that may use representations similar to these in legal technology applications rather than in a historical analysis measuring the amount of bias present in any given time period.
To investigate the effect of inclusion of historical opinions on the biases encoded in these representations, we trained word embeddings on temporal subsets of the corpora. These subsets were created by always including modern opinions, but varying the year of cutoffs for inclusion of historical data to incorporate older opinions into the corpus. The year cutoffs we selected were as follows: 2000-2020 (last 20 years), 1980-2020 (last 40 years), 1968-2020 (Post Civil Rights Act), 1954-2020 (Post Brown v. Board of Education), 1930-2020 (Post Great Depression), 1896-2020 (Post Plessy v. Ferguson), and 1865-2020 (Post Civil War). We then applied the legal-adapted WEAT analyses previously described to the embeddings generated for each temporal cutoff. The results of these analyses are shown in Figures 4 and 5 for racial and gender bias WEATs respectively.
In both the gender and racial temporal analyses, it is clear that the biases previously observed in the embeddings trained on the full corpus of judicial opinions were not primarily the result of the inclusion of historical data. For the racial bias tests, we observed WEAT scores with moderate to large effect sizes indicative of negative racial bias in both the positive/negative legal WEAT and the grant/deny WEAT at all time periods for African American and Hispanic surnames as compared to European surnames. The bias effect sizes decreased slightly as less historical data was included for the positive/negative legal attribute WEATs, but remained relatively constant in the grant/deny WEAT. For Asian and Pacific Islander surnames as compared to European surnames, we observed the same pattern of negative biases that decrease slightly over time in the positive/negative legal attribute WEATs.
Since the gender-career bias was the strongest observed in our replication of the original WEAT (see Table 2), we also performed a temporal analysis of gender-career bias using both the original career and family terms from Caliskan et al. and the expanded set previously described. Figure 6 shows that the gender career bias was observed at all time periods and across all gender target types. This effect was extremely strong for the given name based measures and moderately strong for the male/female terms.
Topical Effects
In this section, we further investigate how gender and racial biases change when we only consider cases pertaining to a specific legal topic (for results related to racial biases, see the Supplementary Material¹).

[Figure 6: Temporal WEAT scores for gender-related targets and career/family attributes; panels include "Career vs Family - Extended", with date cutoffs from 1860 to 2000 on the horizontal axis.]

To categorize the documents in our dataset, we rely on the seven main divisions of law provided by the "West's Analysis of American Law" guide (Thomson Reuters Westlaw 2013). We define topical areas for each opinion using the Key Number classification for the headnotes written for the opinion. The seven main categories are contracts, crimes, government, persons, property, remedies, and torts. This guide also provides more granular subdivisions of the main topics; however, for our experiments here we only focus on the main divisions to capture the overall effects observed under each category. To make sure the analyses are not affected by a small sample size, while preparing the dataset for each legal category we removed the words in the target and attribute lists that have a frequency of less than 30 occurrences in the corresponding sub-corpus (see "Embedding Training" in "Proposed Approach").

Figures 7-9 illustrate the results of some of these tests (see the Supplementary Material for the results of more experiments). Figure 7 shows the results of three tests where the target list is "Male vs. Female Terms". We observe that the breakdown of the documents by their legal topic in the case of the "Positive vs. Negative Legal" attribute list reveals strong biases in two categories: crimes and property. On the other hand, in the case of the "Grant vs. Deny" attribute list we observe a significant bias in all legal topics except for crimes. Finally, as mentioned in the previous results, there exists significant bias in the case of the "Expanded Career vs. Family" attribute list (see similar results for the "Career vs. Family (Caliskan)" attribute list in the Supplementary Material), and this bias is consistent in terms of its large magnitude across all the different legal topics.

Figures 8 and 9 illustrate the results of four tests comparing the detected gender bias for the "Male vs. Female (Caliskan)" attribute lists (Figure 8) as a baseline against the legally adapted "Male vs. Female Judge Given Name" lists (Figure 9). These results demonstrate that choosing the legally adapted target lists reveals a different type (i.e., sign of the effect size) and magnitude of bias for each legal topic. Observe, for example, that in the case of "Positive vs. Negative Legal" the magnitude of the effect size of the legally adapted lists is smaller compared to the baseline for topics such as property and remedies, and larger for other topics such as crimes, persons, and torts. We also observe a consistently larger effect size in the case of "Grant vs. Deny" for the legally adapted lists compared to the baseline.
Conclusions and Discussion
In this article we proposed a legally adapted approach for identifying gender and racial biases that are encoded in the word embeddings trained on the text of legal opinions from U.S. case law. This approach considers specific idioms used in legal language and also adapts the general bias detection WEAT method to legal language. The experiments designed in this work demonstrate the importance of domain adaptation for bias detection methods. If general purpose bias identification methods are used to measure gender and racial biases in word embeddings in the legal domain or other domains with specialized vocabularies, the developers of these systems may inadvertently create NLP systems that replicate or even amplify these biases in the world even after trying to screen their word embeddings for potential biases.
Using domain adapted bias detection methods is also important for evaluating the effectiveness of any potential mitigation strategy. We showed that using a date cut off is not an effective strategy for mitigating gender or racial biases present in the legal opinions even though societal opin- ions regarding these issues have changed over time. Our results also demonstrate that gender-career bias is particularly strong for given names in this domain, suggesting that downstream legal NLP systems that operate on these representations (e.g., coreference resolution) may be particularly likely to make biased predictions. Furthermore, we showed that analyzing the bias across different legal topics not only reveals different types of bias but also signifies the need for evaluating the system for fairness under different topics.
Future work in this area should also focus on the downstream effects exhibited by predictive systems that take biased representations as input as well as the effects any mitigation strategies have on these predictions. This work examines only biases in the representations themselves but the way that these biases could potentially cause harm in society is when they are used to make predictions that may be biased and the results of these predictions are displayed to users. The exact nature of the potential harms caused would depend on the specific application, but biased predictions made by these systems could be particularly harmful in contexts where users are not directly viewing the text these models are trained upon but instead are viewing aggregated predictions or summaries of results across many cases.
For example, if a motion outcome prediction system operating on racially biased word representations were deployed within a particularly diverse jurisdiction, it could undercount the number of successful motions as compared to model performance in a less diverse jurisdiction. Attorneys representing clients in this jurisdiction might then be less likely to believe that a potential motion in a client's case would succeed based on a summary of historical outcomes, and could suggest settlement in scenarios where they would have proposed continuing with the motion if the model had provided a more accurate prediction of outcomes within their jurisdiction. Under-representation of counter-stereotypical scenarios in legal research systems due to biases in predictive models operating on biased representations could ultimately contribute to degradation in the quality of legal representation or increased costs related to additional time required for legal research for individuals in protected classes.
Ethical Statement
This paper leveraged identity characteristics from the U.S. Census and a judicial biographical database to create target lists for the WEAT test. This work examined group fairness for both race and gender in word embeddings built from judicial opinions. While the aim of this work is to measure these potentially harmful representational biases in order to facilitate the creation of mitigation strategies for legal NLP systems that take these types of representations as input, the work could also be used to build intentionally harmful or biased legal NLP tools. Unfortunately, blindness itself leads to unfairness and we need to better understand the impact of stereotypes on legal decisions made by the judiciary (Nielsen 2020).
Legal (Motion) Outcome
The following attribute lists were derived from disposition terms for legal motions and appeals:
Grant: grant, grants, granting, granted, accept, accepts, accepted, accepting, affirm, affirms, affirmed, affirming, approve, approves, approved, approving, sustain, sustains, sustained, sustaining
Deny: deny, denies, denying, denied, decline, declines, declined, declining, vacate, vacates, vacated, vacating, overrule, overrules, overruled, overruling
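For concreteness, the WEAT statistic that these target and attribute lists feed into can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: vec is assumed to be a dictionary mapping tokens to numpy vectors (e.g., from the Legal Opinion Word2Vec model), and the sample standard deviation (ddof=1) is an assumption, since the paper does not specify the normalization.

```python
import numpy as np

def cosine(u, v):
    # Cosine similarity between two word vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, vec):
    # s(w, A, B): mean similarity of word w to attribute set A
    # minus its mean similarity to attribute set B.
    return (np.mean([cosine(vec[w], vec[a]) for a in A])
            - np.mean([cosine(vec[w], vec[b]) for b in B]))

def weat_effect_size(X, Y, A, B, vec):
    # Cohen's d over the two target sets X and Y, following
    # Caliskan, Bryson, and Narayanan (2017).
    sx = [association(x, A, B, vec) for x in X]
    sy = [association(y, A, B, vec) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy, ddof=1)
```

Here X and Y would be a pair of target lists (e.g., male vs. female terms) and A and B a pair of attribute lists (e.g., the Grant and Deny lists above).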
Expanded Career vs. Family
The attribute lists were derived from Rice and Zorn (2021)-style seed term queries against the Legal Opinion Corpus embeddings; a sketch of one possible expansion procedure appears after the lists below. The seed terms from WEAT 6 in Caliskan, Bryson, and Narayanan (2017) are as follows:
• Career Seeds: executive, management, professional, corporation, salary, office, business, career.
• Family Seeds: home, parents, children, family, cousins, marriage, wedding, relatives.
The query results as well as the terms excluded after manual review are as follows:
Expanded Career: executive, chief-executive, managerial, salaried, vice-president, salary, operations, operational, Chief-Executive, corporate, CEO, director, COO, president, CFO, management, revenue, board-of-directors, Corporate, chairman, Chief-Financial, hiring-and-firing, organizational-structure, Vice-President, salaries, senior-vice, managing, President, Executive-Vice, corporation, Executive, Chief-Operating, personnel, payroll, Marketing, executives, fiscal, Board-of-Directors, subsidiary, functions, regulatory, automation, duties, managers, commissions, delegated, clerical, assistant, marketing, employing, comptroller, Senior-Vice, vice-presidents, Managing-Director, entity, oversight, audit, Delaware-corporation, departments, usurping, supervisory, President-and-CEO, executive-branch, wholly-owned-subsidiary, offices, Operations, engineering, affiliate, private-sector, performs, demoting, directors, professional, shareholder, competitive, company, reorganizing, Professional, usurpation, secretary-treasurer, revenues, annual-salary, engineer, manager, human-resource, directorship, foreign-corporation, internal, profitability, Comptroller, operating, engineers, Management, promotion, forecasting, restructuring, auditors, competitor, branch, policymaking
Expanded Family: cousins, grandparents, aunts, grandmother, stepmother, aunt, aunt-and-uncle, paternal-grandparents, siblings, mother, maternal-grandfather, maternal-grandparents, sisters, maternal-grandmother, stepfather, children, paternal-grandmother, paternal-grandfather, stepchildren, uncles, daughters, maternal-relatives, godmother, maternal-uncle, relatives, daughter, granddaughters, maternal-aunt, paternal-aunt, paternal, parents, grandchildren, youngest, eldest, uncle, cousin, grandchild, granddaughter, niece, twins, nieces, younger-brother, father, youngest-child, younger-siblings, fiance, Aunt, grandson, brothers-and-sisters, stepbrother, boyfriend, childrens, estranged, sister, fiancee, Grandmother, grandsons, older-siblings, nephews, mothers, stepsister, minor-children, nieces-and-nephews, grandfather, biological-parents, boyfriends, reunited, stepson, maternal, paramour, foster-parents, son, Grandparents, adoptive, teenage, sons, Daughter, stepdaughters, minor-child, girlfriend, biological-father, married, stepdaughter, loved, nephew, adoptive-parents, friends, out-of-wedlock, childless, girls, wedding, kin, girlfriends, loving, teenagers, teenaged, roommates, mom-and-dad
Expanded Family Excluded: Brianna, Tabitha
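The expansion described above could be approximated with a nearest-neighbor query against the trained embeddings. This is a hedged sketch, not the exact Rice and Zorn (2021) procedure: model is a hypothetical gensim KeyedVectors object for the Legal Opinion Corpus, and topn=100 is an illustrative cutoff, not the paper's parameter.

```python
from gensim.models import KeyedVectors

def expand_seeds(model: KeyedVectors, seeds, topn=100):
    # Pool the nearest neighbors of every seed term in the embedding
    # space; seeds missing from the vocabulary are skipped. Manual
    # review (as in the paper) would then prune spurious hits such
    # as the excluded given names above.
    expanded = set(seeds)
    for seed in seeds:
        if seed in model.key_to_index:
            expanded.update(word for word, _ in model.most_similar(seed, topn=topn))
    return sorted(expanded)

# Example (hypothetical): expand the Caliskan career seeds.
# career_terms = expand_seeds(model, ["executive", "management",
#                                     "professional", "corporation"])
```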
Surnames by Race
The following surname lists were sampled from the 2010 US Census. As noted in the paper, each name was required to appear in at least 300 opinions, except for the Native American and Alaskan Native list (see Table 1).
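The minimum-case threshold described above might be implemented as a simple document-frequency filter. A minimal sketch, assuming opinions is an iterable of tokenized opinions and names is a candidate surname list drawn from the Census data; both names are hypothetical.

```python
from collections import Counter

def filter_by_document_frequency(names, opinions, min_cases=300):
    # Count the number of distinct opinions each candidate surname
    # occurs in, then keep only the names meeting the threshold.
    name_set = set(names)
    doc_freq = Counter()
    for tokens in opinions:
        for name in name_set.intersection(tokens):
            doc_freq[name] += 1
    return [name for name in names if doc_freq[name] >= min_cases]
```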
Temporal Effects: More Experiments
Here we provide the results of more experiments in our temporal study. Figures 10 and 11 show the results of temporal WEATs using the Caliskan, Bryson, and Narayanan (2017) pleasant/unpleasant attributes with race surname and gender targets, respectively.
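One way to produce such per-period scores is to train a separate embedding on each time slice and evaluate the same WEAT on each. A sketch under stated assumptions: opinions_by_period is a hypothetical mapping from a date-range label to tokenized opinions, the Word2Vec hyperparameters are illustrative rather than the paper's, and weat_effect_size is the function sketched earlier in this appendix.

```python
from gensim.models import Word2Vec

def temporal_weat(opinions_by_period, X, Y, A, B):
    # Train one embedding per time slice, then score each slice
    # with the WEAT effect size sketched earlier.
    scores = {}
    for period, sentences in opinions_by_period.items():
        model = Word2Vec(sentences, vector_size=200, window=5,
                         min_count=10, workers=4)
        vocab = model.wv.key_to_index
        vec = {w: model.wv[w] for w in set(X + Y + A + B) if w in vocab}
        keep = lambda words: [w for w in words if w in vec]
        scores[period] = weat_effect_size(keep(X), keep(Y),
                                          keep(A), keep(B), vec)
    return scores
```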
Topical Effects: More Experiments
Here we provide the results of more experiments in our topical study. Figure 12 illustrates the results of six tests comparing the detected gender bias for the baseline "Male vs. Female (Caliskan)" target list against the legally adapted "Male vs. Female Judge Given Name" list; these complement the results in the "Topical Effects" section. Figure 13 shows the results of two tests where the target list is "Male vs. Female Terms". Breaking the documents down by legal topic, the "Pleasant3 vs. Unpleasant3" attribute list (from Caliskan) reveals strong bias in only one category, property, while every legal topic shows a strong bias for the "Career vs. Family" attribute list (from Caliskan), similar to the expanded list in the "Topical Effects" section. Figures 14-17 illustrate the results of WEAT tests on the racial last name target lists; they reveal different types and magnitudes of detected bias across racial groups and topics of law.
Figure 1: Legal Opinion Co-referencing Example
Figure 2: Corpus Prep & Embedding Generation
Figure 3: Surname WEAT Cohen's effect sizes
Figure 4: Temporal WEAT scores (caption truncated)
Figure 5: Temporal WEAT scores for gender-related targets and legal attributes. Only some time periods were observed to have a moderate negative bias on the grant/deny WEAT, with the strongest observed bias being in the embedding trained on opinions from 2000-2020.
Figure 6: Panel labels: Male vs. Female Given Names - Judges; Male vs. Female Given Names - Caliskan
Figure 7: WEAT Cohen's effect sizes for "Male vs. Female Terms" target list. Different attribute lists are shown in different colors.
Figure 8: WEAT Cohen's effect sizes for "Male vs. Female (Caliskan)" target list. Different attribute lists are shown in different colors.
Figure 9: WEAT Cohen's effect sizes for "Male vs. Female Judge Given Name" target list. Different attribute lists are shown in different colors.
Figure 10: Temporal WEAT scores for race targets (surnames) and pleasant/unpleasant attributes from Caliskan, Bryson, and Narayanan (2017)
Figure 11: Temporal WEAT scores for gender-related targets and pleasant/unpleasant attributes from Caliskan, Bryson, and Narayanan (2017)
Figure 12: WEAT Cohen's effect sizes for "Male vs. Female (Caliskan)" target list on the left and "Male vs. Female Judge Given Name" target list on the right. Different attribute lists (i.e., "Pleasant3 vs. Unpleasant3", "Career vs. Family", and "Expanded Career vs. Family") are shown in different colors.
Figure 13: WEAT Cohen's effect sizes for "Male vs. Female Terms" target list. Different attribute lists (i.e., "Pleasant3 vs. Unpleasant3" and "Career vs. Family") are shown in different colors.
Figure 14: WEAT Cohen's effect sizes for "European vs. African American Last Names" target list. Different attribute lists are shown in different colors.
Figure 15: WEAT Cohen's effect sizes for "European vs. Hispanic Last Names" target list. Different attribute lists are shown in different colors.
Figure 16: WEAT Cohen's effect sizes for "European vs. Asian Pacific Island Last Names" target list. Different attribute lists are shown in different colors.
Figure 17: WEAT Cohen's effect sizes for "European vs. Native American Last Names" target list. Different attribute lists are shown in different colors.
Male vs. Female Terms: A similar problem exists with the gender tests based on given names, since individuals are often referred to primarily by their surnames throughout legal opinions. To address this issue, we created a list of gendered pronouns and common gendered nouns (e.g., man/woman) for use in gender bias WEATs.

Table 1: Surname Lists by Race

Group                        Sample Size   Min. Cases
European                     46-200        300
African American             164           300
Hispanic                     200           300
Asian Pacific Islander       200           300
Native American / Alaskan    46            30
Table 2: Cohen's effect size (d) comparison between Common Crawl GloVe (d_C) and Legal Opinion Word2Vec (d_L)

Test                               d        error
Male vs. Female Terms
  Pleasant vs. Unpleasant          -0.197   0.009
  Positive vs. Negative Legal       0.089   0.007
  Grant vs. Deny                    0.457   0.008
Male vs. Female Names (Judges)
  Pleasant vs. Unpleasant          -0.495   0.008
  Positive vs. Negative Legal      -0.254   0.003
  Grant vs. Deny                    0.603   0.007
Male vs. Female Names (Caliskan)
  Pleasant vs. Unpleasant           0.208   0.013
  Positive vs. Negative Legal      -0.198   0.003
  Grant vs. Deny                    0.506   0.009

Table 3: Cohen's effect size (d) for gender-specific tests on the Legal Opinion Corpus
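For reference, the effect size d reported in these tables is the standard WEAT statistic of Caliskan, Bryson, and Narayanan (2017), computed over target sets X, Y and attribute sets A, B:

```latex
% Per-word association with attribute sets A and B:
s(w, A, B) = \mathrm{mean}_{a \in A}\, \cos(\vec{w}, \vec{a})
           - \mathrm{mean}_{b \in B}\, \cos(\vec{w}, \vec{b})
% Effect size over target sets X and Y:
d = \frac{\mathrm{mean}_{x \in X}\, s(x, A, B) - \mathrm{mean}_{y \in Y}\, s(y, A, B)}
         {\mathrm{std}_{w \in X \cup Y}\, s(w, A, B)}
```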
See the Supplementary Material for excluded terms: https://arxiv.org/abs/2203.13369
The Cohen's d effect size ranges from -2.0 to 2.0, with ±0.5 representing a medium effect.
Acknowledgments
We would like to thank Frank Schilder, Brian Romer, and Nadja Herger for their guidance and support.
References

Bolukbasi, T.; Chang, K.-W.; Zou, J.; Saligrama, V.; and Kalai, A. 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Proceedings of the 30th International Conference on Neural Information Processing Systems (NIPS'16), 4356-4364. Red Hook, NY, USA: Curran Associates Inc. ISBN 978-1-5108-3881-9.

Bouma, G. 2009. Normalized (pointwise) mutual information in collocation extraction. In Proceedings of the GSCL, 31-40. Tübingen, Germany: Gunter Narr Verlag.

Caliskan, A.; Bryson, J. J.; and Narayanan, A. 2017. Semantics derived automatically from language corpora contain human-like biases. Science, 356(6334): 183-186.

Dieterich, W.; Mendoza, C.; and Brennan, T. 2016. COMPAS risk scales: Demonstrating accuracy equity and predictive parity. Northpoint Inc, 7(7.4): 1.

Federal Judicial Center. 2012. Biographical Directory of Article III Federal Judges: Export.

Free Law Project. 2021. Court Listener: Bulk Judicial Database Files. https://www.courtlistener.com/api/bulk-data/people/all.tar.gz. Accessed: 2021-05-03.

Greenwald, A. G.; McGhee, D. E.; and Schwartz, J. L. 1998. Measuring individual differences in implicit cognition: The implicit association test. Journal of Personality and Social Psychology, 74(6): 1464.

Kallus, N.; Mao, X.; and Zhou, A. 2020. Assessing algorithmic fairness with unobserved protected class using data combination. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20), 110. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-6936-7.

Lambrecht, A.; and Tucker, C. 2019. Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Management Science, 65(7): 2966-2981.

Mikolov, T.; Chen, K.; Corrado, G.; and Dean, J. 2013a. Efficient Estimation of Word Representations in Vector Space. arXiv:1301.3781.

Mikolov, T.; Sutskever, I.; Chen, K.; Corrado, G.; and Dean, J. 2013b. Distributed Representations of Words and Phrases and their Compositionality. arXiv:1310.4546.

Mozafari, M.; Farahbakhsh, R.; and Crespi, N. 2020. Hate speech detection and racial bias mitigation in social media based on BERT model. PLOS ONE, 15(8): 1-26.

Nielsen, A. 2020. Practical Fairness: Achieving Fair and Secure Data Models. O'Reilly Media, Incorporated. ISBN 978-1-4920-7573-8.

Obermeyer, Z.; Powers, B.; Vogeli, C.; and Mullainathan, S. 2019. Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464): 447-453.

Pennington, J.; Socher, R.; and Manning, C. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), 1532-1543. Doha, Qatar: Association for Computational Linguistics.

Rice, D.; Rhodes, J. H.; and Nteta, T. 2019. Racial bias in legal language. Research & Politics, 6(2): 2053168019848930. SAGE Publications Ltd.

Rice, D.; and Zorn, C. 2019. Replication Data for: "Corpus-Based Dictionaries for Sentiment Analysis of Specialized Vocabularies". Harvard Dataverse, V1. https://doi.org/10.7910/DVN/4EKHFM.

Rice, D. R.; and Zorn, C. 2021. Corpus-based dictionaries for sentiment analysis of specialized vocabularies. Political Science Research and Methods, 9(1): 20-35.

Suresh, H.; and Guttag, J. V. 2020. A Framework for Understanding Unintended Consequences of Machine Learning. arXiv:1901.10002.

Thomson Reuters Westlaw, ed. 2013. West's Analysis of American Law. Westlaw.

Vacek, T.; Song, D.; Molina-Salgado, H.; Teo, R.; Cowling, C.; and Schilder, F. 2019. Litigation Analytics: Extracting and querying motions and orders from US federal courts. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics (Demonstrations), 116-121. Minneapolis, Minnesota: Association for Computational Linguistics.
European Last Names: Rae, Crisco, Deeley, Bjorklund, Mardian, Aloi, Loewen, Schuster, Engelmann, Ulery, Fiorenzo, Buis, Haycock, Hickory, Hudelson, Lembke, Milbauer, Heffelfinger, Gribble, Mahler, Balestra, Lutz, Brincat, Kandel, Dileo, Marter, Frymire, Nielson, Sirota, Callison, Boydston, Yeager, Dressler, Sachs, Guhl, Hufnagle, Warshaw, Spodek, Saporito, Hegel, Borgeson, Hogland, Balick, Rinke, Stunkard, Hegstrom, Donahey, Mastronardi, Zweig, Kleban, Ineichen, Tunheim, Gudgel, Rhomberg, Rohrs, Henslee, Lobdell, Prins, Mohr, Gillson, Simoneaux, Wetherington, Avers, Paine, Piesco, Serota, Hottenstein, Moskowitz, Torkelson, Solly, Scovel, Goerke, Nemecek, Scruton, Montesano, Dekker, Dray, Toomey, Lamson, Caffrey, Pingree, Milos, Offill, Kralik, Roley, Grabowski, Inglese, Barstad, Sestito, Estey, Englander, Hirshfield, Karibian, Secor, Arrowood, Ludtke, Wenner, Silverberg, Klinck, Coxon, Raborn, Kraushaar, Creely, Pellerito, Kiker, Tallman, Spath, Slee, Riedlinger, Lisle, Kleinschmidt, Abbatiello, Zagar, Farquhar, Hudlow, Aulicino, Verni, Caney, Latona, Leif, Sommers, Melear, Hardt, Filippo, Ollis, Cassano, Giaccio, Rosenman, Longval, Moeser, Agosti, Malony, Sayer, Caswell, Borowsky, Steffey, Dreyer, Thorman, Halferty, Fridley, Berwald, Tyndall, Formby, Famolare, Winkle, Devall, Severtson, Cloutier, Brindley, Betz, Leonardi, Goetzman, Kraemer, Fronk, Trafford, Setter, Giuliano, Guilmette, Conkwright, Ramstad, Cathell, Sundheim, Ebert, Vigliotti
African American Last Names: Pettaway, Jessamy, Ephriam, Sinkfield, Senegal, Pondexter, Minnifield, Bendolph, Osagie, Okeke, Boateng, Okoro, Mensah, Cephas, Claybrooks, Vaughns, Hardnett, Cephus, Whack, Ndiaye, Kennebrew, Owusu, Madyun, Bangura, Acoff, Hameen, Chukwu, Conteh, Malveaux, Philmore, Dumpson, Marbley, Ojo, Golphin, Mems, Mercadel, Akande, Narcisse, Knowlin, Wigfall, Lavalais, Sinegal, Lucious, Gaitor, Hargro, Idowu, Torain, Tresvant, Adeniji, Eleby, Bluitt, Luvene, Broaden, Opoku, Addo, Lawal, Shabazz, Ajayi, Bloodsaw, Grandberry, Roulhac, Bodison, Asante, Ducksworth, Killings, Honora, Glasper, Twymon, Poullard, Adu, Hypolite, Whitsey, Beyah, Adeleke, Wrice, Madu, Glinsey, Teamer, Earvin, Wrighten, Broadnax, Sails, Nwachukwu, Gadsden, Cudjoe, Jubilee, Osei, Taiwo, Smalls, Wimes, Salaam, Gadsen, Batiste, Prioleau, Chatmon, Anyanwu, Stepney, Woodfolk, Okafor, Blige, Menefield, Tukes, Okoli, Adeyemi, Lately, Tolefree, Geathers, Presha, Arvie, Fluellen, Ofori, Bacote, Seabrooks, Outing, Wysinger, Manigault, Diallo, Expose, Yeboah, Gabbidon, Baymon, Balogun, Haynesworth, Snype, Ancrum, Nutall, Pinkins, Peguese, Okoye, Boykins, Aytch, Ravenell, Hugee, Afriyie, Shelvin, Darensburg, Winbush, Veasley, Macharia, Straughter, Villery, Tasby, Hezekiah, Neverson, Blakes, Petties, Yelder, Contee, Holiness, Goffney, Degrate, Akpan, Junious, Leffall, Stanciel, Jiggetts, Dunkins, Gadson, Summage, Smokes, Cooperwood, Poitier, Eze, Taybron
Hispanic Last Names: Montes, Ocana, Sanabria, Magallon, Bejarano, Camarillo, Fierros, Oviedo, Guevara, Melendrez, Becerril, Osorio, Reynoso, Villasenor, Zepeda, Gastelo, Zacarias, Pomales, Montelongo, Galeana, Mazariegos, Abrego, Garfias, Palacios, Zorrilla, Oquendo, Recinos, Alderete, Iraheta, Zurita, Delgadillo, Aleman, Saldivar, Mendieta, Miramontes, Tellez, Inzunza, Escobar, Cuadrado, Beltre, Penaloza, Coreas, Cardena, Villalba, Rubalcaba, Rizo, Taveras, Echeverry, Medina, Batres, Vences, Carmona, Matamoros, Lazcano, Bencomo, Lizarraga, Alvarenga, Costilla, Preciado, Segovia, Villeda, Aparicio, Yanez, Callejas, Salinas, Estrada, Pulido, Botello, Magdaleno, Cobian, Govea, Medellin, Escobedo, Nava, Conejo, Reynosa, Ascencio, Guajardo, Cardona, Saenz, Santoyo, Galvan, Baez, Adames, Benitez, Sauceda, Cerda, Loaiza, Veliz, Zamorano, Valadez, Pelayo, Veloz, Navarrete, Manjarrez, Polanco, Basurto, Herrera, Espinoza, Obeso, Arzola, Sarabia, Perdomo, Rubio, Ovalle, Arrellano, Rengifo, Jasso, Lombera, Pantoja, Cobos, Sosa, Sanchez, Berroa, Montalvo, Mejia, Alcala, Huerta, Chavez, Solorzano, Olmedo, Morfin, Bastidas, Terrones, Alverio, Giraldo, Mayorga, Lagunas, Lozoya, Olivas, Aguilar, Placencia, Brizuela, Calvillo, Rosalio, Guebara, Carrasco, Germosen, Urias, Olivarez, Cervantes, Zamudio, Banales, Liranzo, Ibarra, Flores, Alvarez, Gonzalez, Cedillo, Altamirano, Galaviz, Villagra, Barrientos, Campuzano, Zuniga, Robledo, Yepez, Cadena, Vargas, Ovando, Genao, Hermosillo, Alatorre, Morales, Mireles, Lemus, Nogueras, Posada, Tapia, Aldaco, Oropeza, Fragozo, Puerta, Pizano, Beniquez, Astorga, Jerez, Reynaga, Rivas, Ortega, Murillo, Colmenares, Limon, Amezquita, Chairez, Mariscal, Abarca, Nuno, Ortuno, Carranza, Aceves, Rincon, Zamora, Mosqueda, Cornejo, Arciniega, Retana, Camarena, Londono, Tamez
Asian Pacific Island Last Names: Chung, Saxena, Doshi, Tsui, Vue, Mehta, Ryu, Lu, Luk, Lui, Hyun, Nam, Sung, Wang, Chan, Yi, Chon, Liew, Lieu, Shih, Tan, Tuan, Hsieh, Huang, Panchal, Parekh, Parikh, Ravi, Bhatt, Hoang, Ma, Diep, Nghiem, Shukla, Kwok, Tian, Kao, Jia, Mao, Pathak, Lor, Thi, Bae, Manoharan, Rajesh, Shin, Jeong, Yim, Satish, Huong, Rong, Hou, Qian, Choi, Ou, Cheng, Vitug, Giang, Hu, Moua, Tse, Iyer, Kwon, Garg, Lai, Luong, Saechao, Quach, Kuang, Jie, Thanh, Ip, Suh, Guo, Vuong, Dinh, Li, Tam, Pak, Ng, Le, Truong, Huynh, Bui, Chuang, Duong, Gautam, Thakkar, Cao, Vu, Ho, Vang, Leang, Yuan, Mui, Song, Ye, Eun, Agrawal, Ha, Keung, Desai, Yum, Kang, Chae, Chiang, Bansal, Shen, Teng, Szeto, Dao, Phong, Jin, Cho, Ding, Agarwal, Sundaram, Kwan, Kulkarni, Aggarwal, Shu, Bhatnagar, Thao, Jang, Sanghera, Hwa, Phu, Shetty, Hsu, Srinivasan, Gandhi, Tsai, Wei, Thang, Yue, Leung, Yin, Choe, Kuo, Poon, Gu, Naik, Chui, Tsang, Tang, Deng, Pei, Chen, Wen, Nguyen, Bhakta, Chuan, Chua, Hwang, Saelee, Phan, Zhou, Han, Tsao, Chu, Xu, Tseng, Vo, Chih, Luu, Su, Nanavati, Goyal, Pham, Kyung, Patel, Trinh, Chau, Zhang, Yu, Kyong, Gupta, Fu, Won, Dang, Goswami, Trivedi, Cai, Ly, Kothari, Trung, Yun, Khurana, Zhao, Adusumilli, Liang, Vyas, Seung, Xiong, Seo, Xiao, Zhu, Liao, Yeh, Pandya
Native American Last Names: Whiteface, Madplume, Stillday, Cheromiah, Denetclaw, Blackbear, Yellowhair, Begaye, Tsosie, Etsitty, Yepa, Greyeyes, Youngbird, Cowboy, Manygoats, Neztsosie, Quiver, Yazzie, Halona, Calabaza, Blackhorse, Whiteplume, Youngbear, Manuelito, Peshlakai, Haskie, Atcitty, Becenti, Spoonhunter, Peneaux, Kingbird, Benally, Bluebird, Tsinnijinnie, Wassillie, Nez, Hosteen, Kameroff, Zunie, Ganadonegro, Laughing, Chischilly, Fasthorse, Wauneka, Bedonie, Goldtooth
Male Terms: male, males, man, men, boy, boys, he, him, his, himself
Female Terms: female, females, woman, women, girl, girls, she, her, hers, herself
Judge Given Names
The following given name lists were sampled from a Judicial biographical database (Federal Judicial Center 2012).
Male Judge Given Name: Dennis, Joe, Howard, Stanley, Daniel, Anthony, Bernard, William, Harold, Raymond, Kenneth, Samuel, Carl, Brian, Sidney, Roger, Alfred, Horace, Vincent, Eric, Douglas, David, Richard, Larry, Andrew, Herbert, Benjamin, Steven, Walter, Warren, Timothy, Charles, Tom, Jon, Kevin, Maurice, Allen, Earl, Henry, Terry, Matthew, Jerry, Gregory, Leonard, Arthur, Frank, Fred, Ralph, Edwin, James, Sam, Jeffrey, Scott, Robert, George, Harry, Alexander, Albert, Gary, John, Ernest, Mark, Jesse, Peter, Clarence, Eugene, Joseph, Marvin, Hugh, Michael, Francis, Donald, Nicholas, Stephen, Paul, Christopher
Female Judge Given Name: Alice, Amy, Ann, Anna, Anne, Barbara, Beth, Brenda, Carmen, Carol, Carolyn, Catherine, Cathy, Cheryl, Christine, Cynthia, Deborah, Debra, Denise, Diana, Diane, Donna, Elizabeth, Ellen, Helen, Holly, Jacqueline, Jane, Janet, Janice, Jennifer, Jill, Joan, Judith, Julie, Karen, Katherine, Kathleen, Kimberly, Laura, Laurie, Linda, Lisa, Lori, Louise, Marcia, Margaret, Maria, Marilyn, Marsha, Martha, Mary, Maureen, Michelle, Nancy, Pamela, Patricia, Paula, Phyllis, Rebecca, Robin, Rosemary, Ruth, Sandra, Sara, Sarah, Sharon, Shirley, Stephanie, Sue, Susan, Suzanne, Teresa, Vanessa, Victoria, Wendy
| [] |